this post was submitted on 21 Apr 2025
123 points (94.9% liked)

Technology


There are two parallel image channels that dominate our daily visual consumption. In one, there are real pictures and footage of the world as it is: politics, sport, news and entertainment. In the other is AI slop, low-quality content with minimal human input. Some of it is banal and pointless – cartoonish images of celebrities, fantasy landscapes, anthropomorphised animals. And some is a sort of pornified display of women simply … being, like a virtual girlfriend you cannot truly interact with. The range and scale of the content are staggering, and it infiltrates everything from social media timelines to messages circulated on WhatsApp. The result is not just a blurring of reality, but a distortion of it.

A new genre of AI slop is rightwing political fantasy. There are entire YouTube videos of made-up scenarios in which Trump officials prevail against liberal forces. The White House account on X jumped on a trend of creating images in Studio Ghibli style and posted an image of a Dominican woman in tears as she is arrested by Immigration and Customs Enforcement (Ice). AI political memefare has, in fact, gone global. Chinese AI videos mocking overweight US workers on assembly lines after the tariff announcement prompted a question to, and a response from, the White House spokesperson last week. The videos, she said, were made by those who “do not see the potential of the American worker”. And to prove how pervasive AI slop is, I had to triple-check that even that response was not itself quickly cobbled-together AI content fabricating another dunk on Trump’s enemies.

The impulse behind this politicisation of AI is not new; it is simply an extension of traditional propaganda. What is new is how democratised and ubiquitous it has become, and how it involves no real people or the physical constraints of real life, therefore providing an infinite number of fictional scenarios.

[–] reddig33@lemmy.world 5 points 16 hours ago (6 children)

Still waiting on an enterprising social network to start detecting photoshopped and AI-generated images, and badging them as such. If AI can generate this slop, surely it can be used to detect it.

[–] Blaster_M@lemmy.world 12 points 12 hours ago

AI auto-detection is a coin toss at best - and it often hurts real artists and writers when they're hit with false positives, which happens far too often.

People didn't magically forget how to proofread or use Photoshop (or similar tools) when AI generation became popular - the good fakes are just better now. Small "tells" can be fixed in post by anyone who has done "photoshopping" work before... the kind of people who were already making convincing fakes before AI generation. AI just makes it easier and faster if you know how to use inpainting or img2img.

[–] BartyDeCanter@lemmy.sdf.org 26 points 16 hours ago

It's a cat and mouse game, at best. If you have a tool that can reliably detect AI slop, then that tool can be used as part of the training process to fool the detection tool.
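The loop described above can be sketched in a toy form. This is purely an illustration of the dynamic, not any real system: a hypothetical one-feature "detector" and a "generator" with a single learnable bias, where the detector's verdict is fed straight back as a training signal until the output passes.

```python
def detector(sample):
    """Toy detector: flags a sample as 'AI' if its mean deviates from 0.5."""
    mean = sum(sample) / len(sample)
    return abs(mean - 0.5) > 0.1  # True = "looks AI-generated"

def generate(bias):
    """Toy generator: a fixed ramp shifted by one learnable parameter."""
    return [i / 100 * 0.5 + bias for i in range(100)]

bias = 0.5  # initial outputs are reliably flagged by the detector
while detector(generate(bias)):
    bias -= 0.01  # the detector's own verdict is the training signal

print(round(bias, 2), detector(generate(bias)))  # -> 0.35 False
```

Once the generator has access to the detector's output, the detector is no longer an oracle - it has become part of the generator's loss function, which is exactly the cat-and-mouse problem.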

[–] dumbass@leminal.space 5 points 11 hours ago

You can trick those AI image detectors by slightly lowering the image quality.
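That trick works because many detectors key on subtle high-frequency fingerprints that lossy re-encoding destroys. A toy illustration of the idea - the alternating "fingerprint" and the blur standing in for recompression are made up for this sketch, not how real generators or codecs behave:

```python
def fingerprint_score(pixels):
    """Correlate against a hypothetical alternating generator artifact."""
    return abs(sum(p * (1 if i % 2 == 0 else -1)
                   for i, p in enumerate(pixels))) / len(pixels)

def detector(pixels):
    return fingerprint_score(pixels) > 0.5  # True = "AI"

def degrade(pixels):
    """Stand-in for lossy re-encoding: a small blur kills high frequencies."""
    return [(pixels[max(i - 1, 0)] + p + pixels[min(i + 1, len(pixels) - 1)]) / 3
            for i, p in enumerate(pixels)]

# a fake image: smooth ramp plus the generator's alternating artifact
fake = [i / 64 + (1 if i % 2 == 0 else -1) for i in range(256)]
print(detector(fake))           # True: the fingerprint is detected
print(detector(degrade(fake)))  # False: slight quality loss erases the tell
```

The image still looks essentially the same to a human, but the statistical tell the detector relied on is gone.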

[–] CarbonIceDragon@pawb.social 8 points 16 hours ago

For how long though? The issue with detecting AI-generated stuff, I'd imagine, is that a picture contains a finite amount of information, especially a digital one. These things have been improving relatively quickly, and I can't think of any fundamental reason why one could not eventually create images where every pixel is as it would be if that image were real, or at least close enough that detection is not even theoretically possible if you don't have some actual proof that the event depicted couldn't have happened. We may not be there yet, but the closer we get to it, the more error-prone and therefore less useful any detection algorithm must be.

[–] desktop_user@lemmy.blahaj.zone 3 points 14 hours ago

yes, however avoiding that has been part of the training process of many machine learning models for years, as a convenient way of preventing overfitting to the training data.

[–] Rhaedas@fedia.io 3 points 16 hours ago

AI certainly can be a tool to combat it. Some form of watermarking should have been hardcoded into these neural nets well before this became a problem, but with how far it's gone and how many models are already out in the open, it's a bit too late for that remedy.
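For a sense of what pixel-level watermarking means, here is a deliberately naive least-significant-bit sketch (the tag and pixel values are invented for illustration). It also shows why the approach is fragile: a mark like this survives in the pixels but is erased by any recompression.

```python
WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical provenance tag

def embed(pixels, mark=WATERMARK):
    """Write the tag into the least-significant bits of the first pixels."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit  # changes each pixel by at most 1
    return out

def extract(pixels, length=len(WATERMARK)):
    """Read back the least-significant bits."""
    return [p & 1 for p in pixels[:length]]

image = [137, 42, 200, 7, 85, 90, 13, 255, 128, 64]  # toy 8-bit pixel values
tagged = embed(image)
print(extract(tagged) == WATERMARK)  # True: the tag reads back from the pixels
print(extract(image) == WATERMARK)   # False here: untagged LSBs don't match
```

Robust schemes spread the mark across the whole image in ways that survive resizing and re-encoding, which is much harder - and does nothing for the open-weight models already released without it.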

But when tools are put out to detect what is and isn't AI, trust will develop in THOSE AI systems, and then they could be manipulated to claim actual real events aren't true. The real problem is that the humans in all of this from the beginning are losing their ability to critically examine and verify what they're being shown. I.e., people are gullible, always have been to a point, but are at the height now of believing anything they're told without question.