this post was submitted on 21 May 2024
510 points (95.4% liked)

Technology

top 50 comments
[–] DmMacniel@feddit.de 164 points 7 months ago (33 children)

Mhm, I have mixed feelings about this. I know that this entire thing is fucked up, but isn't it better to have generated stuff than actual stuff that involved actual children?

[–] pavnilschanda@lemmy.world 111 points 7 months ago (24 children)

A problem that I see getting brought up is that AI-generated images make it harder to notice photos of actual victims, making it harder to locate and save them.

[–] retrospectology@lemmy.world 71 points 7 months ago (3 children)

The arrest is only a positive. Allowing pedophiles to create AI CP is not a victimless crime. As others point out, it muddies the waters around CP of real children, but it would also potentially give pedophiles easier ways to network in the open (if the images are legal they can easily be platformed and advertised), and networking between abusers absolutely emboldens them and results in more abuse.

As a society we should never allow the normalization of sexualizing children.

[–] nexguy@lemmy.world 39 points 7 months ago (2 children)

Interesting. What do you think about drawn images? Is there a limit to how good the artist can be at drawing/painting? Stick figures vs. lifelike paintings. Interesting line to consider.

[–] retrospectology@lemmy.world 17 points 7 months ago (1 children)

If it was photoreal and difficult to distinguish from real photos? Yes, it's exactly the same.

And even if it's not photorealistic, communities that form around drawn child porn are toxic and dangerous as well. Sexualizing children is something I am 100% against.

[–] NewNewAccount@lemmy.world 28 points 7 months ago (8 children)

"networking between abusers absolutely emboldens them and results in more abuse."

Is this proven or a common sense claim you’re making?

[–] lily33@lemm.ee 18 points 7 months ago* (last edited 6 months ago) (2 children)

Actually, that's not quite as clear.

The conventional wisdom used to be that (normal) porn makes people more likely to commit sexual abuse (in general). Then scientists decided to look into that. Slowly, over time, they've become more and more convinced that the availability of (normal) porn in fact reduces sexual assault.

I don't see an obvious reason why it should be different in the case of CP, now that it can be generated.

[–] Catoblepas@lemmy.blahaj.zone 28 points 7 months ago (3 children)

Did we memory-hole the whole 'known CSAM in the training data' thing that happened a while back? When you're vacuuming up the internet, you're going to wind up with the nasty stuff too. Even if it's not a pixel-by-pixel match of the photo it was trained on, there's a non-zero chance that what it's generating is based on actual CSAM. Which is really just laundering CSAM.

[–] Ragdoll_X@lemmy.world 31 points 7 months ago (1 children)

IIRC it was something like a fraction of a fraction of 1% that was CSAM. The researchers identified the images through their hashes, but the images themselves weren't actually available in the dataset because they had already been removed from the internet.

Still, you could make AI CSAM even if you were 100% sure that none of the training images included it, since that's what these models are made for: combining concepts without needing to have seen them before. If you hold the AI's hand enough with prompt engineering, textual inversion, and img2img, you can get it to generate pretty much anything. That's the power and danger of these things.
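
For anyone curious what that hash-based screening actually looks like, here's a minimal sketch, assuming a plain-text blocklist of known-bad hex digests (the `known_hashes.txt` name and directory layout are hypothetical, and real pipelines typically use perceptual hashes such as PhotoDNA rather than plain SHA-256):

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def screen_dataset(image_dir: str, blocklist_path: str) -> list[Path]:
    """Return every file under image_dir whose digest is on the blocklist."""
    with open(blocklist_path) as f:
        known_bad = {line.strip() for line in f if line.strip()}
    return [
        p for p in Path(image_dir).rglob("*")
        if p.is_file() and sha256_of_file(p) in known_bad
    ]

if __name__ == "__main__":
    # Flag dataset entries that match a shared known-bad hash list.
    for p in screen_dataset("dataset/images", "known_hashes.txt"):
        print(f"flagged for removal: {p}")
```

The key property is that matching happens against digests alone, which is how researchers can flag dataset entries without ever hosting or viewing the images themselves.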

[–] PM_Your_Nudes_Please@lemmy.world 26 points 7 months ago (10 children)

Yeah, it’s very similar to the “is loli porn unethical” debate. No victim, it could supposedly help reduce actual CSAM consumption, etc… But it’s icky, so many people still think it should be illegal.

There are two big differences between AI and loli, though. The first is that AI would supposedly need to be trained on CSAM to be able to generate it, whereas an artist can create loli porn without ever using CSAM references. The second difference is that AI makes it much, much easier for the layman to create passable porn. It doesn’t take years of practice; anyone with a decent GPU can spin up a local instance and be generating within a few hours.

In my mind, the former difference is much more impactful than the latter. AI becoming easier to access is likely inevitable, so combatting it now is likely only delaying the inevitable. But if that AI is trained on CSAM, it is inherently unethical to use.

Whether that makes the porn it generates unethical by extension is still difficult to decide, though. If artists hate AI, then CSAM producers likely do too. Artists are worried AI will put them out of business; couldn’t the same be said of CSAM producers? If AI has the potential to run CSAM producers out of business, it would be a net positive in the long term, even if the images created in the short term are unethical.

[–] Ookami38@sh.itjust.works 23 points 7 months ago (3 children)

Just a point of clarity: an AI model capable of generating CSAM doesn't necessarily have to be trained on CSAM.

[–] helpImTrappedOnline@lemmy.world 151 points 7 months ago* (last edited 7 months ago) (6 children)

The headline/title needs to be extended to include the rest of the sentence

"and then sent them to a minor"

Yes, this sicko needs to be punished. Any attempt to make him the victim of "the big bad government" is manipulative at best.

Edit: made the quote bigger for better visibility.

[–] cley_faye@lemmy.world 49 points 7 months ago

That's a very important distinction. While the first part is, to put it lightly, bad, I don't really care what people do on their own. Getting real people involved, and a minor at that? Big no-no.

[–] DarkThoughts@fedia.io 26 points 7 months ago (1 children)

All LLM headlines are like this to fuel the ongoing hysteria about the tech. It's really annoying.

[–] NeoNachtwaechter@lemmy.world 96 points 7 months ago (12 children)

Bad title.

They caught him not simply for creating pics, but also for trading such pics, etc.

[–] peanuts4life@lemmy.blahaj.zone 61 points 7 months ago (26 children)

It's worth mentioning that in this instance the guy did send porn to a minor. This isn't exactly a cut-and-dried "guy used Stable Diffusion wrong" case. He was distributing it and grooming a kid.

The major concern to me is that there isn't really any guidance from the FBI on what you can and can't do, which may lead to some big issues.

For example, websites like NovelAI make a business out of providing pornographic, anime-style image generation. The models they use are deliberately tuned to provide abstract, "artistic" styles, but they can generate semi-realistic images.

Now, let's say a criminal group uses NovelAI to produce CSAM of real people via the inpainting tools. Let's say the FBI casts a wide net and begins surveillance of NovelAI's userbase.

Is every person who goes on there and types "loli" or "Anya from Spy x Family, realistic, NSFW" (that's an underage character) going to get a letter in the mail from the FBI? I feel like it's within the realm of possibility. What about "teen girls gone wild, NSFW"? Or "young man, no facial or body hair, naked, NSFW"?

This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It's a dangerous mix, and it throws the whole enterprise into question.

[–] SeattleRain@lemmy.world 44 points 7 months ago* (last edited 7 months ago) (1 children)

America has one of the most militantly anti-pedophile cultures in the world, yet it also has far and away some of the highest rates of child sexual assault.

I think AI is going to reveal how deeply hypocritical Americans are on this issue. You have gigantic institutions like churches committing industrial-scale victimization, yet you won't find a tenth of the righteous indignation directed at organized religion, where there is just as much evidence it is happening, as you will find directed at one person producing images that don't actually hurt anyone.

Given the staggering rate of child abuse that occurs in the States, it's pretty clear that Americans are just using child victims as political weapons (it's next to impossible to convincingly fight off pedo accusations if you're being mobbed) and aren't actually interested in fighting pedophilia.

[–] kandoh@reddthat.com 26 points 7 months ago (1 children)

Most states will let grown men marry children as young as 14. There is a special carve-out for Christian pedophiles.

[–] UnpluggedFridge@lemmy.world 41 points 7 months ago (8 children)

These cases are interesting tests of our First Amendment rights. "Real" CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.

Currently, we do not outlaw written depictions or drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions, but also because we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. In the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.

So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for "real" images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?

We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.

A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as "real," we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.

[–] Kedly@lemm.ee 40 points 6 months ago* (last edited 6 months ago) (8 children)

Ah yes, more bait articles rising to the top of Lemmy. The guy was arrested for grooming; he was sending these images to a minor. Outside of Digg, does anyone have any suggestions for an alternative to Lemmy and Reddit? Lemmy's moderation quality is shit, and I think I'm starting to figure out where I lean on the success of my experimental stay with Lemmy.

Edit: Oh god, I actually checked Digg out after posting this, and the site design makes it look like you're scrolling through all of the ads at the bottom of a bullshit clickbait article.

[–] Glass0448@lemmy.today 40 points 7 months ago (27 children)

OMG. Every other post is saying they're disgusted but that the images are a grey area, when he's definitely in trouble for contacting a minor.

Cartoon CSAM is illegal in the United States, and AI-generated CSAM falls into that category. It was illegal for him to make the images in the first place, BEFORE he started sending them to a minor.

https://www.thefederalcriminalattorneys.com/possession-of-lolicon

https://en.wikipedia.org/wiki/PROTECT_Act_of_2003

[–] Madison420@lemmy.world 23 points 7 months ago (7 children)

Yeah, that's toothless. They decided there's no particular way to age a cartoon; the characters could be from another planet and simply seem younger while actually being older.

It's bunk. Let them draw or generate whatever they want; totally fictional events and people are fair game, and quite honestly I'd rather they stay active doing that than get active actually abusing children.

Outlaw shibari and I guarantee you'd have multiple serial killers BTK-ing some unlucky souls.

[–] sugar_in_your_tea@sh.itjust.works 21 points 7 months ago (39 children)

Exactly. If you can't name a victim, it shouldn't be illegal.

[–] surewhynotlem@lemmy.world 17 points 7 months ago (6 children)

Would Lisa Simpson be 8 years old, or 43 because The Simpsons started in 1989?

[–] horncorn@lemmynsfw.com 37 points 7 months ago (19 children)

The article title is a bit misleading. Just glancing through, I see he texted at least one minor about this and distributed those generated pics in a few places. Putting it all together, yeah, the arrest is kind of a no-brainer. The ethics of generating CSAM are pretty much the same as drawing it. Not much we can do about it aside from education.

[–] Greg@lemmy.ca 29 points 7 months ago (9 children)

This is tough; the goal should be to reduce child abuse. It's unknown whether AI-generated CP will increase or reduce child abuse. It will likely encourage some individuals to abuse actual children, while for others it may satisfy their urges so they don't abuse children. Like everything else with AI, we won't know the real impact for many years.

[–] TheObviousSolution@lemm.ee 28 points 6 months ago* (last edited 6 months ago) (6 children)

He then allegedly communicated with a 15-year-old boy, describing his process for creating the images, and sent him several of the AI generated images of minors through Instagram direct messages. In some of the messages, Anderegg told Instagram users that he uses Telegram to distribute AI-generated CSAM. “He actively cultivated an online community of like-minded offenders—through Instagram and Telegram—in which he could show off his obscene depictions of minors and discuss with these other offenders their shared sexual interest in children,” the court records allege. “Put differently, he used these GenAI images to attract other offenders who could normalize and validate his sexual interest in children while simultaneously fueling these offenders’ interest—and his own—in seeing minors being sexually abused.”

I think the fact that he was promoting child sexual abuse, communicating with children, and creating communities to distribute the content is the most damning thing, regardless of people's take on the matter.

Umm... that AI-generated hentai on the page of the same article, though... Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.

[–] crazyminner@lemmy.ml 24 points 7 months ago* (last edited 7 months ago) (7 children)

I had an idea when these AI image generators first started gaining traction: flood the CSAM market with AI-generated images (good enough that you can't tell them apart). In theory this would put the actual creators of CSAM out of business, thus saving a lot of children from the trauma.

Most people downvote the idea on their gut reaction, though.

Looks like they might do it on their own.

[–] badbytes@lemmy.world 20 points 7 months ago (4 children)

Breaking news: Paint made illegal because some moron painted something stupid.

[–] mightyfoolish@lemmy.world 15 points 6 months ago (3 children)

Does this mean the AI was trained on CP? How else would it know how to do this?

[–] deathbird@mander.xyz 33 points 6 months ago (8 children)

It would not need to be trained on CP. It would just need to know what human bodies can look like and what sex is.

AI services usually try to prevent certain content from being produced, but it seems people are always finding ways to work around those safeguards.
