this post was submitted on 27 Feb 2025
686 points (98.2% liked)

Technology

69865 readers
3053 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] MoonlightFox@lemmy.world 16 points 2 months ago (18 children)

It can generate combinations of things it was not trained on, so there is not necessarily a victim. Of course, there might be something like that in the training data; I won't deny that.

However, the act of generating something does not create a new victim unless it uses someone's likeness and is shared, right? Or is there an ethical issue here that I am missing?

(Yes, all current AI is basically collective piracy of everyone's IP, but that aside.)

[–] surewhynotlem@lemmy.world 2 points 2 months ago (16 children)

Watching videos of rape doesn't create a new victim. But we consider it additional abuse of an existing victim.

So take that video and modify it a bit. Color correct or something. That's still abuse, right?

So the question is, at what point in modifying the video does it become not abuse? When you can't recognize the person? But I think simply blurring the face wouldn't suffice. So when?

That's the gray area. AI is trained on images of abuse (we know it's in there somewhere). So at what point can we say the modified images are okay because the abused person has been removed enough from the data?

I can't make that call. And because I can't make that call, I can't support the concept.

[–] KairuByte@lemmy.dbzer0.com 9 points 2 months ago (3 children)

I mean, there’s another side to this.

Assume you have exacting control of the training data. You give it consensual sexual play, including rough play, BDSM, and CNC (consensual non-consent) play. In this hypothetical, we are 100% certain the content is consensual.

Is the output a grey area, even if it seems like real rape?

Now another hypothetical. A person closes their eyes and imagines raping someone. “Real” rape. Is that a grey area?

Let’s build on that. Say this person is a talented artist, and they draw out their imagined rape scene, which we are 100% certain depicts a non-consensual act imagined by the artist. Is this a grey area?

We can build on that further. What if they take the time to animate this scene? Is that a grey area?

When does the above cross into a problem? Is it the AI producing something that seems like rape, even though it was built on consensual content? A person imagining a real rape? Putting that thought onto a still image? Animating it?

Or is it none of them?

[–] Clent@lemmy.dbzer0.com 1 points 2 months ago

We already allow simulated rape in TV and movies. AI simply allows a more graphic portrayal.
