this post was submitted on 21 May 2024
510 points (95.4% liked)
Technology
you are viewing a single comment's thread
Article title is a bit misleading. Just glancing through, I see he texted at least one minor about this and distributed those generated pics in a few places. Putting it all together, yeah, the arrest is kind of a no-brainer. The ethics of generating CSAM are pretty much the same as drawing it. Not much we can do about it aside from education.
Lemmy really needs to stop justifying CP. We can absolutely do more than "eDuCaTiOn". AI is created by humans, the training data is gathered by humans, it needs regulation like any other industry.
It's absolutely insane to me how laissez-faire some people are about AI, it's like a cult.
While I agree with your attitude, the whole 'laissez-faire' thing is probably a misunderstanding:
There is nothing we can do to stop the AI.
Nothing.
The genie is out of the bottle, Pandora's box has been opened, everything is out and it won't ever return. The world will never be the same, and it's irrelevant what people think.
That's why we need to better understand the post-AI world we created, and figure out what to do now.
Also, to hell with CP. (feels weird to use the word 'fuck' here)
That's not the question. The question isn't "can we stop AI entirely", it's about regulating its development, and yes, we can make efforts to do that.
This attitude of "it's inevitable, can't do anything about it" is eerily similar logic to what is used in climate denial and other right-wing efforts. It's a really poor attitude to have, especially about something as consequential as AI.
We have the best opportunity right now to create rules about its uses and development. The answer is not "do nothing" as if it's some force of nature, as opposed to a tool created by humans.
I hear you, and I don't necessarily disagree with you, I just know that's not how anything works.
Regulations work for big companies, but there isn't a big company behind this specific case. And those small-time users have run away and you can't stop them.
It's like trying to regulate cameras to not store specific images. Like, I get the sentiment, but sorry, no. It's not that I would not like that, it's just not possible.
This argument could be applied to anything, though. A lot of people get away with murder; we should still try to do what we can to stop it from happening.
You can't sit in every car and force people to wear a seatbelt, we still have seatbelt laws and regulations for manufacturers.
Physical things are much easier to regulate than software, let alone software that needs no server at all.
We already regulate certain images, and it matters very little.
The bigger payoff will be from educating the public and accepting that we can't win every war.
So accept defeat from the start? That's really just a non-starter. AI models run on hardware, they are developed by specific people, their contents are distributed by specific individuals, and codebases are hosted on hardware and on specific outlets.
It really does sound like you're just trying to make excuses to avoid regulation, not that you genuinely have a good reason to think it's not possible to try.
Dude the amount of open source, untrackable, distributed ai models is off the charts. This isn't just about the models offered by subscription from the big players.
This is still one of the weaker arguments. There is a lot of malware out there too, people are still prosecuted when they're caught developing and distributing it, we don't just throw up our hands and pretend there's nothing that can be done.
Like, yeah, some pedophile who also happens to be tech savvy might build his own AI model to make CP, but that's not some self-evident argument against attempting to stop them.
No, like, the tools to do these things are common and readily available. It's not malware, it's generalized AI tooling, completely entangled with non-image AI work.
Pandora's box is wide open. All of this work can be done trivially, completely offline with a basic PC. Anyone motivated can be offline and up and running in a weekend
You're asking to outlaw something like a spreadsheet.
You download a general-purpose image AI model, then train and prompt it completely offline.
The models used are not trained on CSAM. The model weights are distributed freely, and anybody can train a LoRA on their own computer. It's already too late to ban open-weight models.