this post was submitted on 08 Jun 2024
361 points (97.9% liked)
Technology
I don't see how you could realistically provide that guarantee.
I mean, you could create some kind of best-effort thing to make it more difficult, maybe.
If we knew how to make AI -- and this goes beyond just LLMs -- avoid doing hazardous things, we'd have solved the Friendly AI problem. Like, that's a good goal to work towards, maybe. But the point is, we're not there.
Like, I'd be willing to see the state fund research on that problem, maybe. But I don't see how simply mandating that models conform to it is implementable.
That's on the companies to figure out, tbh. "You can't say we aren't allowed to build biological weapons, that's too hard" isn't exactly what you're saying, but it's the hyperbolic version of it. The industry needs to figure out how to control the monster they've happily sent staggering towards the village, and really they're the only people with the knowledge to figure out how to stop it. If it's not possible, maybe we should restrict this tech until it is. LLMs probably aren't going to end the world, but a protein-sequencing AI that hallucinates while replicating a flu virus could be real bad for us as a species, to say nothing of the pearl-clutching scenario of bad actors getting ahold of it.
There are many tools that could be used to create a biological weapon or something. You could use a pocket calculator for that. But we don't restrict the sale of pocket calculators by requiring proof that nothing hazardous can be done with them. That is, this is a bar substantially higher than exists for any other tool.
Second, while I certainly think there are legitimate existential risks, we are not looking at a near-term one. OpenAI or whoever isn't going to produce something human-level any time soon. Like, Stable Diffusion, a tool used to generate images, would fall under this. It's very questionable, however, that it would be terribly useful for doing anything dangerous.
California putting a restriction like that in place, absent some kind of global restriction, won't stop development of models. It just ensures that it'll happen outside California. Like, it'll have a negative economic impact on California, maybe, but it's not going to have a globally-restrictive impact.
My concern is how short a hop it is from this to "won't someone please think of the children?" And then someone uses Stable Diffusion to create a baby in a sexy pose and it all goes down in flames. IMO that sort of thing happens often enough that pushing back against "gateway" legislation is reasonable.
I'd be concerned about its impact on the deployment of models too. Companies are not going to want to write software that they can't sell in California, or that might get them sued if someone takes it into California despite it not being sold there. Silicon Valley is in California; this isn't like Montana banning it.