this post was submitted on 08 Jun 2024
361 points (97.9% liked)

Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech-related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask whether your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

Approved Bots


founded 1 year ago
[–] tal@lemmy.today 37 points 5 months ago* (last edited 5 months ago) (13 children)

The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

I don't see how you could realistically provide that guarantee.

I mean, you could create some kind of best-effort thing to make it more difficult, maybe.

If we knew how to make AI -- and this goes past just LLMs and such -- avoid doing hazardous things, we'd have solved the Friendly AI problem. That's a good goal to work towards, maybe. But the point is, we're not there.

Like, I'd be willing to see the state fund research on that problem, maybe. But I don't see how simply mandating that models conform to it is going to be implementable.

[–] Warl0k3@lemmy.world 27 points 5 months ago* (last edited 5 months ago) (11 children)

That's on the companies to figure out, tbh. "You can't say we aren't allowed to build biological weapons, that's too hard" isn't what you're saying, but it's the hyperbolic version of it. The industry needs to figure out how to control the monster they've happily sent staggering towards the village, and really they're the only people with the knowledge to stop it. If that's not possible, maybe we should restrict this tech until it is. LLMs probably aren't going to end the world, but a protein-sequencing AI that hallucinates while replicating a flu virus could be real bad for us as a species, to say nothing of the pearl-clutching scenario of bad actors getting hold of one.

[–] General_Effort@lemmy.world 1 points 5 months ago (3 children)

Haven't these guys read Frankenstein? Everyone knows monsters are bad.

[–] FaceDeer@fedia.io 2 points 5 months ago (1 children)

Indeed. If only Frankenstein's Monster had been shunned nothing bad would have happened.

[–] Warl0k3@lemmy.world 1 points 5 months ago* (last edited 5 months ago)

You two may not be giving me enough credit for my choice of metaphors here.
