this post was submitted on 08 Jun 2024
361 points (97.9% liked)

[–] tal@lemmy.today 37 points 5 months ago* (last edited 5 months ago) (13 children)

The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

I don't see how you could realistically provide that guarantee.

I mean, you could make some kind of best-effort attempt to make that more difficult, maybe.

If we knew how to make AI -- and this goes beyond just LLMs -- reliably avoid doing hazardous things, we'd have solved the Friendly AI problem. Like, that's a good goal to work towards, maybe. But the point is, we're not there.

Like, I'd be willing to see the state fund research on that problem, maybe. But I don't see how simply mandating that models conform to that requirement is going to be implementable.

[–] joewilliams007@kbin.melroy.org 1 points 5 months ago

You can guarantee it by training the model only on data that excludes weapons information. The data they use now is just scraped from every single piece of the internet anyway.
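
A minimal sketch of the kind of training-data filter this suggests, assuming a simple keyword blocklist (the function names and terms are hypothetical, and real corpus-curation pipelines are far more involved -- as the parent comment argues, a filter like this is best-effort, not a guarantee):

```python
# Hypothetical keyword-based corpus filter. The blocklist and names
# are illustrative only; this sketch does not actually guarantee a
# model trained on the output lacks hazardous capabilities.

BLOCKED_TERMS = {
    "nerve agent synthesis",
    "enrichment cascade",
    "zero-day exploit",
}

def is_allowed(document: str) -> bool:
    """Return False if the document mentions any blocked term."""
    text = document.lower()
    return not any(term in text for term in BLOCKED_TERMS)

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only documents that pass the keyword check."""
    return [doc for doc in documents if is_allowed(doc)]

if __name__ == "__main__":
    corpus = [
        "A history of the printing press.",
        "Steps in nerve agent synthesis...",  # dropped by the filter
    ]
    print(filter_corpus(corpus))
    # ['A history of the printing press.']
```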
