
[–] Rhaedas@fedia.io 5 points 2 weeks ago (3 children)

I know there are some who roll their eyes at any mention of AI safety, saying that what we have isn't AGI and won't become it. That's true, but it doesn't rule out something like it in the future. And between this and China's lax approach of trying everything to be first, if we do get to that point, we'll find out the hard way who was right.

The laughable part is that the safeguards put up by Biden's admin were vague and lacked any real substance anyway. But that doesn't matter now.

[–] singletona@lemmy.world 18 points 2 weeks ago (1 children)

I'm more annoyed that this means federal employees are now going to use ChatGPT for everything.

[–] Feyd@programming.dev 12 points 2 weeks ago

Bet it'll be Grok.

[–] taladar@sh.itjust.works 5 points 2 weeks ago (1 children)

Keep worrying about entirely hypothetical scenarios of an AGI fucking over humanity; it will keep you busy while humanity fucks itself over ten times in the meantime.

[–] Rhaedas@fedia.io 3 points 2 weeks ago

You're correct, it's more likely that humans will use a lesser version (e.g. an LLM) to screw things up, assuming it's doing what it claims to do when it isn't. That's why I say AI safety applies to all of this, not just a hypothetical AGI. But again, it doesn't seem to matter; we're just going to go full throttle and get what we get.

[–] KeenFlame@feddit.nu 1 points 1 week ago

Or, hear me out: stop the guy. Go to city hall, or whatever you call it, and say stop. Fuck this idiot who's doing Hitler 2, stop him now. Get together with friends who also hate Nazis and stop him. Wtf, why not do that???