this post was submitted on 27 Apr 2026
1131 points (98.6% liked)

Technology

84199 readers
3325 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] Cherries@lemmy.world 3 points 11 hours ago (2 children)

An intern probably would not go on a mass deletion spree. Also, an intern doesn't eat a billion gpus.

[–] RalfWausE_der_zwote@feddit.org 1 points 7 hours ago

Regarding the first lets just say: I could tell stories...

Regarding the seccond: An AI does neither...

[–] Buddahriffic@lemmy.world 1 points 8 hours ago

It doesn't happen often, but there were horror stories like this before AI was a thing. Not just from interns, one that comes to mind was a guy running two terminals: one for the production db, one for the dev environment. He wanted to delete the dev db to start fresh again but accidentally ran the command in the production terminal.

Can't remember if that was the gitlabs one, but the gitlabs one also had issues where multiple backup options were never tested and none except the longest time period one worked (or maybe one did work but the initial command nuked that either directly or via mechanisms that "backed up" the deletion command).

Not that that makes these any less stupid. LLMs aren't genies that must follow the word of your orders to the letter. They are text prediction engines that use statistics from their training data to determine the most likely token that comes next. Any instructions you give it are just part of the context prior to the tokens it needs to predict. Any other part of the context could be determined to be more important or forgotten entirely. Especially by agents that are intended to work on their own, which might have conflicting instructions to ask before doing something dangerous while trying to do things without human input.

These frameworks like claude code help set up a good context for the LLMs to work in but it's not perfect (and might never be).