this post was submitted on 27 Apr 2026

Technology
[–] EncryptKeeper@lemmy.world 1 points 14 hours ago* (last edited 14 hours ago) (1 children)

Giving the equivalent of a junior dev with a learning disability the keys to the whole place is just dumb.

Correct. You too have now identified the AI problem. This was the job of a human senior infrastructure engineer that they delegated to an AI agent. They’ve found out why it’s not an AI’s job.

[–] luciferofastora@feddit.org 1 points 11 hours ago (1 children)

I can't read the original Twitter link, but I'm not sure they handed it the job of a senior infrastructure engineer. The article says "routine", which to me is something you can hand off to a junior just fine. When it hit a snag, it obviously should have stopped and asked what to do, but even then, a human might want to avoid admitting ignorance and try to fix it themselves instead. Either way, it shouldn't have had the privileges to fuck up that badly.

So while it's on the AI for taking destructive steps, I do think there's a human error in the form of grossly irresponsible rights allotment. If this were a first-of-its-kind incident showing an otherwise stellar AI fucking up badly, I'd classify it as a pure AI problem, but these limits are hardly novel at this point. There have been previous incidents circulating in the media. We've had memes about it. If you can't stay up to date on your tools and their shortcomings, you shouldn't be using them, because discovering a footgun becomes a question of "when", not "if".

That's why I consider this partially a human failing: If you're gonna use a tool, make sure that it operates within safe limits. The chainsaw doesn't know the difference between tree and bone, so it's on you to make sure it stays away from anyone's legs. So while "Chainsaw can saw legs if wielded improperly" is a problem that was accepted as a tradeoff for its utility, you can't really blame the chainsaw if you zip-tied the safety.
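To make the "safe limits" point concrete: one common pattern is to never hand an agent an unrestricted shell, and instead gate every command it wants to run through an allowlist. This is a minimal, hypothetical sketch (the command set and helper name are invented for illustration, not taken from the incident in the article):

```python
import shlex
import subprocess

# Hypothetical allowlist: only explicitly approved, read-only commands
# may run on the agent's behalf; everything else is refused outright.
ALLOWED_COMMANDS = {"ls", "cat", "df", "uptime", "echo"}

def run_for_agent(command_line: str) -> str:
    """Run a shell command for an agent, but only if its binary is allowlisted."""
    args = shlex.split(command_line)
    if not args or args[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"refusing to run: {command_line!r}")
    # No shell=True: args are passed directly, so the agent can't chain
    # an allowed command into a destructive one with `;` or `&&`.
    result = subprocess.run(args, capture_output=True, text=True, timeout=30)
    return result.stdout

print(run_for_agent("echo hello"))   # permitted: prints "hello"
# run_for_agent("rm -rf /data")      # raises PermissionError
```

The point isn't this particular wrapper; it's that the blast radius is decided before the agent ever runs, which is exactly the zip-tied safety in the chainsaw analogy.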

(Again, not to say Anthropic is blameless for letting its random generator generate randomly destructive shit. I just don't think that's the only point of failure here.)

[–] EncryptKeeper@lemmy.world 1 points 3 hours ago* (last edited 3 hours ago)

That's why I consider this partially a human failing: If you're gonna use a tool, make sure that it operates within safe limits.

Yes and in this case using it for this job at all was clearly not within safe limits. You keep hammering on “It’s not the AI’s fault it was given a job with too big of a blast zone for it to safely do” after I’ve said “This type of job has too big a blast zone for an AI to safely do” and somehow you’ve convinced yourself that these are two different things.