this post was submitted on 05 Mar 2026
829 points (98.1% liked)

[–] Septimaeus@infosec.pub 7 points 21 hours ago* (last edited 18 hours ago) (1 children)

Edit-pre: To be clear… I use LLMs rarely (personal reasons) and never for certain things like writing and math (professional reasons), but this comment is not an “AI good/bad” take, just a practical question of tool safety/regs.

AI, including LLMs, is forevermore just a set of tools in my mind. And we wouldn’t have OSHA/BMAS/HSE/etc. if idiots didn’t do idiot things with tools.

But there’s evidently a certain type of idiot that’s spared from their idiocy only by a lack of permission. From whom? Depends.

Sometimes they need permission from authority: “god told me to!”

Sometimes they need it from the mob: “I thought I was on a tour!”

And sometimes any fucking body will do: “dare me to do it!”

But all these stories of nutters doing shit AI convinced them to do, from the comical to the deeply tragic, ring the same bonkers bell they always have.

But therein lies the danger unique^1^ to these tools: they mimic a permission-giver better than anything we’ve made. They’re tailor-made for activating this specific category of idiot, and their likely unparalleled ease of use absolutely scales that danger.

As to whether these idiots wouldn’t have just found permission elsewhere, who knows.

My question: is some kind of training prereq warranted for LLM usage, as is common with other potentially dangerous tools? Is that too extreme? Is it too late for that? Am I overthinking it?

^1^ Edit-post: unique danger, not greatest. Rant/

What is the greatest danger, then? IMHO, settling for brittle “guard rails” and then bulldozing ahead instead of laying the groundwork for real machine ethics.

Hoping conscience emerges as a property of the organic training set is utterly facile, both theoretically and empirically. Engineers should know better.

Why is it the greatest? Easy: some of history’s most important decisions were made by a person whose conscience countermanded their orders. Replacing empathic agents with machines eliminates that safeguard.

So “existential threat” and that’s even before considering climate. /Rant

[–] Regrettable_incident@lemmy.world 5 points 21 hours ago (2 children)

The LLM just told me to come round to your house and crap in your begonias. You might want to avoid looking out the window until I'm done.

[–] WhyJiffie@sh.itjust.works 1 points 12 hours ago

that sounds like a regrettable incident

[–] Septimaeus@infosec.pub 4 points 19 hours ago

lol and with that you’re a better friend to the begonias than I