this post was submitted on 13 Feb 2026
161 points (98.8% liked)

Technology

81118 readers
4697 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 5 comments
sorted by: hot top controversial new old
[–] SaharaMaleikuhm@feddit.org 8 points 5 hours ago

Live by the slop, die by the slop.

[–] chisel@piefed.social 37 points 14 hours ago* (last edited 14 hours ago) (1 children)

Bad, sure, but also a well-known fundamental issue with basically all agentic LLMs, to the point where sounding the alarm as much as this seems silly. If you give an LLM any access to your machine, it might be tricked into taking malicious actions. The same can be said to real humans. Though they are better than LLMs right now, humans are the #1 security vulnerability in any system.

LLMs can get around this by requiring explicit permission before being able to take certain actions, but the more it asks for permission, the more the user has to just sit there and approve stuff, defeating the purpose of an autonomous agent in the first place.

To be clear, it's not good, and anyone running an agent with a high level of access with no oversight in a non-sandboxed environment today is a fool. But this article is written like they found an 11/10 severity CVE in the linux kernal when really this is a well-known fundamental thing that can happen if you're an idiot and misusing LLMs to a wild degree. LLMs have plenty of legitimate issues to trash on. This isn't one of them. Well, maybe a little.

[–] XLE@piefed.social 25 points 14 hours ago (1 children)

According to the article, the lack of sandboxing is intentional on Anthropic's part. Anthropic also fails to realistically communicate how to use their product.

This is Anthropic's fault.

[–] chisel@piefed.social 6 points 10 hours ago

Oh, for sure, the marketing is terrible and makes this into a bigger issue by making people over confident. I wouldn't say the lack of sandboxing is a major problem on its own, though. If you want an automated agent that does everything, it's going to need permissions to do everything. Though they should absolutely have configurable guardrails that are restrictive by default. I doubt they bothered with that.

The idea is sound, but the tech isn't there yet. The real problem is that the marketing pretends that LLMs are ready for this. Maybe Anthropic shouldn't have released it at all, but at this point AI companies subsist on releasing half-baked products with thrice-baked promises so at this point I wouldn't be surprised if OpenAI, in an attempt to remain relevant, tomorrow releases an automated identity theft bot to help you file your taxes incorrectly.

[–] Deestan@lemmy.world 6 points 13 hours ago

What is the phrase? Kicking in open doors?