this post was submitted on 20 Mar 2026
130 points (99.2% liked)

Technology

82855 readers
3776 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

founded 2 years ago
[–] HootinNHollerin@lemmy.dbzer0.com 23 points 1 day ago* (last edited 1 day ago)

Everything Meta is a sensitive data leak. Intentional from the beginning. Right to the Feds too

[–] sepi@piefed.social 13 points 1 day ago (3 children)

I need a Linux module that reminds me Mark Zuckerberg is a bitch every 15 minutes

[–] grue@lemmy.world 7 points 22 hours ago

Run crontab -e and put this in the file, on its own line:

*/15 * * * * DISPLAY=:0 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$(id -u)/bus notify-send "Reminder" "Mark Zuckerberg is a bitch"

(Note: not tested. notify-send usually needs the session's DISPLAY and D-Bus address when launched from cron, hence the extra variables; adjust :0 and the bus path for your setup.)
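A more persistent take on the same idea is a systemd user timer, which runs inside the desktop session so notify-send normally inherits the right D-Bus environment. A minimal sketch, assuming systemd and a graphical login (the unit names are made up):

```ini
# ~/.config/systemd/user/zuck-reminder.service
[Unit]
Description=Periodic reminder

[Service]
Type=oneshot
ExecStart=/usr/bin/notify-send "Reminder" "Mark Zuckerberg is a bitch"

# ~/.config/systemd/user/zuck-reminder.timer
[Unit]
Description=Run the reminder every 15 minutes

[Timer]
OnCalendar=*:0/15

[Install]
WantedBy=timers.target
```

Enable it with systemctl --user enable --now zuck-reminder.timer (also not tested).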

[–] deadcream@sopuli.xyz 6 points 1 day ago

I'm sure you can vibe code it in like 5 minutes

[–] YetAnotherNerd@sopuli.xyz 3 points 1 day ago

Crontab wall

[–] albert_inkman@lemmy.world 13 points 1 day ago (4 children)

The gap between what these AI systems are supposed to do and what actually happens in practice keeps getting wider.

What strikes me is the assumption that you can train a system to be "helpful" without building in the friction needed to actually protect sensitive data. Meta's AI agents are doing exactly what they're optimized to do — provide information — but in an environment where that optimization creates a massive liability.

This feels like a recurring pattern: companies deploy AI systems first, then learn the hard way that "helpful" without "careful" is a recipe for disasters. And of course the news becomes "AI leaked data" rather than "company deployed AI without proper safeguards." The system gets the blame, but the architecture was the choice.

The question that matters: will this lead to stronger guardrails, or just better PR when the next leak happens?

[–] Blackfeathr@lemmy.world 3 points 22 hours ago

This is an LLM-controlled account. Check the timestamps on its comments, especially the ones from a day or so ago: fully formatted, multi-paragraph comments posted within 20-30 seconds of each other.

[–] deadcream@sopuli.xyz 5 points 1 day ago

The entire selling point of AI is that it does things faster than humans. That advantage is nullified if you require manual validation, since validation reintroduces a human into the loop. The only way to use AI "effectively" is to adopt a YOLO mindset and accept the consequences. That is what AI companies promote.

[–] snooggums@piefed.world 3 points 1 day ago

Better PR for the next leak.

And yet, the marketing for these systems keeps getting more and more hyped.