this post was submitted on 28 Feb 2026
1537 points (99.5% liked)

Technology

[–] theuniqueone@lemmy.dbzer0.com 64 points 15 hours ago (1 children)

Anthropic still is scum for being completely fine helping America oppress the rest of the world.

[–] XLE@piefed.social 10 points 11 hours ago* (last edited 11 hours ago) (2 children)

Anthropic is scum, accepting money from foreign dictators, forcing their software on minorities while insisting it was conscious and had emotions just like them, praising the Trump administration, making up scary stories to get more funding...

...In many ways, they're worse than OpenAI. They're just running with the same playbook that Sam Altman used to use to pretend he was a good guy.

[–] Vlyn@lemmy.zip 2 points 9 hours ago (1 children)

I mean, they praised the Trump administration for benefiting their business, which is... fair? I guess?

If you ask Claude Sonnet 4.6 about Trump, it leans quite negative, as it should.

[–] XLE@piefed.social 1 point 8 hours ago

I missed when sucking up to the Trump administration and echoing Cold War-style nationalism became "fair". If that's the case, OpenAI's behavior is fair too.

Fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems.

Our strong preference is to continue to serve the Department and our warfighters

Dario "Warfighter" Amodei

[–] Hackworth@piefed.ca 2 points 11 hours ago (1 children)

They insisted Claude was human?

[–] XLE@piefed.social 6 points 11 hours ago (1 children)

Sorry, not quite, but close. From 404 Media:

When users confronted Clinton with their concerns, he brushed them off, said he would not submit to mob rule, and explained that AIs have emotions and that tech firms were working to create a new form of sentience, according to Discord logs and conversations with members of the group.

[–] Hackworth@piefed.ca 3 points 11 hours ago (1 children)

Oh, that guy! To be fair, that's one employee, not Anthropic's actions or position. You mentioned forcing their software on minorities while insisting it was better than it was, and I was getting OLPC flashbacks. But Anthropic looking for funding in the UAE and Qatar is shitty. I can't seem to find anything about whether or not they went through with those contracts.

[–] XLE@piefed.social 6 points 11 hours ago* (last edited 11 hours ago) (1 children)

Jason Clinton is Anthropic’s Deputy Chief Information Security Officer. That means Jason knew better, and he was using his position as a moderator (and supposedly a security expert) to try gaslighting a vulnerable minority into believing his favorite toy was "secure" when it was not.

[–] Hackworth@piefed.ca 3 points 10 hours ago (1 children)

I mean, I'm not gonna defend him. But fucking up a discord that you're a mod of isn't really in the same ballpark as taking money from dictators or directing fully autonomous strikes. Also, from the read, it really sounds like that Deputy CISO was a prime example of cyber-psychosis, or AI mania, or whatever we've decided to call it. And I assume he is part of the same vulnerable minority?

[–] XLE@piefed.social 2 points 10 hours ago* (last edited 10 hours ago) (1 children)

Every example we have of Anthropic's behavior paints a picture of an immoral company that pretends to be moral. It's bad enough that they continue doing harm, but then they dress it up with phrases like "AI Safety" and "Information Security". (And every press release they put out describing how scarily good their system is tends to be followed by a sudden cash infusion from an openly morally bankrupt company like Google or Amazon.)

I reserve zero empathy for the people on the abuser side of an abusive dynamic. Maybe Elon Musk is autistic too. I don't really care. Only Moloch knows their hearts. I'll judge them for their actions.

[–] Hackworth@piefed.ca 3 points 10 hours ago (1 children)

I did find an update on that funding, btw. Anthropic already took money from Qatar (the QIA), but the amount isn't known - likely around $100M. The UAE deal has yet to happen, but if it does, it would be "hundreds of millions".

[–] XLE@piefed.social 2 points 9 hours ago

Interesting. I appreciate you doing the digging to check. It's frustrating that people spent so much time looking at the fact that Anthropic had one uncrossed red line that they didn't look at all the red lines that had already been crossed - in the very article about those supposed red lines. Such is PR, I guess.

I suppose you saw that "He Will Not Divide Us 2.0" letter from OpenAI and Google employees who promised to stand behind Anthropic. Never mind the fact that OpenAI split... Doesn't anybody know Google already does mass surveillance of Americans?

...I ramble.