this post was submitted on 28 Feb 2026
2048 points (99.2% liked)

Technology

83069 readers
3415 users here now

[–] panda_abyss@lemmy.ca 236 points 3 weeks ago (3 children)

They went from standing with Anthropic to throwing them under the bus real fast

[–] floofloof@lemmy.ca 223 points 3 weeks ago* (last edited 3 weeks ago) (1 children)
[–] timestatic@feddit.org 29 points 3 weeks ago (2 children)

They've probably been working on a potential agreement with OpenAI for a while now and just hastily finished it in response to Anthropic. But I don't know if they will keep the red lines Anthropic demanded in place.

[–] Furbag@lemmy.world 20 points 3 weeks ago

They won't.

[–] tacosanonymous@mander.xyz 78 points 3 weeks ago (1 children)
[–] porous_grey_matter@lemmy.ml 58 points 3 weeks ago (1 children)

Which they badly need, they are in an incredibly risky position right now. It's very disappointing, this deal might save them from collapse for quite a while.

[–] WhatAmLemmy@lemmy.world 41 points 3 weeks ago (2 children)

The only disappointment is that Altman's head is still attached to his shoulders.

[–] porous_grey_matter@lemmy.ml 17 points 3 weeks ago

No, no, even if we get that wish, I don't want the US state propping up AI any longer

[–] defaultusername@lemmy.dbzer0.com 16 points 3 weeks ago (3 children)

Altman is a symptom, not the problem. The problem is capitalism.

[–] perishthethought@piefed.social 181 points 3 weeks ago (3 children)

mainstream

I'll believe that when my sisters start saying this. Till then, it's just us privacy fans screaming in a dark cave, enjoying the echo.

[–] Xorg_Broke_Again@sh.itjust.works 100 points 3 weeks ago (2 children)

It's always like this. We get a ton of articles on how everyone is suddenly boycotting/deleting [insert thing] but when you ask someone in real life, they usually have no idea what you're talking about.

[–] Quill7513@slrpnk.net 30 points 3 weeks ago (3 children)

so explain it to them gently. you won't reach everyone, but you'll reach more people than accepting this status quo

[–] raskal@sh.itjust.works 106 points 3 weeks ago (5 children)

Canada recently had its second-worst school shooting ever. The killer had many interactions with ChatGPT that warranted banning her account. A whistleblower has claimed that they wanted to inform Canada's police of these comments but were denied by OpenAI's management.

They had a chance to stop the deaths of 8 people, most of whom were young children, but failed to do anything.

FUCK CHATGPT AND THOSE BASTARDS THAT RUN IT

[–] Zedstrian@sopuli.xyz 105 points 3 weeks ago (9 children)

Windows Central shouldn't be parroting the U.S. government in mislabeling the Department of Defense.

[–] ThePantser@sh.itjust.works 87 points 3 weeks ago (1 children)

I mean, it's at least accurate now; there is no defense when you're starting a war with everyone

[–] lmdnw@lemmy.world 17 points 3 weeks ago

Especially since the Trump admin already made it clear that they don’t respect preferred pronouns. Why should we use the DoD’s preferred pronouns of Department of War instead of the Department of Defense name it legally has? DoW is just DoD’s preferred pronoun.

[–] lmdnw@lemmy.world 93 points 3 weeks ago (2 children)

Sam Altman is objectively a bad human being.

[–] ChaoticEntropy@feddit.uk 42 points 3 weeks ago (3 children)

Sam Altman is just some fail-upward money guy; he's eventually been removed from basically every prior position he's held.

[–] PolarKraken@lemmy.dbzer0.com 23 points 3 weeks ago

Seems like his career has largely been lying and making impossible promises, so. The folks who do that well always manage to exit the stage before the magic tincture is revealed to just be piss 🤷‍♂️

[–] cloudskater@piefed.blahaj.zone 81 points 3 weeks ago

I cannot believe this is what it took for a boycott to go more mainstream. Tell me more about how so many people have no respect for the environment or the artists whose work they gleefully consume.

[–] theuniqueone@lemmy.dbzer0.com 71 points 3 weeks ago (1 children)

Anthropic is still scum for being completely fine with helping America oppress the rest of the world.

[–] XLE@piefed.social 17 points 3 weeks ago* (last edited 3 weeks ago) (24 children)

Anthropic is scum, accepting money from foreign dictators, forcing their software on minorities while insisting it was conscious and had emotions just like them, praising the Trump administration, making up scary stories to get more funding...

...In many ways, they're worse than OpenAI. They're just running with the same playbook that Sam Altman used to use to pretend he was a good guy.

[–] humanspiral@lemmy.ca 55 points 3 weeks ago (1 children)

Use for "all lawful means" is quite the grey area, considering no one was arrested or fired, nor any law updated, over what Snowden leaked. If the NSA does it, no one will arrest the NSA.

[–] David_Eight@lemmy.world 52 points 3 weeks ago (11 children)

The Department of War isn't a real thing. It's called the Department of Defense. That's not my opinion either; it's officially/legally called the Department of Defense.

[–] Miaou@jlai.lu 17 points 3 weeks ago

Department of War is more apt, however

[–] FalschgeldFurkan@lemmy.world 52 points 3 weeks ago (1 children)

"You're absolutely right! That was a children's hospital, not a military base. Let's try that again!"

[–] InternetPerson@lemmings.world 50 points 3 weeks ago (21 children)

You should also stop using Google products for similar reasons.

[–] pelespirit@sh.itjust.works 36 points 3 weeks ago (4 children)

After Anthropic refused flat out to agree to apply Claude AI to autonomous weapons and mass surveillance of American citizens, OpenAI jumps right into bed with the United States Department of War.

I think people are slightly missing the important bit. This government wants to send out autonomous weapons along with mass surveillance. They'll just murder anyone they want, that is, if the AI even gets it right in the first place.

Here we are in The Running Man and no one sees it coming. This is why Stephen King is so against this administration. He predicted it.

[–] floofloof@lemmy.ca 34 points 3 weeks ago (6 children)

I can't believe people were paying for it in the first place.

[–] turdburglar@piefed.social 25 points 3 weeks ago (1 children)

nice headline, but wtf is windows central?

[–] zikzak025@lemmy.world 34 points 3 weeks ago (1 children)

A Microsoft-oriented news outlet.

Think similar to MacRumors/9to5Mac/AppleInsider for Apple.

[–] supersquirrel@sopuli.xyz 19 points 3 weeks ago (1 children)

There are so many levels to hell I haven't even heard of.

[–] pineapplelover@lemmy.dbzer0.com 24 points 3 weeks ago

Dude the only guardrails are

  1. No fully automated killings

  2. No mass surveillance

You could literally do anything else; you could automate killing people as long as a person approves.

Trump booted Anthropic because they wouldn't lift these two guardrails. Fuck me

[–] boogiebored@lemmy.world 19 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

So many companies are cozying up to the fascist regime; this is late-stage capitalism.

A list of some of these companies: https://x.com/vxunderground/status/2024200204296061089?s=20

[–] Burghler@sh.itjust.works 20 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

The list is hosted on a fascism-aligned owner's misinformation site

Ok

[–] ScoffingLizard@lemmy.dbzer0.com 18 points 3 weeks ago

I am canceling my subscription now. Fuckers.

[–] I_Has_A_Hat@lemmy.world 18 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

Yea, I can just imagine OpenAI is really struggling with their business decision.

On the one hand, they have multi-billion dollar contracts with the US Military that will make them all fabulously wealthy beyond their wildest dreams.

On the other, they have a handful of individuals leaving that might amount to a few thousand dollars of lost revenue.

Gosh, it must sure have been a tough choice.

[–] SpiceDealer@lemmy.dbzer0.com 18 points 3 weeks ago (1 children)

I'd argue that an armed uprising would have a greater effect than a smaller internet-based boycott but I'm just some random guy on some niche internet forum so... who's to say?

[–] JigglypuffSeenFromAbove@lemmy.world 18 points 3 weeks ago (2 children)

From OpenAI's statement:

We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:

• No use of OpenAI technology for mass domestic surveillance.

• No use of OpenAI technology to direct autonomous weapons systems.

• No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).

It specifically states their AI can't/won't be used for surveillance and autonomous weapons. Of course I'm not saying I trust them, but isn't this the same thing Anthropic says they're against? What's the difference here or what did I miss?

[–] muusemuuse@sh.itjust.works 20 points 3 weeks ago (1 children)

Anthropic put in clauses that were legally enforceable against future administrations. OpenAI says "yea, we totally trust you, bro."

[–] pnelego@lemmy.world 17 points 3 weeks ago (1 children)

I’m wondering if this is a play for a future bailout. OpenAI knows they are fucked, and instead of just going away like most companies do when they fail, they are embedding themselves in the government to secure a bailout under the guise of being a critical defence vendor.

Furthermore, I’m not convinced the researchers and critical personnel will work for a company that does this. I think we’re about to see the biggest jumping of a ship so far in the industry.

[–] trackball_fetish@lemmy.wtf 16 points 3 weeks ago (11 children)

Anyone stockpiling ai prompt vulnerabilities for when we'll eventually need them to fight off some deathbots?

[–] glitchdx@lemmy.world 16 points 3 weeks ago

Glad that I've switched platforms. Sam Altman should probably be in prison or something.

I've been using Venice lately; they claim (I have done zero research to determine if this is true) that they're privacy-focused. They do run uncensored models, which is a big plus.

That said, I find myself using the lying machine less these days. It was like a fun video game when I first got my hands on it, entertaining for a while, and I'm moving on. Maybe I'm not imaginative enough to use it to the fullest potential, but I'm having more fulfillment actually writing and actually drawing (even though I am very bad at both).
