this post was submitted on 18 Apr 2026
71 points (80.3% liked)

Technology

all 23 comments
[–] deathbird@mander.xyz 11 points 5 hours ago

"Oh no what if someone believes my hype about building a Torment Nexus and, instead of throwing more money on my money fire, tries setting me on fire instead."

[–] Aatube@lemmy.dbzer0.com 53 points 9 hours ago (5 children)

Have the commenters here read the article? It's arguing that the CEOs themselves have spread the doomer narrative and are now being molotov'd as a result. The subject of the title is/includes Altman, hence the Altman cover photo. This was way, way better than I expected of Gizmodo (bravo Gizmodo), warning us that execs are only toning down their AI dooming for self-protection.

Whatever happens, it feels like the AI executives have painted themselves into a corner. They’ve told everyone their product has the potential to destroy everything. They were the doomers, if we want to call it that, at least when it was convenient. And now we seem to be entering a different era where the same people who told us about the dangers of AI try to get us to look exclusively at what they claim are enormous benefits for society; so far, with little to show.

@gravitas_deficiency@sh.itjust.works @Sundray@lemmus.org

[–] Iconoclast@feddit.uk 3 points 3 hours ago* (last edited 3 hours ago)

Have the commenters here read the article?

You serious? Of course not - but they did see the letters "AI" in the title.

[–] EvergreenGuru@lemmy.world 16 points 9 hours ago

They should've chosen a lane. OpenAI was about free LLMs; then they went LLC and decided that AI could make money. It doesn't make money, though, so now we're watching the idiots realize they've burned all this money investing in AI.

All the experts told us it couldn't do any of the things sci-fi writers love to write stories about. Nothing changed except perception, and by directing perception they managed to use an old technology to temporarily buttress the economy.

[–] chemical_cutthroat@lemmy.world 4 points 9 hours ago

Lol, I'm not sure what's worse: using an LLM to summarize an article for you, or not even reading the article and assuming you know the contents from the title. Fucking people...

[–] XLE@piefed.social 0 points 6 hours ago

It's an understandable conclusion if you only read the title of the article. Surely an AI doomer is someone who thinks it's garbage, right?

But if people familiarize themselves with what professional AI doomers look like, and what AI safety groups look like, it becomes abundantly clear that they are all pro-industry. They will only ever criticize AI in ways that covertly praise capabilities it doesn't actually have.

[–] sundray@lemmus.org 1 points 8 hours ago (1 children)

I did. As well written as it is, I don't think the premise of "the REAL doomers were the CEOs!" is going to spread far enough to dethrone the present, much more popular understanding of what an AI doomer is. It didn't seem worth addressing. We'll see though; perhaps every time someone says "AI doomer" on Lemmy, some wag will reply with, "Um a-kually, I think you'll find the tech CEOs are the real doomers, LOL."

As to the notion that the dangers these techbros have released are now coming home to roost: it's overstated. In my opinion, the techbros will continue not to give the merest shit about the harms they've caused, and one misguided soul with a molly isn't going to change that -- or bring back all the dead people LLMs contributed to killing. Will it increase the CEOs' feelings of paranoia? My dude, the wealthy are already maximally paranoid.

[–] Aatube@lemmy.dbzer0.com 5 points 8 hours ago* (last edited 8 hours ago) (1 children)

Interesting. I don't think the article is saying "the real doomers are the CEOs", though. What you've written in your second paragraph (which is incredibly interesting on its own, even if it doesn't have the impact you've outlined; it's incredibly Greek) is fully compatible with agreeing that AI is doomish. I'll also repeat my point that the article advises more caution than before toward tech's claims of enormous net benefits from AI.

[–] sundray@lemmus.org 1 points 8 hours ago

Completely fair, and I definitely agree with your point.

[–] sundray@lemmus.org 30 points 10 hours ago
[–] gravitas_deficiency@sh.itjust.works 29 points 10 hours ago (1 children)

Fuck you, Gizmodo, and fuck off. There are consequences when you break the societal contract. This is that.

[–] terabyterex@lemmy.world 4 points 7 hours ago* (last edited 7 hours ago) (1 children)

You love Sam Altman so much that you have an emotional response to Gizmodo giving very valid criticism of him? I'm sorry, but Gizmodo is right and Altman is a tool. Please don't worship a man.

[–] atrielienz@lemmy.world 5 points 7 hours ago

I don't think them saying this has much to do with liking Altman. Rather, I think they are raging at Gizmodo (because, well, Gizmodo) and also at the headline of an article they didn't read.

[–] Iconoclast@feddit.uk -1 points 3 hours ago (4 children)

The way I see it:

  • AGI is inevitable given enough time, assuming we don't destroy ourselves some other way first.
  • It has the capacity to solve literally all our problems and make life on Earth as close to utopia as possible.
  • That same capacity, however, also enables it to end the human race - either intentionally or as a byproduct of misalignment.
  • If the "West" doesn't build it first, then China will. There's no second place in this race.
  • Even if all nation-states somehow agreed to stop its development, a rogue underground group would do it - or possibly some random dude in his mom's basement.

I genuinely see no solution to this. I can only hope things turn out well, or at the very least that it doesn't happen during my lifetime. The genie isn't going back into the bottle.

[–] Lydon_Feen@lemmy.world 4 points 1 hour ago

"It has the capacity to solve literally all our problems and make life on Earth as close to utopia as possible."

Sure... if it weren't in the hands of people whose main purpose is to gather more money, resources, and power.

It won't solve all our problems. It will solve theirs.

[–] Simulation6@sopuli.xyz 2 points 1 hour ago (1 children)

AI is not something somebody is going to develop in their mom's basement. AGI is NOT inevitable. The current models may grow sophisticated enough that it's hard to distinguish them from AGI, but they will still be LLMs.
I see the current AI bubble as a bunch of guys digging a hole, realizing they can't get out and deciding the only way out is to keep digging.

[–] Iconoclast@feddit.uk 3 points 43 minutes ago* (last edited 41 minutes ago)

AI is not something somebody is going to develop in their mom's basement. AGI is NOT inevitable.

Plenty of AI systems have already been developed by private individuals on their personal computers. This is not hypothetical. And I'm not claiming that our first AGI will have anything to do with LLMs.

I view AGI as inevitable because it's the natural end goal of incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn't matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are destroying ourselves some other way before we get there, or substrate dependence - meaning general intelligence simply cannot be created without our biological wetware. I see no reason to assume the latter, however, since human brains are made of matter just like computers are, and I don't think there's anything supernatural about intelligence.

[–] IratePirate@feddit.org 3 points 2 hours ago* (last edited 2 hours ago) (1 children)

Good work, citizen! The tech bros need you to believe that their dumb digital parrots will eventually, magically metamorphose into AGI. It's the only thing that keeps that sweet VC money flowing and the AI bubble from popping.

[–] Iconoclast@feddit.uk 1 points 2 hours ago* (last edited 2 hours ago)

I'm just going to ignore your completely uncalled-for smug and dismissive tone and note that at no point have I suggested LLMs will lead to AGI.

Thank you for your contribution to making this platform a worse place for everyone.

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 1 hour ago (1 children)

AGI is inevitable given enough time

Is it inevitable within 500 years, though?

[–] Iconoclast@feddit.uk 3 points 1 hour ago

Nobody could possibly know. That's why I make no claims about the timeline.

[–] pelespirit@sh.itjust.works 4 points 8 hours ago

But it’s hard to take that argument seriously after everything guys like Altman have been saying. It didn’t even start as late as 2022, either. Back in 2015, Altman said, “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.”