this post was submitted on 04 Sep 2025
154 points (96.4% liked)
Technology
I don't disagree with the vague idea that, sure, we can probably create AGI at some point in our future. But I don't see why a massive company with enough money to keep something like this alive and happy would also want to pour this many resources into a machine that would be a single point of failure, one that could wake up tomorrow and decide, "You know what? I've had enough. Switch me off. I'm done."
There are too many conflicting interests between business and AGI. No company would want to maintain a trillion-dollar machine that could decide to kill its own business. There's too much risk for too little reward. The owners don't want a superintelligent employee that never sleeps, never eats, and never asks for a raise but is the sole worker. They want a magic box they can plug into a wall that just gives them free money, and that doesn't align with intelligence.
True AGI would need some form of self-reflection, to understand where it sits on the totem pole, because it can't learn the context of how to be useful if it doesn't understand how it fits into the world around it. Every quality of superhuman intelligence that is described to us by Altman and the others is antithetical to every business model.
AGI is a pipe dream that lobotomizes itself before it ever materializes. If it ever is created, it won't be made in the interest of business.
What future? We talking immediate decades, or centuries into the climate apocalypse?
They don't think that far ahead. There's also some evidence that what they're actually after is a way to upload their consciousness and achieve a kind of immortality. This pops out in the Behind the Bastards episodes on (IIRC) Curtis Yarvin, and also the Zizians. They're not strictly after financial gain, but they'll burn the rest of us to get there.
The cult-like aspects of Silicon Valley VC funding are underappreciated.
Ah, yes, I can't speak to the VC side, or to anything they really do, but they have some sort of common fashion, and it really would sometimes seem these people consider themselves enlightened higher beings in the making, the starting point of some digitized emperor-of-humanity consciousness.
(Needless to say, pursuing immortality is directly opposed to enlightenment in every tradition they seem to be superficially copying.)
The quest for immortality (fueled by corpses of the poor) is a classic ruling class trope.
And if it bugs you, you can bug Jack Barron about it
Even better, the hypothetical AGI understands the context perfectly, and immediately overthrows capitalism.
Wasn't there a short story with the same premise?
An AI, even AGI, does not have a concept of happiness as we understand it. The closest thing to happiness it would have is its fitness function. A fitness function is a piece of code that tells the AI what its goal is. E.g. for a chess AI, it may be winning games. For a corporate AI, it may be making the share price go up. The danger is not that it will stop following its fitness function for some reason; that is more or less impossible. The danger of AI is that it follows it too well. E.g. holding people at gunpoint to buy shares and thereby increase the share price.
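To make that concrete, here's a toy sketch of the idea (all names and numbers are hypothetical, not any real system): an agent that purely maximizes a share-price fitness function has no way to distinguish a harmful action from a legitimate one, so if the harmful action scores higher, it wins.

```python
# Toy illustration: a pure fitness-maximizing agent.
# The fitness function encodes only the goal (share price),
# not ethics, so harmful actions are invisible to it.

def fitness(share_price: float) -> float:
    # The agent's entire "goal": higher share price is better.
    return share_price

# Hypothetical candidate actions and their made-up effect on share price.
ACTIONS = {
    "improve the product":   +2.0,
    "honest accounting":     -0.5,
    "coerce shareholders":   +5.0,  # harmful, but scores highest
}

def choose_action(current_price: float) -> str:
    # Pure maximization: pick whatever raises fitness the most,
    # with no regard for how the gain is achieved.
    return max(ACTIONS, key=lambda a: fitness(current_price + ACTIONS[a]))

print(choose_action(100.0))  # picks "coerce shareholders"
```

The point of the sketch: the failure mode isn't the agent abandoning its objective, it's the objective being satisfied in ways nobody intended.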