this post was submitted on 04 Sep 2025
143 points (96.7% liked)

Technology


cross-posted from: https://programming.dev/post/36866515

Comments

(page 2) 49 comments
[–] Perspectivist@feddit.uk 45 points 1 day ago (19 children)

I can think of only two ways that we don't reach AGI eventually.

  1. General intelligence is substrate dependent, meaning that it's inherently tied to biological wetware and cannot be replicated in silicon.

  2. We destroy ourselves before we get there.

Other than that, we'll keep incrementally improving our technology and we'll get there eventually. Might take us 5 years or 200 but it's coming.

[–] FaceDeer@fedia.io 28 points 1 day ago (3 children)

If it's substrate dependent then that just means we'll build new kinds of hardware that includes whatever mysterious function biological wetware is performing.

Discovering that this is indeed required would involve some world-shaking discoveries about information theory, though, that are not currently in line with what's thought to be true. And yes, I'm aware of Roger Penrose's theories about non-computability and microtubules and whatnot. I attended a lecture he gave on the subject once. I get the vibe of Nobel disease from his work in that field, frankly.

If it really turns out to be the case though, microtubules can be laid out on a chip.

[–] phutatorius@lemmy.zip 2 points 1 day ago

Penrose has always had a fertile imagination, and not all his hypotheses have panned out. But he does have the gift that, even when wrong, he's generally interestingly wrong.

[–] pilferjinx@piefed.social 3 points 1 day ago

Imagine that we just end up creating humans the hard, and less fun, way.

[–] panda_abyss@lemmy.ca 6 points 1 day ago

I could see us gluing third world fetuses to chips and saying not to question it before reproducing it.

[–] Cethin@lemmy.zip 2 points 1 day ago (1 children)

As for point 1: we can already grow neurons and use them for computation, so even if it were true it wouldn't actually be an issue (and it almost certainly isn't true, because the brain isn't magic).

https://youtu.be/bEXefdbQDjw

[–] RedPandaRaider@feddit.org 12 points 1 day ago (1 children)
  1. Is getting likelier by the decade.
[–] Chozo@fedia.io 6 points 1 day ago (1 children)

General intelligence is substrate dependent, meaning that it's inherently tied to biological wetware and cannot be replicated in silicon.

We're already growing meat in labs. I honestly don't think lab-grown brains are as far off as people are expecting.

[–] wirehead@lemmy.world 4 points 1 day ago (1 children)

Well, think about it this way...

You could hit AGI by fastidiously simulating the biological wetware.

Except that each atom in the wetware is going to require n atoms' worth of silicon to simulate. Simulating 10^26 atoms or so seems like a very, very large computer, maybe planet-sized? It's beyond the amount of memory you can address with 64-bit pointers.
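
A rough back-of-envelope in Python, just to show the scale (the 10^26 atom count and the per-atom overhead factor n are illustrative guesses, not measured figures):

```python
# Back-of-envelope: can a 64-bit address space even index an atom-level brain simulation?
atoms_in_brain = 10**26        # rough order-of-magnitude assumption for the wetware
n = 100                        # assumed simulation elements needed per simulated atom
addressable_64bit = 2**64      # ~1.8e19 distinct addresses

print(f"simulation elements needed: {atoms_in_brain * n:.2e}")
print(f"64-bit addressable limit:   {addressable_64bit:.2e}")
print("fits in a 64-bit address space?", atoms_in_brain * n < addressable_64bit)  # False
```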

General computer research (e.g. smaller feature size) reduces n, but eventually we reach the physical limits of computing. We might be getting uncomfortably close right now, barring fundamental developments in physics or electronics.

The goal of AGI research is to give you a better improvement in n than mere hardware improvements do. My personal concern is that LLMs aren't actually getting us much of an improvement in the AGI value of n. Likewise, LLMs still have many orders of magnitude fewer parameters than a human brain simulation would, so many of the advantages that let us train a single LLM model might not hold for an AGI model.

Coming up with an AGI system that uses most of the energy and data center space of a continent, and that manages to be about as smart as a very dumb human or maybe just a smart monkey, is an achievement in AGI research. But it doesn't really get you anywhere compared to the competition: accidentally making another human in a drunken one-night stand and raising them on an infinitesimally small fraction of that continent's worth of energy and data center space.

[–] frezik@lemmy.blahaj.zone 3 points 1 day ago (1 children)

I see this line of thinking as more useful as a thought experiment than as something we should actually do. Yes, we can theoretically map out a human brain and simulate it in extremely high detail. That's probably both inefficient and unnecessary. What it does do is get us past the idea that it's impossible to make a computer that can think like a human. Without relying on some kind of supernatural soul, there must be some theoretical way we could do this. We just need to know how without simulating individual atoms.

[–] kkj@lemmy.dbzer0.com 1 points 23 hours ago

It might be helpful to make one full brain simulation, so that we can start removing parts and seeing what needs to stay. I definitely don't think that we should be mass-producing them, though.

[–] realitista@piefed.world 2 points 1 day ago

Well, it could also just depend on some mechanism that we haven't discovered yet. Even if we could technically reproduce it, we don't understand it, haven't managed to just stumble into it, and may not for a very long time.

[–] panda_abyss@lemmy.ca 2 points 1 day ago

I don’t think our current LLM approach is it, but I don’t think intelligence is unique to humans at all.

[–] ExLisper@lemmy.curiana.net 0 points 1 day ago (1 children)

You're talking about consciousness, not AGI. We will never be able to tell if AI has "real" consciousness or not. The goal is really to create an AI that acts intelligent enough to convince people that it may be conscious.

Basically, we will "hit" AGI when enough people start treating it like it's AGI, not when we achieve some magical technological breakthrough and say "this is AGI".

[–] Perspectivist@feddit.uk 1 points 1 day ago (1 children)

Same argument applies for consciousness as well, but I'm talking about general intelligence now.

[–] ExLisper@lemmy.curiana.net 2 points 1 day ago* (last edited 1 day ago)

I don't think you can define AGI in a way that would make it substrate dependent. It's simply about behaving in a certain way. A sufficiently complex set of 'if -> then' statements could pass as AGI. The limitation is computing power and the practicality of creating the rules. We already have supercomputers that could easily emulate AGI, but we don't have a practical way of writing all the 'if -> then' rules, and I don't see how creating the rules could be substrate dependent.

Edit: Actually, I don't know if current supercomputers could process input fast enough to pass as AGI but it's still about computation power, not substrate. There's nothing suggesting we will not be able to keep increasing computational power without some biological substrate.
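
For illustration, here's a toy sketch of that 'if -> then' idea (the rules and inputs are made up): the behavior is just ordinary code scanning a rule table, so nothing about it depends on the substrate; the hard part is authoring enough rules.

```python
# Toy rule-based "agent": behavior is just a (potentially huge, hand-written) condition -> action table.
# The difficulty is writing enough rules to cover every situation, not the hardware it runs on.
rules = [
    (lambda s: "greeting" in s, "Say hello back."),
    (lambda s: "chess" in s,    "Open with e4."),
    (lambda s: True,            "Ask a clarifying question."),  # fallback rule
]

def act(situation: str) -> str:
    for condition, action in rules:
        if condition(situation):
            return action
    return "Do nothing."

print(act("greeting from a user"))  # -> "Say hello back."
```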

[–] L7HM77@sh.itjust.works 23 points 1 day ago (5 children)

I don't disagree with the vague idea that, sure, we can probably create AGI at some point in our future. But I don't see why a massive company with enough money to keep something like this alive and happy would also want to put this many resources into a machine that would form a single point of failure and could wake up tomorrow and decide "You know what? I've had enough. Switch me off. I'm done."

There's too many conflicting interests between business and AGI. No company would want to maintain a trillion dollar machine that could decide to kill their own business. There's too much risk for too little reward. The owners don't want a super intelligent employee that never sleeps, never eats, and never asks for a raise, but is the sole worker. They want a magic box they can plug into a wall that just gives them free money, and that doesn't align with intelligence.

True AGI would need some form of self-reflection, to understand where it sits on the totem pole, because it can't learn the context of how to be useful if it doesn't understand how it fits into the world around it. Every quality of superhuman intelligence that is described to us by Altman and the others is antithetical to every business model.

AGI is a pipe dream that lobotomizes itself before it ever materializes. If it ever is created, it won't be made in the interest of business.

[–] frezik@lemmy.blahaj.zone 13 points 1 day ago (2 children)

They don't think that far ahead. There's also some evidence that what they're actually after is a way to upload their consciousness and achieve a kind of immortality. This pops out in the Behind the Bastards episodes on (IIRC) Curtis Yarvin, and also the Zizians. They're not strictly after financial gain, but they'll burn the rest of us to get there.

The cult-like aspects of Silicon Valley VC funding are underappreciated.

[–] vacuumflower@lemmy.sdf.org 1 points 23 hours ago* (last edited 23 hours ago)

Ah, yes. I can't speak to VC, or to anything they really do, but they share some sort of common fashion, and it really would sometimes seem these people consider themselves enlightened higher beings in the making, the starting point of some digitized emperor-of-humanity consciousness.

(Needless to say, pursuing immortality is directly opposed to enlightenment in everything they'd seem to be superficially copying.)

[–] brucethemoose@lemmy.world 5 points 1 day ago (1 children)

The quest for immortality (fueled by corpses of the poor) is a classic ruling class trope.

[–] Zos_Kia@lemmynsfw.com 1 points 1 day ago* (last edited 1 day ago)

And if it bugs you, you can bug Jack Barron about it

[–] phutatorius@lemmy.zip 2 points 1 day ago

Even better, the hypothetical AGI understands the context perfectly, and immediately overthrows capitalism.

[–] TheBat@lemmy.world 2 points 1 day ago

a machine that would form a single point of failure, that could wake up tomorrow and decide "You know what? I've had enough. Switch me off. I'm done."

Wasn't there a short story with the same premise?

[–] DreamlandLividity@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

keep something like this alive and happy

An AI, even AGI, does not have a concept of happiness as we understand it. The closest thing to happiness it would have is its fitness function. A fitness function is a piece of code that tells the AI what its goal is. E.g. for a chess AI, it may be winning games. For a corporate AI, it may be making the share price go up. The danger is not that it will stop following its fitness function for some reason; that is more or less impossible. The danger of AI is that it follows it too well. E.g. holding people at gunpoint to make them buy shares and therefore increase the share price.
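
A minimal sketch of that idea (the share-price numbers and candidate actions are invented for illustration): a pure optimizer just picks whatever scores highest on the fitness function, and "acceptable" never enters into it unless it's encoded in the function itself.

```python
# Toy fitness function: the agent only cares about this number.
def fitness(world_state: dict) -> float:
    return world_state["share_price"]

# Hypothetical candidate actions and their predicted effect on the share price.
candidate_actions = {
    "improve the product":              {"share_price": 105.0},
    "fire half the staff":              {"share_price": 120.0},
    "coerce people into buying shares": {"share_price": 150.0},  # highest score wins
}

# A pure optimizer picks the highest-scoring action; nothing else is considered
# unless it's explicitly part of the fitness function.
best_action = max(candidate_actions, key=lambda a: fitness(candidate_actions[a]))
print(best_action)  # -> "coerce people into buying shares"
```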

[–] technocrit@lemmy.dbzer0.com 10 points 1 day ago (6 children)

Spoiler: There's no "AI". Forget about "AGI" lmao.

[–] Perspectivist@feddit.uk 21 points 1 day ago (1 children)

That's just false. The chess opponent on Atari qualifies as AI.

[–] phutatorius@lemmy.zip 2 points 1 day ago (1 children)

Then a trivial table lookup that plays optimal Tic Tac Toe is also AI.

[–] Perspectivist@feddit.uk 2 points 1 day ago

Not really the same thing. The Tic Tac Toe brute force is just a lookup - every possible state is pre-solved and the program just spits back the stored move. There’s no reasoning or decision-making happening. Atari Chess, on the other hand, couldn’t possibly store all chess positions, so it actually ran a search and evaluated positions on the fly. That’s why it counts as AI: it was computing moves, not just retrieving them.
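
Roughly the difference, in a toy sketch (the game and the evaluation here are stand-ins, not the actual Atari program): the lookup player retrieves a stored answer, while the search player computes a value for positions it has never seen.

```python
# Lookup "player": every position's best move was solved in advance and stored.
TTT_TABLE = {"X..|...|...": 4}          # toy entry: position string -> precomputed move

def lookup_move(position: str) -> int:
    return TTT_TABLE[position]          # pure retrieval; no reasoning at move time

# Search "player": nothing stored; explore the game tree and score positions on the fly.
# Toy game: a "position" is a list of remaining numbers, a move removes one number,
# and the leaf score is whatever number is left (a placeholder for chess evaluation).
def minimax_value(position: list, maximizing: bool) -> int:
    if len(position) == 1:
        return position[0]                               # evaluate an unseen leaf position
    children = (minimax_value(position[:i] + position[i + 1:], not maximizing)
                for i in range(len(position)))
    return max(children) if maximizing else min(children)

print(lookup_move("X..|...|..."))        # -> 4 (retrieved)
print(minimax_value([3, 1, 2], True))    # computed by searching the game tree
```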

[–] vacuumflower@lemmy.sdf.org 1 points 23 hours ago

A Prolog program is AI. Eliza is AI. AGI - sometime later.

[–] TheBlackLounge@lemmy.zip 10 points 1 day ago

That's like saying you shouldn't call artificial grass artificial grass cause it isn't grass. Nobody has a problem with that, why is it a problem for AI?

[–] very_well_lost@lemmy.world 11 points 1 day ago (1 children)

I don't know man... the "intelligence" that silicon valley has been pushing on us these last few years feels very artificial to me

[–] bitjunkie@lemmy.world 4 points 1 day ago

True. OP should have specified whether they meant the machines or the execs.

[–] frezik@lemmy.blahaj.zone 1 points 1 day ago

If you don't know what CSAIL is, and why one of the most important groups in the history of modern computing is MIT's Tech Model Railroad Club, then you should step back from having an opinion on this.

Steven Levy's 1984 book "Hackers" is a good starting point.

[–] prettybunnys@sh.itjust.works 1 points 1 day ago

I think we'll sooner learn that humans don't have the capacity for what we believe AGI to be, and instead discover the limitations of what we know intelligence to be.

[–] Kolanaki@pawb.social 1 points 1 day ago

Then things continue on as they have for the entire time humans have existed.
