this post was submitted on 18 Apr 2026
145 points (85.7% liked)

Technology

83929 readers
2589 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] timwa@lemmy.snowgoons.ro 1 points 19 hours ago (1 children)

The thing is, all this can be true (and I don't really understand why you're being downvoted), but it's also true that LLMs are no more evidence that we are close to AGI than Eliza was.

AGI is inevitable, but it won't come from an LLM, and all the hype in that direction from Anthropic, OpenAI et al is just so much bullshit.

The problem is, we don't need AGI to experience the catastrophic consequences; as bad or worse will be idiotic human intelligences putting very-much-not-AGI in charge of things it has no right to be in charge of, because they drank their own Kool-Aid (or rather, the investors did). That, unfortunately, is the future we are speedrunning - Skynet never needed AGI, it just needs fucking idiots to put an LLM in charge of a weapons system.

(As for AGI, my gut feeling is that it will come from the intersection of neural networks and quantum computing at scale - I'll be filling my bunker with canned goods when the latter appears to be close on the horizon...)

[–] Iconoclast@feddit.uk 0 points 19 hours ago* (last edited 16 hours ago) (1 children)

I'd say LLMs are not necessarily an indicator that we're close to AGI, but they're also not a non-indicator. Certainly more of an indicator than the invention of the steam engine was. As narrowly intelligent systems, they're getting quite advanced. We're not there yet, but I worry that the moment we actually step into the zone of general intelligence might not be as obvious as one would think.

However, I also don't think there's any basis to make the absolute claim that LLMs will never lead there, because nobody could possibly know that with that degree of certainty.

And yeah, there are multiple ways to screw things up even with narrowly intelligent AI - we don't need AGI for that.

[–] timwa@lemmy.snowgoons.ro 1 points 13 hours ago

I mean, I'm not particularly bothered about convincing anyone else, but personally I am absolutely 100% sure that no technology that is cognisant of nothing but tokens of language (entirely arbitrary human language at that, far from any fundamental ground truth in itself), and that is entirely incapable of discerning any actual meaning from that language other than which tokens appear likely to follow another, is ever, under any circumstances, going to lead to AGI.
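("Which tokens appear likely to follow another" can be made concrete with a toy sketch: a bigram frequency table over a tiny made-up corpus. Real LLMs learn neural representations over vast data rather than raw counts, but the training objective has the same shape - estimate P(next token | context) and emit the likely continuation.)

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which token follows which in a
# tiny corpus, then predict the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token):
    # Return the most common successor of `token` seen in the corpus.
    return follows[token].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

(Nothing in this sketch knows what a cat or a mat *is* - which is exactly the point being made above.)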

Yann LeCun is probably heading down a more realistic path to AGI with his world models - but for as long as my cat has a few orders of magnitude more synapses than Anthropic's most world-beating model has parameters, I'm not going to get too stressed about that either.