this post was submitted on 18 Apr 2026
145 points (85.7% liked)

Technology

83929 readers
2589 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] Iconoclast@feddit.uk 0 points 19 hours ago* (last edited 16 hours ago) (1 children)

I'd say LLMs are not necessarily an indicator that we're close to AGI, but they're not a non-indicator either. Certainly more of an indicator than the invention of the steam engine was. As narrowly intelligent systems, they're getting quite advanced. We're not there yet, but I worry that the moment we actually step into the zone of general intelligence might not be as obvious as one would think.

However, I also don't think there's any basis to make the absolute claim that LLMs will never lead there, because nobody could possibly know that with that degree of certainty.

And yeah, there are multiple ways to screw things up even with narrowly intelligent AI - we don't need AGI for that.

[–] timwa@lemmy.snowgoons.ro 1 point 13 hours ago

I mean, I'm not particularly bothered about convincing anyone else, but personally I am absolutely 100% sure that no technology that is cognisant of nothing but tokens of language (entirely arbitrary human language at that, far from any fundamental ground truth in itself), and that is entirely incapable of discerning any actual meaning from that language beyond which tokens are likely to follow which, is ever, under any circumstances, going to lead to AGI.
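To make the "tokens following tokens" point concrete, here's a minimal sketch of the idea at its crudest: a toy bigram model (hypothetical corpus, names my own) that predicts the next token purely from co-occurrence counts. It has no representation of meaning at all, only which token has followed which:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; the "model" sees only token adjacency,
# nothing about what any of the words refer to.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token (bigram counts).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent follower of `token`, or None if unseen."""
    followers = counts[token]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" — it followed "the" most often
```

Real LLMs replace the count table with a learned distribution over a huge context, but the objective is the same shape: score likely next tokens, nothing more.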

Yann LeCun is probably heading down a more realistic path to AGI with his world models - but for as long as my cat has a few orders of magnitude more synapses than Anthropic's most world-beating model has parameters, I'm not going to get too stressed about that either.