this post was submitted on 15 Oct 2024
494 points (96.4% liked)

Technology

[–] jabathekek@sopuli.xyz 206 points 1 month ago (27 children)
[–] WhatAmLemmy@lemmy.world 85 points 1 month ago (24 children)

The results of this new GSM-Symbolic paper aren't completely new in the world of AI research. Other recent papers have similarly suggested that LLMs don't actually perform formal reasoning and instead mimic it with probabilistic pattern-matching of the closest similar data seen in their vast training sets.

WTF kind of reporting is this, though? None of this is recent or new at all, like in the slightest. I am shit at math, but have a high-level understanding of statistical modeling concepts, mostly as of a decade ago, and even I knew this. I recall a stats PhD describing models as "stochastic parrots": nothing more than probabilistic mimicry. It was obviously no different the instant LLMs came on the scene. If only tech journalists bothered to do a superficial amount of research, instead of being spoon-fed spin from tech bros with a profit motive...
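The "probabilistic mimicry" point can be illustrated with a toy bigram model. This is a hypothetical minimal sketch (real LLMs use neural networks, not frequency tables), but the core idea is the same: predict the next item from patterns observed in training data, with no reasoning involved.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny training
# corpus, then always predict the most frequently observed follower.
# Pure pattern lookup -- no understanding, no reasoning.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict("the"))   # "cat", because "the cat" occurred most often
print(predict("xyzzy")) # None -- the model has nothing for unseen input
```

Scale the corpus up to the internet and the table up to billions of parameters and the mimicry gets eerily good, but the mechanism is still "closest similar data seen in training", not formal reasoning.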

[–] ObviouslyNotBanana@lemmy.world 45 points 1 month ago (19 children)

It's written as if they literally expected AI to be self-reasoning and not just a mirror of the bullshit that is put into it.

[–] Sterile_Technique@lemmy.world 39 points 1 month ago (2 children)

Probably because that's the common expectation due to calling it "AI". We're well past the point of putting the lid back on that can of worms, but we really should have saved that label for... y'know... intelligence, that's artificial. People think we've made an early version of Halo's Cortana or Star Trek's Data, and not just a spellchecker on steroids.

The day we make actual AI is going to be a really confusing one for humanity.

[–] JDPoZ@lemmy.world 11 points 1 month ago (2 children)

…a spellchecker on steroids.

Ask literally any of the LLM chat bots out there still using any headless GPT instances from 2023 how many Rs there are in “strawberry,” and enjoy. 🍓

[–] semperverus@lemmy.world 11 points 1 month ago

This problem is due to the fact that the AI isn't using English words internally; it's tokenizing. There are no Rs in {35006}.
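The token point is easy to demonstrate with a toy subword tokenizer. The vocabulary and token IDs below are invented for illustration (real tokenizers like BPE use learned merge rules), but the effect is the same: the model receives opaque IDs, not letters.

```python
# Hypothetical subword vocabulary -- these IDs are made up for illustration.
vocab = {"straw": 35006, "berry": 19772}

def tokenize(text):
    """Greedy longest-prefix-match tokenization against the toy vocab."""
    tokens = []
    while text:
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece):
                tokens.append(vocab[piece])
                text = text[len(piece):]
                break
        else:
            raise ValueError(f"no token for {text!r}")
    return tokens

print(tokenize("strawberry"))  # [35006, 19772]
# The model sees [35006, 19772]. The letter 'r' appears zero times in that
# sequence, even though "strawberry" contains three -- so counting letters
# requires information the model was never directly given.
```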

[–] Sterile_Technique@lemmy.world 5 points 1 month ago (1 children)

That was both hilarious and painful.

And I don't mean to always hate on it - the tech is useful in some contexts, I just can't stand that we call it 'intelligence'.

[–] Pieisawesome@lemmy.world 3 points 1 month ago

LLMs don't see words, they see tokens. They were always just guessing.
