this post was submitted on 15 Oct 2024
494 points (96.4% liked)
Technology
I still believe they have the ability to reason, to a very limited extent. Everyone says they're just very sophisticated parrots, but something emergent is going on. These AIs need to have a world model inside of themselves to be able to parrot things as correctly as they currently do (yes, including the hallucinations and the incorrect answers). Sure, they use tokens instead of real dictionary words, which leads to things like the strawberry problem, but just because they're not nearly as sophisticated as we are doesn't mean there is no reasoning happening.
We are not special.
What's the strawberry problem? Does it think it's a berry? I wonder why
I think the strawberry problem is asking it how many R's are in "strawberry". Current AI gets it wrong almost every time.
That's because they don't see the letters, but tokens instead. A token can be a single letter, but it's usually bigger. So what the LLM sees might be something like
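Roughly like this, in Python (the split into `["str", "aw", "berry"]` is just a hypothetical tokenization for illustration; real tokenizers split words differently depending on the model):

```python
# Hypothetical tokenization of "strawberry" -- real BPE tokenizers
# produce different splits, this is only to show the idea.
tokens = ["str", "aw", "berry"]

# A human counts R's by looking at the letters directly:
letter_count = "strawberry".count("r")  # 3

# The model never sees those letters -- it sees opaque token IDs.
# To answer, it would effectively need to have memorized how many
# R's are inside each token, rather than reading them off:
per_token = [t.count("r") for t in tokens]  # [1, 0, 2]
total = sum(per_token)  # 3, but only if the per-token counts are "known"
```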
When you see it like that, it's more obvious why LLMs struggle with it.