this post was submitted on 07 Aug 2024
129 points (93.3% liked)
Technology
Maybe it can. If you find a way to port everything to text by hooking in different models, the LLM might be able to reason about everything you throw at it. Who even defines how AGI should be implemented?
Except LLMs don't actually have real reasoning capacity. Hooking in different models that translate more of the world into text could give an LLM a broader domain, but not a new ability beyond its architecture. That might make it more convincing, but it would still fail in the same ways it currently does.
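The "port everything to text" idea being debated amounts to putting captioning and transcription models in front of the LLM, so it only ever sees a flat string. A minimal sketch, assuming hypothetical stand-in adapters (none of these functions are a real API):

```python
# Sketch of the "port everything to text" architecture discussed above.
# The adapters below are hypothetical stand-ins, not real models.

def describe_image(image_bytes: bytes) -> str:
    # Stand-in for an image-captioning model (e.g. a vision encoder).
    return f"an image of {len(image_bytes)} bytes"

def transcribe_audio(audio_bytes: bytes) -> str:
    # Stand-in for a speech-to-text model.
    return f"a recording of {len(audio_bytes)} bytes"

ADAPTERS = {"image": describe_image, "audio": transcribe_audio}

def to_text(kind: str, payload: bytes) -> str:
    """Route any modality through its adapter so the LLM sees only text."""
    return ADAPTERS[kind](payload)

def build_prompt(inputs: list[tuple[str, bytes]], question: str) -> str:
    # Everything the LLM "reasons" about is this flat string --
    # which is the crux of the disagreement: is text alone enough?
    context = "\n".join(to_text(kind, data) for kind, data in inputs)
    return f"{context}\n\nQuestion: {question}"
```

This widens what the model can talk about, but the downstream LLM is still doing next-token prediction over text, which is the commenter's point.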
You're doing reasoning based on chemical reactions. Who says it can't do reasoning based on text? Who says it's not doing that already in some capacity? Can you prove that?
Is language conscious? Is it possible to "encode" human thinking into the media we produce?
Humans certainly "decode" ideas, knowledge, trains of logic and more from media, but does that mean the media contains the components of consciousness?
Is it possible to produce a machine that "decodes" not the content of media, but the process through which it was produced? Does media contain the latter in the first place?
How can you tell the difference if it does?
The more I learn about how modern machine learning actually works, the more certain I become that even if having a machine "decode" human media is the path to AGI, LLMs ain't it.
It just doesn't work in a way that would allow for a mind to arise.
Are atoms?
I don't know if LLMs of a large enough size can achieve (or sufficiently emulate) consciousness, but I do know that we barely know anything about consciousness, let alone its limits.
Saying "we don't know, and it's complicated, therefore there's a chance, maybe, depending" is barely the beginning of an argument.