This is exactly what I'm talking about when I argue with people who insist that an LLM is super complex and totally is a thinking machine just like us.
It's nowhere near the complexity of the human brain. We are several orders of magnitude more complex than the largest LLMs, and our complexity changes with each pulse of thought.
The brain is amazing. This is such a cool image.
I agree, but it isn't so clear-cut. Where is the cutoff on the complexity required? As it stands, both our brains and the most complex AIs are pretty much black boxes. It's impossible to say that this system we know vanishingly little about is or isn't fundamentally the same as that system we know vanishingly little about, just on a different scale. The first AGI will likely still have most people saying the same things about it: "it isn't complex enough to approach a human brain." But it doesn't need to equal a brain to still be intelligent.
It's demonstrably several orders of magnitude less complex. That's mathematically clear-cut.
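To put rough numbers on that (my own back-of-envelope figures, not from the thread): the human brain is commonly estimated at ~86 billion neurons and on the order of 100 trillion synapses, while the largest publicly described LLMs sit around a hundred billion to a trillion parameters. Treating one synapse as loosely comparable to one parameter - a crude analogy at best - a quick sketch:

```python
import math

# Rough, commonly cited estimates - assumptions for illustration only.
BRAIN_SYNAPSES = 1e14       # ~100 trillion synapses in a human brain
LARGEST_LLM_PARAMS = 1e12   # ~1 trillion parameters, upper end of public models

# Orders of magnitude separating the two, crudely equating
# one synapse with one trainable parameter.
gap = math.log10(BRAIN_SYNAPSES / LARGEST_LLM_PARAMS)
print(f"Brain leads by ~{gap:.0f} orders of magnitude")  # ~2 here; ~3-4 vs smaller models
```

And even that understates the gap, since a synapse is a dynamic electrochemical structure rather than a single fixed number - which is the "complexity changes with each pulse of thought" point above.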
Philosophical question without an answer - We do know that it's nowhere near the complexity of the brain.
There are many things we cannot directly interrogate which we can still describe.
It's entirely possible to say that, because we know the fundamental structures of each, even if we haven't mapped the entirety of either's complexity. We know they're fundamentally different - their basic behaviors are fundamentally different. That's what fundamentals are.
Speculation, but entirely possible. We're nowhere near that, though. There's nothing even approaching intelligence in LLMs. We've never seen emergent behavior or evidence of an id or ego. There are no ongoing thought processes, no rationality - because that's not what an LLM is. An LLM is a static model of raw text inputs and the statistical associations thereof. Any "knowledge" encoded in an LLM exists entirely in that encoding - it cannot and will not ever generate anything that wasn't trained into it.
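To make the "static model of statistical associations" point concrete, here's a toy sketch (my own illustration - real LLMs are transformer networks over subword tokens, but the frozen-statistics property is the same): a bigram model whose "weights" are fixed after training and which can only re-emit continuations derived from its training text.

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in the
# training text, then sample from those frozen statistics.
def train(text):
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts  # the "model" is nothing but association counts

def generate(model, word, length=8):
    out = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:  # no stored statistics for this word: dead end
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        out.append(word)
    return " ".join(out)

corpus = "the brain is complex and the brain is dynamic and the model is static"
model = train(corpus)
print(generate(model, "the"))
# Every word it emits came from the corpus, the counts never change after
# train() returns, and no state persists between calls to generate().
```

A production LLM is vastly larger and its associations are learned rather than counted, but at inference time its weights are just as fixed: generation is sampling from stored statistics, not an ongoing thought process.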
It's possible that an LLM might represent a single, tiny module of AGI in the future. But that module will be no more the AGI itself than you are your cerebellum.
First thing I think we agree on.