We'll almost certainly get to AGI eventually, but not through LLMs. I think any AI researcher could tell you this; they just can't tell the investors.
Once we get to AGI, it'll be nice to have an efficient LLM so that the AGI can dream. As a courtesy to it.
Calling the errors "hallucinations" is kind of misleading, because it implies there's a base of real knowledge that false stuff occasionally gets mixed into. That's not how LLMs work.
LLMs are purely about associating words with other words. The models are just massive enough that they can attach a lot of context to those associations and seem conversational about almost any topic, but there's no depth to any of it. Where it seems like there is, it's only because the training data got very specific, which is bound to happen when a model is trained on every online conversation its owners (or rather, people hired by people hired by its owners) could get their hands on.
All it does is take the tokens you provided plus the ones it has already predicted, add a bit of randomness, and pick the most likely token to come next, then repeat until it predicts an "end" token.
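In rough pseudocode terms, the whole loop is something like this (a minimal sketch; `next_token_probs` is a made-up stand-in for the model's forward pass, and the toy vocabulary is obviously not real):

```python
import random

END = "<end>"

def next_token_probs(tokens):
    # Placeholder: a real model returns a probability for every token in its
    # vocabulary, conditioned on all the tokens seen so far.
    return {"the": 0.5, "cat": 0.3, END: 0.2}

def generate(prompt_tokens, max_len=50):
    tokens = list(prompt_tokens)
    while len(tokens) < max_len:
        probs = next_token_probs(tokens)
        # "A bit of randomness": sample from the distribution instead of
        # always taking the single most likely token.
        choices, weights = zip(*probs.items())
        tok = random.choices(choices, weights=weights, k=1)[0]
        if tok == END:
            break
        tokens.append(tok)
    return tokens

print(generate(["a", "cat", "sat"]))
```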
Early on when using LLMs, I'd ask them how they did things or why they failed at certain things. ChatGPT would answer, but only because it was trained on text that described what it can and can't do. Its capabilities don't actually include any self-reflection or self-understanding, or any understanding at all, and the text it was trained on doesn't even have to reflect how it really works.
Yeah you're right, even in my cynicism I was still too hopeful for it LOL