this post was submitted on 17 May 2024
503 points (94.8% liked)
you are viewing a single comment's thread
It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn't exist in the physical world. Humans hallucinate too - all the time. It's just that our approximations are usually correct, so we stop calling them hallucinations. Realistically, the signals coming from our feet take longer to reach the brain than those from our eyes, so the brain has to predict information to stitch together a coherent experience. It's also why we don't notice our own blinks, or see the blind spot each of our eyes has.
AI, being a more primitive version of our brains, will hallucinate far more - especially because it cannot verify anything against the real world and is limited to the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.
Hallucinations shouldn't be treated as a bug. They are a feature - just not one the big tech companies wanted.
When humans hallucinate on purpose (rather than due to illness), we get imagination and dreams: fuel for fiction, but not for reality.
Could not have said it better. The whole reason contemporary programs haven't been able to adapt to the ambiguity of real-world situations is that they require rigidly defined parameters to function. LLMs and AI make assumptions and act on shaky information - that's the whole point. If we waited for complete understanding of every circumstance and topic, we'd be trapped in perpetual indecision. But without the ability to test their assumptions against the real world, LLMs will remain like children.