Anyone who has knowledge about a specific subject says the same: LLMs are constantly incorrect and they hallucinate.
Everyone else thinks it looks right.
That’s not what the study showed, though. The LLMs were right over 98% of the time…when given the full situation by a “doctor”. The problem was normal people trying to self-diagnose, who didn’t know which details were important.
Hence why studies are incredibly important. Even with the text of the study right in front of you, you assumed a conclusion the study didn’t actually come to.
Yep, it's why C-levels think it's the Holy Grail. Everything that comes out of their own mouths is bullshit as well, so they don't see the difference.
A talk on LLMs I was listening to recently put it this way:
If we hear the words of a five-year-old, we assume the knowledge of a five-year-old behind those words, and treat the content with due suspicion.
We're not adapted to something with the "mind" of a five-year-old speaking to us in the words of a fifty-year-old, and thus are more likely to assume competence just based on language.
LLMs don't have the mind of a five-year-old, though.
They don't have a mind at all.
They simply string words together according to statistical likelihood, without having any notion of what the words mean, or what words or meaning are; they don't have any mechanism with which to have a notion.
They aren't any more intelligent than old Markov chains (or than your average rock); they're simply better at producing random text that looks like it could have been written by a human.
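For anyone who hasn't seen one, here's a minimal sketch of the kind of Markov chain being compared to, a word-level one in Python (the toy corpus and the helper names are just made up for illustration):

```python
import random
from collections import defaultdict

# Toy "training data": the whole model is just a table of which word
# follows which, with frequencies taken straight from this corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    word = start
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        # Pick the next word purely by how often it followed this one;
        # there is no notion of meaning anywhere in here.
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

It only ever looks one word back, so the output falls apart quickly; an LLM conditions on a much longer context, but the basic move of "pick a statistically likely next token" is the same.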
What gives you the confidence that you don't do the same?
I am aware of that, hence the scare quotes. But you're correct, that's where the analogy breaks down. Personally, I prefer to liken them to parrots, mindlessly reciting patterns they've found in somebody else's speech.
It is insane to me how anyone can trust LLMs when their information is incorrect 90% of the time.