this post was submitted on 26 Feb 2026
134 points (89.0% liked)
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech-related news or articles.
- Be excellent to each other!
- Mod-approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below are allowed; this includes using AI responses and summaries. To ask whether your bot can be added, please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
you are viewing a single comment's thread
view the rest of the comments
Or pure randomness, but the spirit of your point is sound. And if it is randomness, the output may be unique, but the utility of that result may be zero.
100% AGREE. LLMs are not "thinking". LLMs are NOT the HAL 9000 from the movie 2001: A Space Odyssey.
100% AGREE.
100% agree. The degeneration is already occurring: bad LLM output is being fed back in as authoritative training data, and confidently wrong answers end up presented as truth. Critical thinking seems to have become an endangered species over the last 20 years, and I'm really worried that people trust LLM chatbots completely and never challenge what they output, instead accepting it as fact (and acting on those wrong answers!).
I think we already have some of the pieces that will make AI more trustworthy in the future. Grounding can go part of the way toward making today's LLMs more trustworthy: if an LLM claims something as fact, it should be able to produce a citation that supports it (from outside its own output), and that source can then be evaluated critically. Today's grounding doesn't go far enough, though. An LLM will say "I got that from HERE" and simply hand you a document. It won't show the page or line of text and the supporting argument that justifies its stated output. It can't do that today, because what I just described is reasoning, which an LLM is NOT capable of. So we wait for true AGI instead.
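To make that concrete, here is a minimal sketch of what span-level grounding could look like, as opposed to today's document-level "I got that from HERE". All names and the toy corpus are hypothetical illustrations, not any real grounding API: the point is only that each claim carries the exact quoted passage, so a human can check the supporting text instead of being handed a bare document link.

```python
# Hypothetical sketch of span-level citations (not a real library's API).
from dataclasses import dataclass

@dataclass
class Citation:
    doc_id: str   # which source document the claim came from
    page: int     # page (or section) inside that document
    quote: str    # the exact passage that supports the claim

@dataclass
class GroundedClaim:
    text: str                 # the claim the model asserted
    citations: list[Citation]

def span_is_verifiable(claim: GroundedClaim, corpus: dict[str, str]) -> bool:
    """A claim is only as trustworthy as its weakest citation: every quoted
    span must literally appear in the document it points to."""
    return bool(claim.citations) and all(
        c.doc_id in corpus and c.quote in corpus[c.doc_id]
        for c in claim.citations
    )

# Toy corpus and claim, purely for illustration.
corpus = {"doc-42": "Page 3: The melting point of gallium is 29.76 C."}
claim = GroundedClaim(
    text="Gallium melts just below body temperature.",
    citations=[Citation("doc-42", 3, "The melting point of gallium is 29.76 C.")],
)
print(span_is_verifiable(claim, corpus))  # True: a human can check the quote directly
```

The design point is that verification becomes a simple check a reader (or a script) can perform against the cited passage, rather than trust in the model's own output.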