sigh this isn't how any of this works. Repeat after me: LLMs. ARE. NOT. INTELLIGENT. They have no reasoning ability and no intent. They are parroting statistically-likely sequences of words based on how often those sequences appear in their training data. It is pure folly to assign any kind of agency to them. This is speculative nonsense with no basis in actual technology. It's purely in the realm of science fiction.
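To make the "statistical parrot" point concrete, here's a toy sketch of the idea (my own hypothetical example, nothing from the article): a word-bigram model that picks each next word purely by how often it followed the previous word in the training text. Real LLMs are transformer networks over subword tokens, not bigram tables, but the training signal is the same kind of next-token statistics.

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy example: a word-bigram "statistical parrot".
# The next word is chosen only by how often it followed the previous
# word in the training text; no reasoning, no intent.
training_text = (
    "the cat sat on the mat the cat chased the dog "
    "the dog sat on the rug"
)

# Count how often each word follows each other word.
counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample a statistically-likely successor of `prev`."""
    followers = counts[prev]
    if not followers:  # dead end: word never seen with a successor
        return None
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

# Generate a "natural-sounding" sequence frequency by frequency.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat the dog sat"
```

Nothing in there reasons or intends anything; it only reproduces frequencies, and the same holds at LLM scale, just with vastly bigger statistics.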
Claims like this just create more confusion and lead to people saying things like “LLMs aren’t AI.”
LLMs are intelligent - just not in the way people think.
Their intelligence lies in their ability to generate natural-sounding language, and at that they’re extremely good. Expecting them to consistently output factual information isn’t a failure of the LLM - it’s a failure of the user’s expectations. LLMs are so good at generating text, and so often happen to be correct, that people start expecting general intelligence from them. But that’s never what they were designed to do.
So they're not intelligent; they just sound like they're intelligent... Look, I get it: if we don't define these words, it's really hard to communicate.
It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.