this post was submitted on 08 Apr 2026
235 points (97.6% liked)
Technology
OK, technically you are correct. Still, they are lies, or let's call it disinformation or propaganda, whether the output is controlled by the machine itself having a mind (which of course is sci-fi) or by those who control the machine.
What you're calling lies are false positives. To lie you have to know the truth. AIs are ignorant. They don't know what anything is; all they "know" is mathematical patterns in 1s and 0s.
They would only be lies if Google engineers explicitly overrode the model to output the false information. What most implementations of LLMs are is weaponized incompetence, for profit. Capitalists know they output false information, and they don't care, because their only goal is profit and power.
If Google knows it outputs falsehoods and lets it continue, it becomes purposeful. That makes them lies in my book.
If a newspaper prints lies, you don't say the physical piece of pulped-up tree you are holding is lying to you; you say the author is.
If it's shown to the newspaper that those are lies and they keep printing them, then yes, I do call them liars as well. Whatever you want to call it, you must admit they are culpable for spreading disinformation.
No, you are proving my point here. You say 'they' as in the publishers/owners/printers of the newspaper. You don't blame 'it' the literal, physical piece of paper you are holding in your hands.
In the same way that you would not say a clock was lying to you if it displays the wrong time.
OK, so I don't blame the GPUs crunching out the LLM lies, or the HTML on the page; I blame Google, the company that programmed them.
The point is, the LLM is not 'lying' to you. It's showing you information. It doesn't 'know' whether the information is true or not. It also doesn't 'care', because it is a statistical model and is incapable of those things. And if you scroll back to my initial point, I said "technically, it's not lying, because lying requires intent to deceive, and LLMs don't have intent".
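To make the "statistical model" point concrete, here is a toy sketch of next-token prediction. The table of counts and all token names are invented for illustration; a real LLM is a neural network over billions of parameters, not a lookup table, but the principle is the same: it samples what is statistically likely to come next, with no step anywhere that checks whether the result is true.

```python
import random

# Toy "language model": for each two-word context, a frequency table of
# which tokens followed it in the training text. Nothing here encodes
# truth, only how often each continuation was seen. (All data invented.)
counts = {
    ("the", "sky"): {"is": 8, "was": 2},
    ("sky", "is"): {"blue": 7, "green": 3},  # "green" shows up purely
                                             # because the training text did
}

def next_token(context):
    """Sample a continuation weighted by frequency alone."""
    table = counts[context]
    tokens = list(table)
    weights = [table[t] for t in tokens]
    # A false continuation with high weight is emitted just as readily as
    # a true one: there is no fact-checking step being skipped or bypassed,
    # because no such step exists in the model at all.
    return random.choices(tokens, weights=weights)[0]
```

Roughly 30% of the time this toy model will claim the sky is green, and in doing so it is neither lying nor mistaken in any mental sense; it is doing exactly what it was built to do, which is the crux of the "no intent" argument above.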
What's the point of making this semantic difference though?