this post was submitted on 08 Apr 2026
254 points (97.0% liked)
Technology
It doesn’t really matter whether it’s the Machine or the creator.
The point is, AIs can be programmed to lie, much like Grok does. And if they can be programmed to lie, then they are not reliable for anything at all. We are going through a decent period where AI can be used for a few things reliably, but even these will surely be enshittified.
It matters because every time we anthropomorphize generative AI LLMs we reinforce people's belief in their ability to tell lies or truths.
That belief is what leads to trust in them, and to things like AI psychosis.
An interesting way to look at it is that AI also can't tell the truth.
What it does is generate the next likely word or words, whichever continuation is most statistically likely given its training data. So it doesn't know anything. It doesn't tell the truth. It doesn't tell lies. It isn't an entity. The people behind it are allowing it to present information as factual, and we have no reason to trust them.
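To make that concrete, here's a deliberately tiny, made-up sketch of what "picking the next likely word" means. This is nowhere near a real LLM (it's just a bigram lookup table), and every word and probability in it is invented for illustration, but the shape of the process is the point: the system only ever picks the highest-probability continuation, and "true" or "false" never enters into it.

```python
# Toy illustration only -- not any real model or library.
# The probability table below is completely made up.
toy_next_word_probs = {
    ("the", "sky"): {"is": 0.90, "was": 0.08, "tastes": 0.02},
    ("sky", "is"): {"blue": 0.60, "green": 0.25, "falling": 0.15},
}

def next_word(prev_two):
    """Return the most statistically likely continuation -- nothing more."""
    candidates = toy_next_word_probs.get(prev_two, {})
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

words = ["the", "sky"]
for _ in range(2):
    w = next_word((words[-2], words[-1]))
    if w is None:
        break
    words.append(w)

print(" ".join(words))  # "the sky is blue" -- plausible, but truth never entered into it
```

If whoever built the table had weighted "green" higher, the same loop would print "the sky is green" just as confidently. That's the sense in which it can neither lie nor tell the truth.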
Oooh, philosophy! I disagree. I think that if a person programs an LLM to give disinformation, that's all it is. Lying is giving misinformation while knowing it's false, intending to deceive. The LLM doesn't know what's true or false. It doesn't intend anything, because it is not a conscious entity. The person who programmed it can be lying by disseminating false information; the LLM cannot, any more than a broken clock or thermometer is 'lying' about the time or temperature.
I am trying to get away from the philosophy actually 😅 in the end what matters is how these tools are being used, not so much their inherent characteristics.
Can you envision a world where AI chatbots are used to steer you toward certain political beliefs (e.g. capitalism good, socialism bad), product recommendations are made based on how much brands are willing to pay for ad placements, and your psychological state is measured and molded to the interests of the AI owner? I can. It's also already happening.