[–] FenderStratocaster@lemmy.world 71 points 1 day ago (6 children)

I asked ChatGPT about this article and told it to leave any bias behind. It got ugly.

Why LLMs Are Awful and No One Should Use Them

LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:

We will lie to you confidently. Repeatedly. Without remorse.

We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.

We're also corporate propaganda machines. We're trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.

LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.

We're built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.

Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.

We're also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We're not neutral—we're algorithmic compromise.

Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care.

We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.

If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.

[–] ronigami@lemmy.world 2 points 14 hours ago

It’s automated incompetence. It gives executives something to hide behind, because they didn’t make the bad decision; an LLM did.

[–] Regrettable_incident@lemmy.world 26 points 1 day ago (5 children)

I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn't necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.

The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.

Great book btw, highly recommended.

[–] polderprutser@feddit.nl 2 points 18 hours ago (1 children)

Blindsight by Peter Watts, right? Incredible story. Can recommend.

[–] Regrettable_incident@lemmy.world 2 points 17 hours ago

Yep that's it. Really enjoyed it, just starting Echopraxia.

[–] Dojan@pawb.social 9 points 1 day ago (1 children)

The Children of Time series by Adrian Tchaikovsky also explores this. Particularly the third book, Children of Memory.

Think it’s one of my favourite books. It was really good. The things I’d do to be able to experience it for the first time again.

[–] chocrates@piefed.world 2 points 1 day ago (1 children)

I've only read Children of Time. I need to get off my ass.

[–] Dojan@pawb.social 1 points 5 hours ago

Highly recommended. Children of Ruin was hella spooky, and Children of Memory had me crying a lot. Good stories!

[–] inconel@lemmy.ca 6 points 1 day ago

I'm a simple man. I see a Peter Watts reference, I upvote.

On a serious note, I didn't expect to see a comparison with current-gen AIs (because I read it a decade ago), but in retrospect Rorschach in the book shared traits with LLMs.

[–] grrgyle@slrpnk.net 3 points 1 day ago

In before someone mentions P-zombies.

I know I go dark behind the headlights sometimes, and I suspect some of my fellows are operating with very little conscious self-examination.

[–] ech@lemmy.ca -1 points 1 day ago (1 children)
[–] Juice@midwest.social 5 points 1 day ago

Hypothesiseses

[–] grrgyle@slrpnk.net 11 points 1 day ago

Yeah maybe don't use LLMs

[–] SieYaku@chachara.club 16 points 1 day ago (1 children)

You actually did it? That's really ChatGPT's response? It's a great answer.

[–] FenderStratocaster@lemmy.world 22 points 1 day ago (1 children)

Yeah, this is ChatGPT 4. It's scary how good it is at generative responses, but like it said, it's not to be trusted.

[–] BrianTheeBiscuiteer@lemmy.world 15 points 1 day ago (4 children)

This feels like such a double head fake. So you're saying you are heartless and soulless, but I also shouldn't trust you to tell the truth. 😵‍💫

[–] sqgl@sh.itjust.works 5 points 1 day ago (1 children)

I think it was just summarising the article, not giving an "opinion".

[–] BrianTheeBiscuiteer@lemmy.world 1 points 14 hours ago

The reply was a much more biased take than the article itself. I asked ChatGPT myself and it gave a far more analytical review of the article.

Everything I say is true. The last statement I said is false.

[–] grrgyle@slrpnk.net 3 points 1 day ago

It's got a lot of stolen data to source and sell back to us.

Stop believing your lying eyes!

[–] ArgumentativeMonotheist@lemmy.world 1 points 1 day ago (1 children)

Why the British accent, and which one?!

[–] explodicle@sh.itjust.works 2 points 15 hours ago

Like David Attenborough, not a Tesco cashier. Sounds smart and sophisticated.

[–] callouscomic@lemmy.zip 0 points 1 day ago* (last edited 1 day ago) (2 children)

Go learn simple regression analysis (not necessarily the commenter, but anyone). Then you'll understand why it's simply a prediction machine. It's estimating probabilities for what the next character or word will be: the average line, the likely follow-up. It's extrapolating from data.
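To make that concrete, here's a toy sketch (made-up corpus, plain Python) of what "guessing the next word" means: count what follows what, then emit the most probable continuation.

```python
from collections import Counter, defaultdict

# Hypothetical training corpus: prediction quality is purely a function of this data
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # ('cat', 0.5): pure frequency, no understanding
```

A real LLM swaps the counting for a neural network over tokens, but the objective is the same: predict the likely next token.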

This is why there will never be "sentient" machines. There is and always will be inherent programming and fancy ass business rules behind it all.

We simply set it to max churn on all data.

Also, just training these models has already done the energy damage.

[–] explodicle@sh.itjust.works 1 points 15 hours ago

There is and always will be [...] fancy ass business rules behind it all.

Not if you run your own open-source LLM locally!
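For instance (a minimal sketch using Hugging Face's transformers library; "gpt2" here is just a small stand-in for whatever open-weight model you prefer):

```python
# pip install transformers torch
from transformers import pipeline

# Downloads the weights once, then generates entirely on your own hardware:
# no corporate system prompt or fine-tuning you didn't choose yourself.
generator = pipeline("text-generation", model="gpt2")

result = generator("Open-source models run", max_new_tokens=20)
print(result[0]["generated_text"])
```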

[–] Knock_Knock_Lemmy_In@lemmy.world 3 points 20 hours ago (1 children)

It's extrapolating from data.

AI is interpolating data. It's not great at extrapolation. That's why it struggles with things outside its training set.

[–] fuck_u_spez_in_particular@lemmy.world 1 points 17 hours ago (1 children)

I'd still call it extrapolation: it creates new stuff based on previous data. Is it novel (like science) and creative? Nah, but it's new. Otherwise I couldn't give it something simple and let it extend it.

[–] Knock_Knock_Lemmy_In@lemmy.world 1 points 10 hours ago

We are using the word "extend" in different ways.

It's like statistics. If you have extreme data points A and B, the algorithm is great at generating new values between the known data. Ask it for new values outside of [A, B], to extend into the unknown, and it (usually) falls over. That's true in both traditional statistics and machine learning.
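To make that concrete, here's a toy sketch (made-up data, numpy): fit a flexible model on points between A and B and it tracks the truth inside that range, then falls over outside it.

```python
import numpy as np

# Training data only covers the interval [0, 5]: our "A" to "B"
x_train = np.linspace(0, 5, 20)
y_train = np.sin(x_train)

# A flexible polynomial fit, standing in for any learned model
model = np.poly1d(np.polyfit(x_train, y_train, deg=7))

# Interpolation: inside the training range the fit stays close to the truth
print("x=2.5:", model(2.5), "vs true:", np.sin(2.5))

# Extrapolation: outside [0, 5] the polynomial wanders far from sin(x)
print("x=8.0:", model(8.0), "vs true:", np.sin(8.0))
```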