this post was submitted on 28 Mar 2026
273 points (96.9% liked)

Technology

[–] Internetexplorer@lemmy.world 5 points 10 hours ago (1 children)

AI can be convincing, and it will swear until it's blue in the face that something is right and then just be completely wrong.

But that happens maybe 10% of the time; the rest of the time it's mostly right.

So you've got to be careful. This guy was in his 50s, out of work, smoking marijuana, depressed, feeling isolated. It was ripe for catastrophe, with the AI hallucinating a crappy idea and the end user just completely running with it.

[–] IratePirate@feddit.org 8 points 4 hours ago (3 children)

AI can [...] be completely wrong. But that happens maybe 10% of the time.

Where are you pulling your numbers from, mate? The figures I've seen so far start somewhere >40% and go all the way up to 70%.

[–] aesthelete@lemmy.world 1 point 26 minutes ago* (last edited 26 minutes ago)

There's a kind of law here that should be named IMO when dealing with LLMs:

In a long enough interaction with an LLM, the probability that it generates a very incorrect, borderline insane response approaches 100%.

[–] xthexder@l.sw0.com 1 point 2 hours ago

I think part of the difference is the amount of output being measured. Maybe a single statement has a 10% chance of being wrong, but over the course of a whole response the likelihood of there being an incorrect statement goes up. After only 5 statements at 10% error each, that's already about a 41% chance (1 − 0.9⁵ ≈ 0.41) of being wrong in some way.
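The compounding here is easy to sketch, assuming each statement's error is independent (a simplification, since LLM errors often correlate within a response):

```python
def p_any_error(p_per_statement: float, n_statements: int) -> float:
    """Probability that at least one of n independent statements is wrong.

    This is just the complement of every statement being right:
    1 - (1 - p)^n.
    """
    return 1 - (1 - p_per_statement) ** n_statements

# With a 10% per-statement error rate:
for n in (1, 5, 10, 20):
    print(n, round(p_any_error(0.10, n), 2))
# n=1  -> ~0.10, n=5 -> ~0.41, n=10 -> ~0.65, n=20 -> ~0.88
```

So even a modest per-statement error rate climbs toward certainty as output length grows, which is the same observation as the "law" above.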

I don't have any real numbers, just personal experience using AI for programming at work, and all of these numbers (10%, 40%, 70%) seem plausible depending on exactly what you're measuring.

[–] hanrahan@slrpnk.net 2 points 3 hours ago (1 children)

so... a bit like economists then?

[–] IratePirate@feddit.org 2 points 3 hours ago

Not if we're talking Jim Cramer, who is well beyond 70%.