[–] jj4211@lemmy.world 5 points 1 day ago (1 children)

I kid you not, early on (mid 2023) some guy mentioned using ChatGPT for his work without even checking the output (he was in some sort of non-techie field that was still in the wheelhouse of text generation). I expressed that LLMs can include some glaring mistakes, and he said he fixed it by always including in his prompt: "Do not hallucinate content and verify all data is actually correct."

[–] Passerby6497@lemmy.world 4 points 1 day ago (1 children)

Ah, well then, if he tells the bot not to hallucinate and to validate its output, there's no reason not to trust it. After all, you told the bot not to, and we all know self-regulation works without issue all of the time.

[–] jj4211@lemmy.world 5 points 1 day ago (1 children)

It gave me flashbacks to when the Replit guy complained that the LLM deleted his data despite being told, in all caps, not to, multiple times.

People really really don't understand how these things work...

The people who make them don't really understand how they work either. They know how to train them and how the software works, but they don't really know how the models arrive at the answers they produce. They just do a ton of trial and error; correlation is all they really have. Which, of course, is how a lot of medical science works too, so they're in good company.