this post was submitted on 17 May 2024
[–] nucleative@lemmy.world 5 points 6 months ago (1 children)

Well stated and explained. I'm not an AI researcher but I develop with LLMs quite a lot right now.

Hallucination is a huge problem we face when we're trying to use LLMs for non-fiction. It's a bit like having a friend who can lie straight-faced and convincingly: you can't tell whether they're telling the truth or lying until you act on their answer and it fails you.

I think one of the nearest-term solutions may be the addition of extra layers, observer engines that are highly deterministic and trained only on extremely reputable sources, perhaps only peer-reviewed trade journals, for example, or other sources we deem trustworthy. Unfortunately this can only improve our confidence in the facts, not remove hallucination entirely.

It's even feasible that we could have multiple observers with different domains of expertise (i.e. different training sources), each with a vote, to fact-check the LLM's output and rate its trustworthiness.
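That voting scheme can be sketched in a few lines of Python. The observers here are toy stand-ins for the hypothetical domain-specific verifier engines described above; everything about them (names, verdict labels) is invented for illustration:

```python
from collections import Counter

def vote_on_claim(claim, observers):
    """Collect a verdict from each domain observer and take a majority vote.

    Each observer is a callable (a stand-in for a hypothetical fact-checking
    engine trained on one reputable corpus) that returns "support", "refute",
    or "unsure" for the given claim.
    """
    verdicts = [observer(claim) for observer in observers]
    winner, count = Counter(verdicts).most_common(1)[0]
    return winner, count / len(verdicts)  # verdict plus agreement fraction

# Toy observers; real ones would query domain-specific verifier models.
medical_observer = lambda claim: "support"
pharma_observer = lambda claim: "support"
general_observer = lambda claim: "unsure"

verdict, agreement = vote_on_claim(
    "Aspirin inhibits platelet aggregation.",
    [medical_observer, pharma_observer, general_observer],
)
# verdict == "support", agreement == 2/3
```

The agreement fraction is the "subjective trustworthiness rating": full consensus means high confidence, a split vote flags the claim for human review.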

But in the short term, all this will accomplish is rolling the dice in our favor a bit more often.

The results as perceived by end users, however, may improve significantly. Consider some human examples: sometimes people disagree with their doctor so they go see another doctor and another until they get the answer they want. Sometimes two very experienced lawyers both look at the facts and disagree.

What prevents me from stating something as true when I don't actually know it, or can't back up my claims, is my reputation and my personal values and ethics. LLMs can only pretend to have those traits when we tell them to.

[–] Voroxpete@sh.itjust.works 4 points 6 months ago

Consider some human examples: sometimes people disagree with their doctor so they go see another doctor and another until they get the answer they want. Sometimes two very experienced lawyers both look at the facts and disagree.

This actually illustrates my point really well. Because the reason those people disagree might be

  1. Different awareness of the facts (lawyer A knows an important piece of information lawyer B doesn't)
  2. Different understanding of the facts (lawyer A might have context lawyer B doesn't)
  3. Different interpretation of the facts (this is the hardest to quantify, as it's a complex outcome of everything that makes us human, including personality traits such as our biases).

Whereas you can ask the same question to the same LLM equipped with the same data set and get two different answers because it's just rolling dice at the end of the day.
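The dice-rolling is literal: at each step the model samples the next token from a probability distribution, so identical inputs can yield different outputs. A toy sketch of the mechanism (the distribution here is completely made up, only the sampling behavior is the point):

```python
import random

# Made-up next-token distribution for one fixed prompt; the numbers are
# invented, but the mechanism mirrors how an LLM picks each token.
next_token_probs = {"yes": 0.55, "no": 0.40, "maybe": 0.05}

def sample_token(probs, rng):
    """Ordinary sampling (temperature ~1): pick proportionally to probability."""
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

def greedy_token(probs):
    """Temperature-0 decoding: always take the single most likely token."""
    return max(probs, key=probs.get)

# Same "model", same "prompt", different runs -> possibly different answers.
samples = {sample_token(next_token_probs, random.Random(seed)) for seed in range(50)}

# Greedy decoding, by contrast, gives the same answer every time.
assert greedy_token(next_token_probs) == "yes"
```

Even greedy decoding only makes the sampling deterministic; it does nothing to make the underlying distribution factually grounded, which is the hallucination problem in a nutshell.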

If I sit those two lawyers down at a bar, with no case on the line, no motivation other than just friendly discussion, they could debate the subject and likely eventually come to a consensus, because they are sentient beings capable of reason. That's what LLMs can only fake through smoke and mirrors.