[–] Deconceptualist@lemm.ee 46 points 5 months ago (16 children)

As others are saying, it's 100% not possible, because LLMs are (as Google optimistically describes them) "creative writing aids" or, more accurately, predictive word engines. They run on mathematical probability models. They have zero concept of what the words actually mean, what humans are, or even what they themselves are. There's no "intelligence" present except for filters that have been hand-coded in (which of course is human intelligence, not AI).

"Hallucinations" is a total misnomer because the text generation isn't tied to reality in the first place, it's just mathematically "what next word is most likely".

https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/

[–] QuantumSoul@lemmy.dbzer0.com -1 points 5 months ago (4 children)

They do have internal concepts though: https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation

Probably not of what a human is, but something like a thought process is needed for better text generation and therefore emerges in their neural net
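For anyone curious, the core technique in the linked post is a linear probe: if a simple linear map can read a fact (like a board square's state) out of a model's hidden activations, the model plausibly represents that fact internally. A minimal sketch, with random stand-ins for real activations:

```python
# Linear probe sketch. The activations are synthetic: a hypothetical
# "square occupied" bit is linearly embedded in them plus noise,
# mimicking the structure the linked post finds in Chess-GPT.
import numpy as np

rng = np.random.default_rng(0)
n_samples, hidden_dim = 500, 64

truth = rng.integers(0, 2, n_samples)            # 1 = square occupied
direction = rng.normal(size=hidden_dim)          # the embedded "concept" axis
acts = rng.normal(size=(n_samples, hidden_dim)) + np.outer(truth, direction)

# Fit the probe: a single linear layer, via least squares.
w, *_ = np.linalg.lstsq(acts, truth, rcond=None)
preds = (acts @ w) > 0.5
print(f"probe accuracy: {(preds == truth).mean():.2f}")  # high iff linearly decodable
```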

[–] Natanael@slrpnk.net 4 points 5 months ago (1 children)

The problem is they have many different internal concepts with conflicting information, and no mechanism for determining truthfulness, checking accuracy, or pruning bad information, so they sample from all of them more or less at random when answering
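Roughly, the sampling step looks like this (the prompt and probabilities are invented for illustration); note that nothing in the loop scores truth, only probability mass:

```python
# Sketch of weighted next-token sampling. Conflicting continuations that
# were both common in training data both get picked; no truth check exists.
import random

# Hypothetical distribution after "The capital of Australia is":
candidates = {"Canberra": 0.55, "Sydney": 0.40, "Melbourne": 0.05}

for _ in range(5):
    # random.choices samples by weight alone.
    token = random.choices(list(candidates), weights=list(candidates.values()))[0]
    print(token)  # "Sydney" appears ~40% of the time despite being wrong
```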
