Good lord, what is wrong with the people in this thread? The guy is literally owning up to the hard limitations of LLMs. I'm not a fan of him or Google either, but hey, kudos for being honest this once. The entire industry would be better off if we didn't treat LLMs like something they're not. More of this, please!
IMO these issues are mainly with the interface / how the AI summaries are presented.
The issue with incorrect answers like the glue-on-pizza one isn't "hallucination". The LLM is pulling that info from an existing webpage (in the glue case, an old joke comment on Reddit, not something the model invented). The thing they need to change is how that info is portrayed: not "one tip is to use glue", but rather "a joke comment on Reddit says to use glue".
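A minimal sketch of what that could look like, assuming the summarizer keeps provenance metadata from the retrieval step. Everything here (`RetrievedSnippet`, `render_claim`, the classifier flag) is hypothetical illustration, not any real Google API:

```python
from dataclasses import dataclass

@dataclass
class RetrievedSnippet:
    text: str          # the claim pulled from an existing webpage
    source_name: str   # e.g. "The Onion", "a joke Reddit comment"
    source_url: str
    is_joke_or_satire: bool  # hypothetical flag from a source classifier

def render_claim(s: RetrievedSnippet) -> str:
    """Portray retrieved info as an attributed quote, not as bare advice."""
    if s.is_joke_or_satire:
        return f'The joke/satire source {s.source_name} says: "{s.text}"'
    return f'According to {s.source_name} ({s.source_url}): "{s.text}"'

# "one tip is to use glue" becomes an attributed quote instead:
print(render_claim(RetrievedSnippet(
    text="add some glue to the sauce to give it more tackiness",
    source_name="a joke Reddit comment",
    source_url="https://example.com/placeholder",  # placeholder, not the real thread
    is_joke_or_satire=True,
)))
```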
Hallucination itself could be combated by the fact that the AI can't show a proper source for facts it made up: if a claim can't be attributed to any retrieved page, flag it instead of presenting it as fact.
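A toy version of that check, assuming the pipeline already has the retrieved source texts on hand. The word-overlap heuristic is a crude stand-in for a real entailment/attribution model, and the function names are made up for illustration:

```python
def grounded(claim: str, sources: list[str], threshold: float = 0.5) -> bool:
    """Naive grounding check: does enough of the claim's vocabulary
    appear in at least one retrieved source?"""
    words = set(claim.lower().split())
    if not words:
        return False
    return any(
        len(words & set(src.lower().split())) / len(words) >= threshold
        for src in sources
    )

def flag_unsupported(claims: list[str], sources: list[str]) -> list[str]:
    """Return claims the system can't attribute to any source --
    candidates to drop or label as unverified before showing the summary."""
    return [c for c in claims if not grounded(c, sources)]

sources = ["Cheese slides off pizza when the sauce is too thin."]
claims = [
    "Cheese slides off when the sauce is too thin.",    # grounded in a source
    "The moon landing was faked by cheese lobbyists.",  # made up -> flagged
]
print(flag_unsupported(claims, sources))
```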
I think you nailed it. That's exactly why I want more of this type of conversation. Before we can innovate, we have to acknowledge the limitations of the technology.