this post was submitted on 12 Jun 2024
393 points (95.4% liked)

Technology

[–] theherk@lemmy.world 0 points 5 months ago (3 children)

Can you show the question you asked that led to this, and which model was used? I just tested this in several models, even slightly older ones, and they all answered precisely. Of course, if you follow up and tell it the right answer is wrong you can make it say stuff like this, but not one got it wrong out of the gate.

[–] ChairmanMeow@programming.dev 8 points 5 months ago (2 children)

My point is that telling it a right answer is wrong often causes LLMs to completely shit the bed. They used to argue with you nonsensically; now they give you a different answer (often also wrong).

The only question missing at the start was "How many r's are there in the word 'veryberry'?" I think raspberry also worked when I tried it. This was ChatGPT-4o. I did mark all the answers as bad, so perhaps they've fixed this one by now.

Still, it's remarkably trivial to get an LLM to provide a clearly non-human response.

[–] theherk@lemmy.world 1 points 5 months ago (1 children)

Fair enough, but it does somewhat undercut your message when every model I’ve tested, including quite old ones, answers this question correctly on the first try. This image is ChatGPT-4o.

[–] ChairmanMeow@programming.dev 7 points 5 months ago

Perhaps it was being influenced by the chat history. But try asking how many r's are in raspberry; it does get that consistently wrong for me. And you can ask it those follow-up questions to easily get it to spout nonsense, and that was mostly my point: figuring out whether you're talking to an LLM is fairly trivial.
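For reference, the ground-truth counts the model should produce are trivial to verify in code. A minimal Python sketch (the `count_letter` helper is just for illustration, not anything from the thread); the usual explanation for why LLMs stumble here is that they process multi-character tokens rather than individual letters, so they never "see" the r's directly:

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

# The two words discussed in the thread each contain three r's.
print(count_letter("raspberry", "r"))  # 3
print(count_letter("veryberry", "r"))  # 3
```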