this post was submitted on 23 Feb 2026
539 points (97.0% liked)


A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the 'reasoning' models.

[–] WraithGear@lemmy.world 43 points 5 hours ago* (last edited 5 hours ago) (1 children)

and what is going to happen is that some engineer will band-aid the issue, all the AI-crazy people will shout "see! it's learnding!", and the AI snake-oil salesmen will use that as justification for all the waste and demand even more from every system

just like what happened with the full-glass-of-wine test. and no, the AI did not fundamentally improve: the issue is fundamental to its design, not a problem with the data set

[–] turmacar@lemmy.world 6 points 3 hours ago* (last edited 3 hours ago)

Half the issue is that they're calling 10 correct answers in a row "good enough" to treat the problem as solved in the first place.

A sample size of 10 is nothing.
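
To put a rough number on that (my own sketch, not from the article): even a perfect 10-for-10 run leaves a very wide confidence interval on the true success rate. A standard Wilson score interval shows the model could still be failing more than a quarter of the time:

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / trials
    z2 = z * z
    denom = 1 + z2 / trials
    center = (p + z2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z2 / (4 * trials * trials))
    return max(0.0, center - half), min(1.0, center + half)

# A perfect run of 10 trials still says very little about the true rate.
lo, hi = wilson_interval(10, 10)
print(f"10/10 successes -> true rate could be anywhere in [{lo:.2f}, {hi:.2f}]")
# -> [0.72, 1.00]
```

In other words, 10 straight successes is statistically consistent with a model that gets it wrong 28% of the time, which is why "10 in a row" is nowhere near "solved".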

Frankly, I'd like to see some error bars on the "human polling" too. How many of the people rapiddata is polling are just hitting the top or bottom answer?