this post was submitted on 23 Feb 2026
539 points (97.0% liked)

Technology

A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

It also includes outtakes from the 'reasoning' models.

[–] Hazzard@lemmy.zip 8 points 5 hours ago (2 children)

They also polled 10,000 people to compare against a human baseline:

Turns out GPT-5 (7/10) answered about as reliably as the average human (71.5%) in this test. Humans still outperform most AI models on this question, though to be fair I expected a far higher "drive" rate.

That 71.5% is still a higher success rate than 48 out of 53 models tested. Only the five 10/10 models and the two 8/10 models outperform the average human. Everything below GPT-5 performs worse than 10,000 people given two buttons and no time to think.

[–] architect@thelemmy.club 1 points 59 minutes ago* (last edited 56 minutes ago)

The question is based on assumptions, and that takes advanced reading skills. I'm surprised the pass rate was 71%, to be honest. (For the humans, that is.)

[–] Modern_medicine_isnt@lemmy.world 1 points 5 hours ago (1 children)

This here is the point most people fail to grasp. The AI was taught by people, and people are wrong a lot of the time. So the AI is more like us than we think it should be, right down to getting the right answer for all the wrong reasons. We should call it human AI. Lol.

[–] NewNewAugustEast@lemmy.zip 0 points 4 hours ago (1 children)

Like I said to the person above, there is no wrong answer. It's all about assumptions. It's a stupid trick question that no one would ask.

[–] Modern_medicine_isnt@lemmy.world 2 points 1 hour ago (1 children)

Well, I did interview at Microsoft once, a long time ago. They did ask some stupid questions... lol

[–] NewNewAugustEast@lemmy.zip 1 points 1 hour ago

LOL! That is a great answer.

I have a Microsoft story. I know someone who was hired to stop them from continuing an open source project. Microsoft gave them a good salary, stock options, and an office with a fully stocked bar, and said do whatever you want. The company figured it would get a good developer and kill the open source competition (back in the Ballmer days).

Sadly, given money and no real ambition to create closed source software, they mostly spent their days in that office and basically drank themselves to death.

Microsoft just kills everything it touches.