this post was submitted on 17 Mar 2025
499 points (96.6% liked)

Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their primary LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
[–] blady_blah@lemmy.world 14 points 17 hours ago (2 children)

You say this as if it's wrong.

Think of a question you would ask an average person, then think of how the LLM would respond. The vast majority of the time, the LLM would be more correct than most people.

[–] LifeInMultipleChoice@lemmy.dbzer0.com 17 points 17 hours ago (1 children)

A good example is the post on here about tax brackets. Far more Republicans than Democrats didn't know how tax brackets worked, but every mainstream language model would have gotten the answer right.
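
As a quick illustration of the misconception involved, here is a minimal sketch in Python; the bracket thresholds and rates are invented for the example, not real tax law:

```python
# Minimal sketch of how marginal tax brackets work.
# The thresholds and rates below are made up for illustration.
BRACKETS = [
    (0, 0.10),       # first 10,000 taxed at 10%
    (10_000, 0.20),  # income from 10,000 to 40,000 taxed at 20%
    (40_000, 0.30),  # income above 40,000 taxed at 30%
]

def marginal_tax(income: float) -> float:
    """Only the income *within* each bracket is taxed at that bracket's rate."""
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            tax += (min(income, upper) - lower) * rate
    return tax

def common_misconception(income: float) -> float:
    """The wrong mental model: the whole income taxed at the top rate reached."""
    top_rate = max(rate for lower, rate in BRACKETS if income > lower)
    return income * top_rate

print(marginal_tax(50_000))          # 10000*0.10 + 30000*0.20 + 10000*0.30 = 10000.0
print(common_misconception(50_000))  # 50000*0.30 = 15000.0
```

The point of the comparison: with marginal brackets, crossing a threshold only taxes the income above that threshold at the higher rate, so a raise can never reduce your take-home pay.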

[–] smeenz@lemmy.nz 6 points 13 hours ago* (last edited 12 hours ago)

I bet the LLMs also know who pays tariffs.

[–] JacksonLamb@lemmy.world 8 points 14 hours ago (1 children)
[–] blady_blah@lemmy.world 0 points 10 hours ago (2 children)

Then ask it a logic question. What question are you asking that the LLMs get wrong and your average person gets right? How are you proving intelligence here?

[–] JacksonLamb@lemmy.world 1 points 7 minutes ago

LLMs are autocorrect.

Let's use a standard definition like "intelligence is the ability to acquire, understand, and use knowledge."

It can acquire (learn) and use (access, output) data, but it lacks the ability to understand it.

This is why we have AI telling people to use glue on pizza or drink bleach.

I suggest you sit down with an AI sometime and put a few versions of the Trolley Problem to it. You will likely see what is missing.

[–] eletes@sh.itjust.works 1 points 8 hours ago (2 children)

How many Rs are there in the word strawberry?

[–] blady_blah@lemmy.world 0 points 7 hours ago (1 children)

I asked Gemini and ChatGPT (the free one) and they both got it right. How many people do you think would get that right if you didn't write it down in front of them? If Copilot gets it wrong, as per eletes' post, then the AI success rate is 66%. Ask your average person walking down the street and I don't think you would do any better. Plus, there are a million questions on which LLMs would vastly outperform your average human.

[–] JacksonLamb@lemmy.world 1 points 13 minutes ago* (last edited 13 minutes ago)

I think you might know some really stupid or perhaps just uneducated people. I would expect 100% of people to know how many Rs there are in "strawberry" without looking at it.

Nevertheless, spelling is memory and memory is not intelligence.

[–] BlushedPotatoPlayers@sopuli.xyz -1 points 7 hours ago (1 children)

That was a very long time ago; that's fine now.

[–] eletes@sh.itjust.works 4 points 7 hours ago

Literally just asked Copilot through our work subscription.

I know it looks like I'm shitting on LLMs, but I'm really just trying to highlight that they still have gaps in reasoning that they'll probably fix this decade.
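
For what it's worth, the strawberry test is trivial for ordinary code, which handles strings character by character; chat models operate on tokens rather than letters, which is the usual explanation for why some of them miscount. A minimal sketch in Python, no model involved:

```python
# Counting letters the boring way: plain string handling, no model.
word = "strawberry"
print(word.count("r"))  # -> 3
```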