this post was submitted on 17 Mar 2025
529 points (96.8% liked)

Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their prime LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
(page 2) 50 comments
[–] Schadrach@lemmy.sdf.org 1 points 5 hours ago

An LLM is roughly as smart, for a given topic, as the corpus it summarizes is accurate, because at their best LLMs are good natural-language summarizers. Most of the main ones basically run an internet search and summarize the top couple of results, which means they are only as good as the search engine backing them. That's good enough for a lot of topics, but... not so much for the rest.
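A minimal sketch of that search-and-summarize pattern, with `web_search` and `llm_summarize` as stand-in stubs rather than any particular vendor's API:

```python
# Minimal sketch of the "search, then summarize" pattern described above.
# web_search and llm_summarize are stand-in stubs, not a real vendor API.

def web_search(query: str, top_k: int = 3) -> list[str]:
    """Stand-in for the search backend; returns the top result snippets."""
    return [f"snippet {i} about {query!r}" for i in range(top_k)]

def llm_summarize(question: str, snippets: list[str]) -> str:
    """Stand-in for the LLM call that condenses the snippets into an answer."""
    context = "\n".join(snippets)
    return f"Summary for {question!r}, based only on:\n{context}"

def answer(question: str) -> str:
    # The output can only be as good as whatever the search step returned.
    snippets = web_search(question)
    return llm_summarize(question, snippets)

print(answer("how do heat pumps work?"))
```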

[–] futatorius@lemm.ee 30 points 12 hours ago (4 children)

Half of all voters voted for Trump. So an LLM might be smarter than them. Even a bag of pea gravel might be.

[–] UltraGiGaGigantic@lemmy.ml 1 points 5 hours ago (1 children)

Do you think the two party system properly represents the American people?

ChatGPT said:

The two-party system in the U.S. has both strengths and weaknesses when it comes to representing the American people. On one hand, it provides stability and clarity, with the two major parties—Democrats and Republicans—offering distinct platforms that can help simplify voter choice. The system also ensures that one of the two parties has a majority, making governance and passing legislation easier.

However, the two-party system can limit political diversity, as it often forces voters to choose between two parties that may not fully reflect their views. This can leave many people feeling underrepresented, particularly those with more nuanced or third-party preferences. It also tends to lead to a polarized political environment, where compromise and cooperation between different ideologies can be difficult.

In short, while the two-party system has worked for providing structure and stability in U.S. politics, it does have drawbacks in terms of broader representation, especially for those who don’t align neatly with either major party.

[–] Psythik@lemm.ee 8 points 11 hours ago (2 children)

Less than a third of all voters voted for Trump. Most voters stayed home.

[–] kent_eh@lemmy.ca 1 points 10 hours ago* (last edited 4 hours ago)

If you didn't vote then you're not a voter.

Most eligible voters stayed home.

[–] conditional_soup@lemm.ee 8 points 10 hours ago (2 children)

This is sad. This does not spark joy. We're months from someone using "but look, ChatGPT says..." to try to win an argument. I can't wait to spend the rest of my life explaining to people that LLMs are really fancy bullshit generator toys.

[–] jj4211@lemmy.world 5 points 10 hours ago

Already happened at my work. People swearing an API call exists because an LLM hallucinated it, even as the people who wrote the backend tell them it does not exist.

[–] communism@lemmy.ml 8 points 11 hours ago

Given the US adults I see on the internet, I would hazard a guess that they're right.

[–] LovableSidekick@lemmy.world 2 points 8 hours ago

I'm surprised it's not way more than half. Almost every subjective thing I read about LLMs oversimplifies how they work and hugely overstates their capabilities.

[–] KingThrillgore@lemmy.ml 24 points 14 hours ago

The average literacy level is around that of a sixth grader.

This tracks

[–] Feathercrown@lemmy.world 17 points 14 hours ago

I believe LLMs are smarter than half of US adults

[–] tvbusy@lemmy.dbzer0.com 14 points 14 hours ago

LLMs are proof that even if you're extremely stupid, having access to information can still make you sound smart.

[–] curiousaur@reddthat.com 4 points 10 hours ago* (last edited 9 hours ago) (1 children)

This is hard to quantify. I use them constantly throughout my work day now.

Are they smarter than me? I'm not sure. Haven't thought too much about it.

What they certainly are, and by a long shot, is faster. Given a set of data, I could analyze it and pull out insights and conclusions. It might take me a week or a month depending on the size and breadth of the data set. An LLM can pull out insights and conclusions in seconds.

I can read error stacks coming from my code, but before I've even read the first few lines the LLM has ingested all of them, checked the code, and reached a conclusion about the necessary fix. Is it right, optimal, and free of new bugs? Maybe 75% of the time at this point. I can coax it, iterate on the solution myself, or fix it entirely myself with the understanding of the bug it gave me. The same bug might have taken me hours to figure out on my own.

My point is, I'm not sure how to compare smarter vs orders of magnitude faster.
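A rough sketch of the error-stack loop described above, with `ask_llm` as a stand-in stub for whatever model or client is actually in use, and `my_buggy_script.py` as a hypothetical failing script:

```python
# Rough sketch of the "paste the traceback, ask for a fix" loop.
# ask_llm is a stand-in stub, not any specific provider's API.
import subprocess
import sys

def ask_llm(prompt: str) -> str:
    """Stand-in for the real LLM call; swap in your provider's client here."""
    return "Suggested fix: ..."

def suggest_fix(cmd: list[str]) -> str:
    # Run the failing command and capture the full error output.
    result = subprocess.run(cmd, capture_output=True, text=True)
    prompt = (
        "Here is the error output from my program:\n"
        f"{result.stderr}\n"
        "Point to the likely cause and propose a minimal fix."
    )
    # The model ingests all of it at once; you still have to review the
    # suggestions that are wrong or would introduce new bugs.
    return ask_llm(prompt)

if __name__ == "__main__":
    print(suggest_fix([sys.executable, "my_buggy_script.py"]))
```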

[–] fyzzlefry@retrolemmy.com 5 points 10 hours ago

Are you smarter than a calculator?

[–] CalipherJones@lemmy.world 3 points 10 hours ago (1 children)

AI is essentially the human superid. No one man could ever be more knowledgeable. Being intelligent is a different matter.

[–] Waraugh@lemmy.dbzer0.com 3 points 10 hours ago (6 children)

Is stringing words together really considered knowledge?

[–] CalipherJones@lemmy.world 2 points 9 hours ago

If they're strung together correctly then yeah.

[–] ILikeBoobies@lemmy.ca 1 points 9 hours ago

As much as a search engine is

[–] MrAlternateTape@lemm.ee 20 points 15 hours ago (1 children)

That is the problem with US adults. Half of them probably is dumber than AI.....

[–] Flocklesscrow@lemm.ee 14 points 14 hours ago

The grammatical error here is chef's kiss.

[–] ZephyrXero@lemmy.world 3 points 11 hours ago

What a very unfortunate name for a university.

[–] rottingleaf@lemmy.world 15 points 16 hours ago

That's called a self-proving statement.

[–] Grizzlyboy@lemm.ee 37 points 19 hours ago (1 children)

It’s like asking if you think a calculator is smarter than you.

[–] Dayroom7485@lemmy.world 19 points 19 hours ago (2 children)

„It‘s totally a lot smarter than I am, no way could I deliver (234 * 534)^21 as confidently!“
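For what it's worth, any environment with arbitrary-precision integers does deliver that number confidently and exactly; a one-line Python check:

```python
# A calculator answers this instantly; Python's big integers compute it exactly.
print((234 * 534) ** 21)
```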

[–] tad_lispy@lemm.ee 16 points 17 hours ago

If we are talking about American adults, I guess they might be right.

[–] interested_party@lemmy.org 2 points 11 hours ago

It's probably true too.

[–] Naevermix@lemmy.world 14 points 17 hours ago (1 children)

Hallucination comes off as confidence. Very human-like behavior tbh.

[–] portifornia@lemmy.world 1 points 5 hours ago

I should be more confident when communicating my hallucinations, it humanizes me.

[–] JackFrostNCola@lemmy.world 32 points 20 hours ago

"Half of LLM users " beleive this. Which is not to say that people who understand how flawed LLMs are, or what their actual function is, do not use LLMs and therefore arent i cluded in this statistic?
This is kinda like saying '60% of people who pay for their daily horoscope beleive it is an accurate prediction'.

[–] DarrinBrunner@lemmy.world 4 points 13 hours ago (1 children)

Intelligence and knowledge are two different things. Or, rather, the difference between smart and stupid people is how they interpret the knowledge they acquire. Both can acquire knowledge, but stupid people come to wrong conclusions by misinterpreting the knowledge. Like LLMs, 40% of the time, apparently.

[–] ZephyrXero@lemmy.world 2 points 11 hours ago

My new mental model for LLMs is that they're like genius 4 year olds. They have huge amounts of information, and yet have little to no wisdom as to what to do with it or how to interpret it.
