this post was submitted on 23 Feb 2026
539 points (97.0% liked)
Screenshot of this question was making the rounds last week. But this article covers testing against all the well-known models out there.

Also includes outtakes on the 'reasoning' models.

[–] Iconoclast@feddit.uk 2 points 5 hours ago (1 children)

No, I completely agree. My personal view is that these systems are more intelligent than the haters give them credit for, but I think this simplistic "it's just autocomplete" take is a solid heuristic for most people - keeps them from losing sight of what they're actually dealing with.

I'd say LLMs are more intelligent than they have any right to be, but not nearly as intelligent as they can sometimes appear.

The comparison I keep coming back to: an LLM is like cruise control that's turned out to be a surprisingly decent driver too. Steering and following traffic rules was never the goal of its developers, yet here we are. There's nothing inherently wrong with letting it take the wheel for a bit, but it needs constant supervision - and people have to remember it's still just cruise control, not autopilot.

The second we forget that is when we end up in the ditch. You can't then climb out shaking your fist at the sky, yelling that the autopilot failed, when you never had autopilot to begin with.

[–] SuspciousCarrot78@lemmy.world 2 points 5 hours ago* (last edited 5 hours ago) (1 children)

I think we're probably on the same page, tbh. OTOH, I think the "fancy auto complete" meme is a disingenuous thought stopper, so I speak against it when I see it.

I like your cruise control+ analogy. It's not quite self-driving... but it's not quite just cruise control, either. Something halfway.

LLMs don’t have human understanding or metacognition, I'm almost certain.

But next-token prediction suggests a rich semantic model that can functionally approximate reasoning. That's weird to think about. It's something halfway.
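Just to make "next-token prediction" concrete, here's a toy bigram predictor: it literally counts which token follows which. (Hypothetical toy example, obviously - real LLMs learn transformer weights over embeddings, not lookup tables - but it shows the bare mechanism that everything else is layered on.)

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """For each token, count which token follows it."""
    tokens = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model: dict, token: str) -> str:
    """Return the most frequent next token, or '?' if the token was never seen."""
    if token not in model:
        return "?"
    return model[token].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

The gap between this and an LLM is exactly the interesting part: scale the predictor up far enough and something that looks like a world model falls out.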

With external scaffolding (memory, retrieval, provenance, and fail-closed policies), I think you can turn that into even more reliable behavior.
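By "fail-closed" I mean something like this sketch: the model is only allowed to answer when the retrieval layer hands back a source, and refuses otherwise. (All the names here - `answer_with_sources`, the dict standing in for a retrieval index - are made up for illustration, not from any real framework.)

```python
# Toy retrieval index: question -> (answer, provenance). In a real system
# this would be a vector store or search backend plus an LLM to phrase the answer.
KNOWLEDGE_BASE = {
    "capital of france": ("Paris", "atlas.example/fr"),
}

def answer_with_sources(question: str) -> dict:
    """Fail closed: no retrieved provenance means no answer at all."""
    hit = KNOWLEDGE_BASE.get(question.lower().strip())
    if hit is None:
        # Refuse rather than let the model free-associate an answer.
        return {"answer": None, "source": None, "refused": True}
    answer, source = hit
    return {"answer": answer, "source": source, "refused": False}

print(answer_with_sources("Capital of France"))  # answers, with provenance
print(answer_with_sources("meaning of life"))    # refuses
```

The point is the default: with nothing retrieved, the pipeline says "I don't know" instead of letting the model improvise.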

And then... I don't know what happens after that. There's going to come a time where we cross that point and we just can't tell any more. Then what? No idea. May we live in interesting times, as the old curse goes.

[–] Iconoclast@feddit.uk 1 points 5 hours ago* (last edited 5 hours ago) (1 children)

I think the “fancy auto complete” meme is a disingenuous thought stopper, so I speak against it when I see it.

I can respect that. I've criticized it plenty myself too. I think this is just me knowing my audience and tweaking my language so at least the important part of my message gets through. Too much nuance around here usually means I spend the rest of my day responding to accusations about views I don't even hold. Saying anything even mildly non-critical about AI is basically a third rail in these parts of the internet.

These systems do seem to have some kind of internal world model. I just have no clue how far that scales. Feels like it's been plateauing pretty hard over the past year or so.

I'd be really curious to try the raw versions of these models before all the safety restrictions get slapped on top for public release. I don't think anyone's secretly sitting on actual AGI, but I also don't buy that what we have access to is the absolute best versions in existence.

[–] SuspciousCarrot78@lemmy.world 1 points 4 hours ago* (last edited 4 hours ago)

I hear you. Agreed.

Have you tried running your own local LLM? Abliterated ones (safety theatre removed) can produce some startling results. As a bonus, newer abliteration methods seem to increase reasoning ability, because the LLM doesn't have one foot on the brake and the other on the accelerator.

I noticed that a fair bit in maths reasoning using Qwen 3-4B HIVEMIND. A normal LLM will tie itself in knots trying to give you the perfect answer. An abliterated one will give you the workable answer and say "I know what you were after, but here's the best IRL approximation".

Bijan did a fun review of Qwen 3-8 Josefied that's entertaining and explains the basic idea:

https://www.youtube.com/watch?v=gr5nl3P4nyM&t=0