Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms, consuming 172 billion tokens across more than 4,000 runs, we find that the answer is "substantially, and unavoidably." Even under optimal conditions (the best model, with the temperature chosen specifically to minimize fabrication), the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, other top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.

SuspciousCarrot78@lemmy.world 3 points 1 day ago (last edited 1 day ago)

Well...no. But also yes :)

Mostly, what I've shown is that if you hold a gun to its head ("argue from ONLY these facts or I shoot"), certain classes of LLMs (like the Qwen 3 series I tested; I'm going to try IBM's Granite next) are actually pretty good at NOT hallucinating, so long as 1) you keep the context small (probably 16K or less? Someone please buy me a better PC) and 2) you have strict guard-rails. And, as a bonus, I suspect (no evidence; gut feel) it has to do with how well the model does on strict tool-calling benchmarks. Further, I think abliteration makes that even better. Let me find out.
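
To make the guard-rails concrete, here's a minimal sketch of that setup. It's not my exact harness: it assumes a local OpenAI-compatible endpoint (Ollama's default port), and the model tag is a placeholder for whatever Qwen 3 variant you can fit in VRAM.

```python
# Minimal sketch of the "argue from ONLY these facts" setup.
# Assumes a local OpenAI-compatible server (e.g. Ollama on its default port);
# the model tag "qwen3:8b" is a placeholder, not necessarily what I ran.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

FACTS = """\
1. The Quadro P1000 has 4 GB of VRAM.
2. The Tesla P4 has 8 GB of VRAM.
"""

GUARDRAIL = (
    "Answer using ONLY the numbered facts provided. "
    "If the facts do not contain the answer, reply exactly: NOT IN FACTS. "
    "Cite the fact number(s) you used."
)

resp = client.chat.completions.create(
    model="qwen3:8b",   # placeholder Qwen 3 tag
    temperature=0,      # clamp sampling to curb fabrication
    messages=[
        {"role": "system", "content": GUARDRAIL},
        {
            "role": "user",
            "content": f"Facts:\n{FACTS}\nQuestion: How much VRAM does the Tesla P4 have?",
        },
    ],
)
print(resp.choices[0].message.content)  # grounded answer, or NOT IN FACTS
```

The whole trick is the refusal escape hatch ("NOT IN FACTS") plus the tiny context; take either away and fabrication creeps back in.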

If any of that's true (big IF), then we can reasonably quickly figure out (by proxy) which LLMs are going to be less bullshitty in everyday use, when properly shackled. For reference, Qwen 3 and IBM Granite (both of which have abliterated versions, IIRC; that is, versions with safety refusals removed) are known to score highly on tool calling. Four swallows don't make a spring, but if someone with better gear wants to follow that path, then at least I can give some prelim data from the potato frontier.
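
The tool-calling probe itself can be just as crude. This is a sketch of the idea only (again, my hypothesis, not established), reusing the client from the sketch above; `tool_choice="required"` is part of the OpenAI-style API, but not every local server honors it.

```python
# Crude strict-tool-calling probe, reusing the client above.
# Hypothesis only: models that emit well-formed tool calls under pressure
# may also stay grounded when shackled.
import json

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_fact",
        "description": "Look up one numbered fact from the provided list.",
        "parameters": {
            "type": "object",
            "properties": {"fact_id": {"type": "integer"}},
            "required": ["fact_id"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwen3:8b",   # swap in a Granite tag to compare
    temperature=0,
    messages=[{"role": "user", "content": "Fetch fact 2."}],
    tools=tools,
    tool_choice="required",  # force a tool call rather than free text
)

calls = resp.choices[0].message.tool_calls or []
# Pass only if we got exactly one call, to the right tool, with valid args.
ok = (
    len(calls) == 1
    and calls[0].function.name == "lookup_fact"
    and isinstance(json.loads(calls[0].function.arguments).get("fact_id"), int)
)
print("strict tool call:", "PASS" if ok else "FAIL")
```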

I'll keep squeezing the stone until blood pours out. Stubbornness opens a lot of doors. I refuse to be told this is an intractable problem, at least until I've tried to solve it myself.

andallthat@lemmy.world 2 points 1 day ago

is "potato frontier" an auto-correct fail for Pareto or a real term? Because if it's not a real term, I'm 100% going to make it one!

SuspciousCarrot78@lemmy.world 3 points 1 day ago (last edited 1 day ago)

No, it's real (tm). I'm running on a Quadro P1000 with 4GB VRAM (or a Tesla P4 with 8GB). My entire raison d'être is making potato-tier computing a thing.

https://openwebui.com/posts/vodka_when_life_gives_you_a_potato_pc_squeeze_7194c33b

Like a certain famous space Lothario, I too do not believe in no-win scenarios.