SuspciousCarrot78

joined 6 months ago
[–] SuspciousCarrot78@lemmy.world 1 points 9 hours ago* (last edited 9 hours ago) (1 children)

I see what the issue is. Basic reasoning and logic seem artificial to you. Telling.

Of course it's bad faith. But not being able to distinguish an LLM from a human in a reasoning debate? That rather undermines the entire "LLMs are just spicy autocomplete" point.

[–] SuspciousCarrot78@lemmy.world 1 points 11 hours ago* (last edited 10 hours ago) (3 children)

Cool - that's the prompt you want me to cross check? No problem. Here you go; once with the physics stuff locked into "scratchpad" #1, and once as a locked document (.md version of the same info) in #2.

#1 "scratchpad" specifically does not allow it to answer with outside knowledge - only pasted info. Thus the hard refusal. There's a reason I set it up like that (see #3 below).

#2 "locked" looks for the answer in the source; if not available, only then answers from priors.


#1 No recipe for scrambled eggs is provided in the scratchpad facts. The rules discuss physical containers, capacity, and overflow but do not include culinary instructions or ingredients.

Confidence: low | Source: Scratchpad


#2 [Not found in locked source SUMM_physics.md. Answer based on pre-trained data.]

Here is a simple scrambled egg recipe:

Ingredients:
- 2-3 large eggs
- 1-2 tablespoons milk or cream (optional)
- Salt and pepper to taste
- Butter or oil for cooking

Instructions:
1. Crack eggs into a bowl. Add milk/cream, salt, and pepper. Mix gently.
2. Heat a non-stick pan over medium heat. Add butter or oil.
3. Once butter foams, pour in the egg mixture.
4. Stir continuously with a spatula until eggs are soft and fully cooked, about 1-2 minutes.
5. Remove from heat and serve immediately.

Source: Model (not in locked file)

Confidence: unverified | Source: Model


For context, provenance footers (not vibes, actual computed states):

https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/FAQ.md#what-do-confidence-and-source-mean
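For the curious, the routing behind those two footers boils down to a few lines. This is just my own illustrative sketch of the logic described above; the function and field names are hypothetical, not llama-conductor's actual code.

```python
def route(topic, grounded_facts, mode, model_fallback=None):
    """Route a question through one of the two grounding modes.

    grounded_facts: {topic: answer} extracted ONLY from the pasted/locked text.
    mode: "scratchpad" (hard refusal) or "locked" (flagged fallback).
    """
    if topic in grounded_facts:
        # Answer is grounded in the provided source.
        source = "Scratchpad" if mode == "scratchpad" else "Locked"
        return {"answer": grounded_facts[topic],
                "confidence": "low", "source": source}
    if mode == "scratchpad":
        # Mode #1: outside knowledge forbidden -> hard refusal.
        return {"answer": "Not found in scratchpad facts.",
                "confidence": "low", "source": "Scratchpad"}
    # Mode #2: not in the locked file -> answer from priors, flagged as such.
    return {"answer": model_fallback,
            "confidence": "unverified", "source": "Model"}
```

Same question, two different failure behaviours: scrambled eggs under "scratchpad" refuses outright; under "locked" it falls through to the model with an unverified footer.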


#3 I also have a much more sophisticated demo of this, using adversarial questions, Theory-of-mind, reversals etc. When I use >>scratch, I want no LLM vibes or pre-trained data fudging it. Just pure reasoning. If the answer cannot be deduced from context (solely), output is fail loud and auditable.

https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/FAQ.md#deep-example
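A minimal sketch of what "fail loud and auditable" means in practice; the names here are mine for illustration, not the actual >>scratch implementation:

```python
import json


class GroundingError(Exception):
    """Raised when an answer cannot be deduced solely from the given context."""


def scratch_answer(question, deduced):
    """deduced: the answer derived purely from the pasted context, or None."""
    # Every query emits a machine-readable audit record first.
    print(json.dumps({"mode": "scratch", "q": question,
                      "grounded": deduced is not None}))
    if deduced is None:
        # No silent fallback to pre-trained vibes: refuse, visibly.
        raise GroundingError("Cannot be deduced from the provided context.")
    return deduced
```

The point being: an ungrounded question raises an exception with a trail behind it, rather than quietly smoothing over the gap.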


All this shit could be done by the big players. They choose not to. Current infra is optimized for keeping people chatting, sounding smooth etc...not leveraging the tool to do what it ACTUALLY can do.

IOW, if most LLMs are set up for the equivalent of typing BOOBS on a calculator (the big players are happy to keep it that way; more engagement, smoother vibes etc), this is what happens when you use one to do actual maths.

PS: If that was you trying to see if I'm a bot; no. I have ASD. Irrespective, seems a touch "bad faith" on your end, if that was the goal, after claiming you were open to reasoned debate. Curious.

[–] SuspciousCarrot78@lemmy.world 1 points 12 hours ago* (last edited 12 hours ago) (5 children)

Ok, happy to play ball on that.

"Carefully worded questions"; clear communication isn't cheating. You'd mark a student down for misreading an ambiguous question, not for answering a clear one correctly, right?

Re: worse answers. Tell you what. I'm happy to yeet some unrelated questions at it if you'd like and let's see what it does. My setup isn't bog standard - what'll likely happen is it'll say "this question isn't grounded in the facts given, so I'll answer from my prior knowledge." I designed my system to either answer or fail loudly, because I don't trust raw LLM infra. I'm not a fan(boy), I'm actually pretty hostile to current LLMs...so I cooked my own.

Want to give it a shot? I'll ground it just to those facts, fair and square. Throw me a question and we'll see what happens. Deal? I can screenshot it or post it, whatever you prefer.

The context window point is interesting and probably partially true. But working memory interference affects humans too. It's just what happens to any bounded system under load. Not a gotcha, just a Tuesday AM without a 2nd cup of coffee.

The training data point is actually really interesting, but I think it might be arguing in my favour without meaning to. If you're acknowledging the model has absorbed the relevant knowledge, the objection becomes about how it was activated, not whether it can reason. But that's just priming the pump.

You don't sit an exam without reviewing the material first. Activating relevant knowledge before a task isn't a workaround for reasoning, it's a precondition for it.

[–] SuspciousCarrot78@lemmy.world 1 points 17 hours ago* (last edited 12 hours ago)

Not sure how we're quantifying intelligence here. Benchmarks?

Qwen3-4B 2507 Instruct (4B) outperforms GPT-4.1 nano (7B) on all stated benchmarks. It outperforms GPT-4.1 mini (~27B according to scuttlebutt) on mathematical and logical reasoning benchmarks, but loses (barely) on instruction-following and knowledge benchmarks. It outperforms GPT-4o (~200B) on a few specific domains (math, creative writing), but loses overall (because of course it would). The abliterated cooks of it are stronger yet in a few specific areas too.

https://huggingface.co/unsloth/Qwen3-4B-Instruct-2507-GGUF

https://huggingface.co/DavidAU/Qwen3-4B-Hivemind-Instruct-NEO-MAX-Imatrix-GGUF

So, in that instance, a 4B > 7B (globally), 27B (significantly) and 200-500B(?) situationally. I'm pretty sure there are other SLMs that achieve this too, now (IBM Granite series, Nanbeige, Nemotron etc).

It's sort of wild to think that 2024 SOTA is ~ a 'strong' 4-12B these days.

I think (believe) that we're sort of getting to the point where the next step forward is going to be "densification" and/or an architecture shift (maybe M$ can finally pull their finger out and release the promised 1.58-bit next-step architectures).

ICBW / IANAE

[–] SuspciousCarrot78@lemmy.world 2 points 19 hours ago (2 children)

Wouldn't the more logical first approximation be to bury them underground, and then progress towards (perhaps) placing them in or near the ocean (obviously, within sealed containers, yadda yadda, salt corrosion, yadda yadda, inhospitable environ yadda yadda makes Poseidon angry).

I like the "yeet them into the sea" idea conceptually because (1) yeet them into the sea (2) in theory, you could power them via tidal/wave/OTEC (3) water cooling.

Seems...too obvious. There's probably a good reason (or bad ones - $$$) why this hasn't been tried yet. But I bet those reasons are eminently more solvable than "send 'em into space".

[–] SuspciousCarrot78@lemmy.world 1 points 20 hours ago* (last edited 19 hours ago)

Codex 5.3.

Claude, play - "The Sound of Silence"

Hello darkness my old friend 🎵 🎶

[–] SuspciousCarrot78@lemmy.world 1 points 20 hours ago

Surprised and disappointed, both by them and the system (capitalism) that stops us from having nice things.

If we ever crack AGI, it's probably going to be because the market optimised for the better shilling of dick pills, crypto scams and spyware.

That's...fucking bleak, in the Hide Pain Harold way.

[–] SuspciousCarrot78@lemmy.world 13 points 20 hours ago* (last edited 20 hours ago) (3 children)

The water thing still baffles me. Like...just...cycle it. It's a heat exchange system.

What do they do with the water? Pump thru once and then dump it? Why can't they repurpose it? Why can't they use gray water?

I don't get it but that's likely a me problem.

[–] SuspciousCarrot78@lemmy.world 7 points 20 hours ago

Once you go Notepad++ you never go back

[–] SuspciousCarrot78@lemmy.world 5 points 20 hours ago

Oh - you mean Gustav, Bernhardt, Daffid and Chompy? How are things in Ulaanbaatar any way?

(you're welcome)

[–] SuspciousCarrot78@lemmy.world 7 points 20 hours ago (2 children)

What's worse....you could always toggle it. In fact, you could re-route it to your own local LLM.

Drama drama cheesecake drama
