[–] ag10n@lemmy.world 20 points 23 hours ago (3 children)

You can run it on CPU alone. Not surprising they’re building their own AI ecosystem

[–] eager_eagle@lemmy.world 16 points 22 hours ago

It's still matrix multiplication. Running it on a general purpose CPU is inefficient.
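(To put a rough shape on that: a single decode step is basically a chain of matrix-vector products over every weight in the model. The toy sizes below are made up and far smaller than any real model; it's just to illustrate the access pattern that accelerators, and batching, are built for.)

```python
import numpy as np

# Toy sizes, made up; real models are orders of magnitude larger.
d_model, d_ff, n_layers = 1024, 2816, 4

rng = np.random.default_rng(0)
x = rng.standard_normal(d_model).astype(np.float32)  # one token's hidden state
layers = [
    (rng.standard_normal((d_ff, d_model)).astype(np.float32),   # up projection
     rng.standard_normal((d_model, d_ff)).astype(np.float32))   # down projection
    for _ in range(n_layers)
]

# One decode step: every weight is read from memory to emit a single token.
for w_up, w_down in layers:
    x = w_down @ np.maximum(w_up @ x, 0.0)  # GEMV -> ReLU -> GEMV
print(x.shape)
```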

[–] theunknownmuncher@lemmy.world 9 points 20 hours ago

I mean, sure. You could also run it by drawing marks in sand. It doesn't make any sense to do either, though.

[–] brucethemoose@lemmy.world 9 points 22 hours ago* (last edited 22 hours ago) (2 children)

Not at scale. Even on the new architecture, one really needs some kind of accelerator to make it economical for servers.

Bitnet-like models might change the calculus, but no major trainer has tried that yet.

[–] panda_abyss@lemmy.ca 8 points 22 hours ago* (last edited 22 hours ago) (1 children)

Even with a bitnet, it’s almost definitely better to train at high precision and then refine down to bits.

I would expect bitnet to require more layers for equivalent quality too.
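(For the curious, the "refine down to bits" step can be pictured with a BitNet-b1.58-style absmean quantizer. This is only an illustrative sketch of the rounding; in practice it's folded into quantization-aware training rather than done as a one-shot conversion.)

```python
import numpy as np

def ternary_quantize(w: np.ndarray):
    """BitNet-b1.58-style absmean quantization: full-precision weights
    become {-1, 0, +1} plus a single per-tensor scale."""
    scale = np.abs(w).mean() + 1e-8
    w_q = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return w_q, scale

w_fp = np.random.randn(256, 256).astype(np.float32)  # trained in high precision
w_q, scale = ternary_quantize(w_fp)
print("mean abs quantization error:", np.abs(w_fp - w_q * scale).mean())
```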

[–] brucethemoose@lemmy.world 3 points 21 hours ago

I just meant for mass inference serving.

Yeah, I haven’t seen much in the way of bitnet training savings yet, like regular old QAT. It does appear that Deepseek is finetuning their MoEs in a 4-bit format now, though.
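("Regular old QAT" here meaning: fake-quantize the weights in the forward pass and let gradients flow straight through the rounding. A rough PyTorch sketch of the idea, not DeepSeek's actual recipe:)

```python
import torch

class FakeQuant4Bit(torch.autograd.Function):
    """Fake-quantize to a signed 4-bit grid; straight-through gradients."""
    @staticmethod
    def forward(ctx, w):
        scale = w.abs().max() / 7.0 + 1e-8      # map into the int4 range [-8, 7]
        return torch.clamp(torch.round(w / scale), -8, 7) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                          # STE: gradient skips the rounding

# Finetuning keeps a full-precision master copy but runs forward passes
# through the quantized view of the weights:
w = torch.randn(64, 64, requires_grad=True)
y = FakeQuant4Bit.apply(w) @ torch.randn(64)
y.sum().backward()
print(w.grad.shape)
```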

[–] ag10n@lemmy.world 1 points 22 hours ago (2 children)

Yes, you can run it at scale, which is why it uses Huawei hardware.

You can run it on anything, scaled or not

[–] brucethemoose@lemmy.world 6 points 21 hours ago* (last edited 21 hours ago) (1 children)

Just not power/cost efficiently on CPU only, is what I meant. CPUs don’t have the compute for batching (running generation requests in parallel). You need an accelerator, like Huawei’s, to be economical.

It’s fine for local inference, of course.
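(Back-of-envelope, with made-up round numbers: a decode step reads the active weights once regardless of batch size, so batching turns a bandwidth problem into a compute problem, and that's where CPUs fall off a cliff.)

```python
# All figures are illustrative assumptions, not benchmarks.
active_params   = 37e9                 # assumed active parameters per token (MoE)
bytes_per_param = 1                    # ~8-bit weights
flops_per_token = 2 * active_params

mem_bw  = {"CPU (DDR5)": 0.5e12, "accelerator (HBM)": 3e12}    # bytes/s
compute = {"CPU (DDR5)": 2e12,   "accelerator (HBM)": 400e12}  # FLOP/s

for name in mem_bw:
    for batch in (1, 64):
        t_mem = active_params * bytes_per_param / mem_bw[name]  # weights read once per step
        t_cmp = batch * flops_per_token / compute[name]         # work grows with batch size
        print(f"{name:17s} batch={batch:3d}: ~{batch / max(t_mem, t_cmp):7.0f} tok/s")
```

On those assumed numbers, batching barely moves the needle on a CPU (it goes compute-bound almost immediately) while an accelerator gains orders of magnitude, which is the whole economics argument.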

[–] ag10n@lemmy.world 2 points 20 hours ago (1 children)

A whole ecosystem that can run on any hardware, efficiently or not, is a whole ecosystem developed for the Chinese market

[–] brucethemoose@lemmy.world 1 points 4 hours ago* (last edited 4 hours ago)

…I mean, yeah? It’s obviously developed for the Chinese market.

But that’s theoretical, for now. No CPU backend I can find supports DSV4, and DeepSeek hasn’t contributed anything yet.

[–] theunknownmuncher@lemmy.world 2 points 20 hours ago* (last edited 20 hours ago) (1 children)

Nope! You don't know what you're talking about. At all. But you can have fun running a 1.6 trillion parameter model on CPU at basically 0 tokens per second at scale, MoE or not.

[–] ag10n@lemmy.world 1 points 18 hours ago (1 children)
[–] theunknownmuncher@lemmy.world 2 points 17 hours ago* (last edited 16 hours ago) (1 children)

You've proved my point that you don't know what you're talking about by blindly linking to the git repo. Couldn't find any source that supports your claim? I wonder why.

Sure, you can serve one request at a time to one patient user at a slow tokens-per-second rate, which makes running locally viable, but there is no RAM with the bandwidth to run this model at scale. Even flash would be incredibly slow on CPU with multiple requests. You'd need the high bandwidth of VRAM, and running across multiple GPUs in a scalable way requires extremely high-bandwidth interconnects between them.
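(To put a made-up number on the bandwidth point: assume an MoE with tens of billions of active parameters per token and no batching, and just count the weight bytes each generated token has to pull through memory.)

```python
# Illustrative sizing only; none of these are measured figures.
active_bytes        = 37e9 * 1         # assumed ~37B active params at ~1 byte each
users, tok_per_user = 100, 20          # target: 100 concurrent users at 20 tok/s

# Without batching, every generated token re-reads the active weights:
needed_bw = users * tok_per_user * active_bytes              # bytes/s
print(f"required bandwidth: ~{needed_bw / 1e12:.0f} TB/s")   # ~74 TB/s
# A DDR5 server socket offers roughly 0.5 TB/s; a single HBM3 GPU ~3 TB/s.
# Batching cuts the weight traffic, but the batched matmuls then need
# accelerator-class compute, and sharding a ~1.6T-parameter model across
# devices needs very fast interconnects to move activations every layer.
```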

[–] ag10n@lemmy.world -3 points 16 hours ago (1 children)

Thank you for proving my point. It can be run on a CPU.

“It’s slow, it’s inefficient”, but it still runs.

It’s a foundational model just like R1 was.

[–] theunknownmuncher@lemmy.world 3 points 16 hours ago* (last edited 16 hours ago) (1 children)

Yes, you can run it at scale.

at scale

Shift those goalposts! We went from "at scale" to "it still runs"

[–] ag10n@lemmy.world 1 points 16 hours ago (1 children)

Quote me in full.

You can run it at scale, on huawei. You can also run it on a cpu

[–] theunknownmuncher@lemmy.world 0 points 15 hours ago* (last edited 15 hours ago) (2 children)

Quote me in full.

Okay!

You can run it at scale, on huawei. You can also run it on a cpu

Yeah, that is absolutely not what you argued.

Anyway, you've conceded that I'm correct that you cannot run it at scale on a CPU, because running on CPU is too slow and inefficient, and that they instead use GPU hardware like Huawei GPUs to run the model at scale. That's good enough for me!

[–] Diurnambule@jlai.lu 2 points 6 hours ago

Okay, then you proceed to just screenshot the part after the initial argument. Dude, put in more effort.

[–] ag10n@lemmy.world 2 points 14 hours ago

Your interpretation of the English language has won you an argument! Huzzah

So good of you to concede it runs on CPU.