[–] ag10n@lemmy.world 2 points 14 hours ago (1 children)
[–] theunknownmuncher@lemmy.world 2 points 14 hours ago* (last edited 13 hours ago) (1 children)

You've proved my point that you don't know what you're talking about by blindly linking to the git repo. Couldn't find any source that supports your claim? I wonder why.

Sure, you can serve one request at a time to one patient user at a slow tokens-per-second rate, which makes running locally viable, but no RAM has the bandwidth to run this model at scale. Even flash would be incredibly slow on CPU with multiple requests. You'd need the high bandwidth of VRAM, and running across multiple GPUs in a scalable way requires extremely high-bandwidth interconnects between them.
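
To put rough numbers on that claim: single-stream decode is memory-bandwidth-bound, so an upper bound is tokens/s ≈ bandwidth ÷ bytes of active weights. A minimal sketch, assuming ~37B active parameters at 8-bit quantization (both figures are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope decode ceiling: each generated token must stream
# the model's active weights through memory once, so
#   tokens/s <= memory_bandwidth / active_weight_bytes
GB = 1e9

def max_tokens_per_sec(bandwidth_gb_s: float,
                       active_params_billions: float,
                       bytes_per_param: float = 1.0) -> float:
    """Bandwidth-bound ceiling on tokens/sec for a single request.

    bytes_per_param=1.0 assumes ~8-bit quantized weights (an assumption).
    """
    active_weight_bytes = active_params_billions * 1e9 * bytes_per_param
    return (bandwidth_gb_s * GB) / active_weight_bytes

# Illustrative hardware figures (assumptions, not benchmarks):
print(max_tokens_per_sec(80, 37))    # dual-channel DDR5 desktop: ~2 tok/s
print(max_tokens_per_sec(3350, 37))  # one HBM3 GPU (~3.35 TB/s): ~90 tok/s
```

The two calls differ only in the bandwidth argument, which is the whole point: the model is the same, the memory is not.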

[–] ag10n@lemmy.world -2 points 12 hours ago (1 children)

Thank you for proving my point. It can be run on a CPU.

“It’s slow, it’s inefficient,” but it still runs.

It’s a foundation model, just like R1 was.
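
For reference, "it runs on a CPU" in practice looks something like this minimal llama-cpp-python sketch (the GGUF file name, thread count, and prompt are placeholders, and it assumes a quantized checkpoint of the model exists):

```python
# Minimal single-user CPU inference with llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./model-q4_k_m.gguf",  # placeholder quantized GGUF file
    n_ctx=4096,                        # context window
    n_threads=8,                       # CPU threads; tune to your core count
)

out = llm("What is a foundation model?", max_tokens=64)
print(out["choices"][0]["text"])
```

One process, one request at a time: exactly the "slow but it runs" mode both sides seem to agree exists.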

[–] theunknownmuncher@lemmy.world 2 points 12 hours ago* (last edited 12 hours ago) (1 children)

“Yes, you can run it at scale.”

“at scale”

Shift those goalposts! We went from “at scale” to “it still runs”.

[–] ag10n@lemmy.world 1 point 12 hours ago (1 children)

Quote me in full.

You can run it at scale, on Huawei. You can also run it on a CPU.

[–] theunknownmuncher@lemmy.world 0 points 12 hours ago* (last edited 12 hours ago) (2 children)

“Quote me in full.”

Okay!

“You can run it at scale, on Huawei. You can also run it on a CPU.”

Yeah, that is absolutely not what you argued.

Anyway, you've conceded that I'm correct: you can't run it at scale on a CPU, because CPU inference is too slow and inefficient, and they instead use GPU-class hardware like Huawei's accelerators to run the model at scale. That's good enough for me!
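
For contrast, "at scale" usually means batched, tensor-parallel serving across GPUs. A hedged sketch with vLLM (the model id and GPU count are placeholder assumptions, and vLLM targets NVIDIA-class GPUs rather than Huawei hardware specifically):

```python
# Sketch of multi-GPU, batched serving with vLLM (pip install vllm).
# Model id and tensor_parallel_size are placeholder assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="some-org/some-large-moe",  # placeholder Hugging Face model id
    tensor_parallel_size=8,           # shard weights across 8 GPUs
)

params = SamplingParams(max_tokens=64)
# vLLM continuously batches concurrent requests; tensor-parallel shards
# exchange activations at every layer, which is why fast GPU-to-GPU
# interconnects matter.
outs = llm.generate(["Why does tensor parallelism need fast interconnects?"],
                    params)
print(outs[0].outputs[0].text)
```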

[–] Diurnambule@jlai.lu 1 point 3 hours ago

Okay, then you proceed to just screenshot the part after the initial argument. Dude, put in more effort.

[–] ag10n@lemmy.world 1 point 11 hours ago

Your interpretation of the English language has won you an argument! Huzzah!

So good of you to concede it runs on a CPU.