this post was submitted on 25 Apr 2026
270 points (97.5% liked)
Technology
You've proved my point that you don't know what you're talking about by blindly linking to the git repo. Couldn't find any source that supports your claim? I wonder why.
Sure, you can serve one request at a time to one patient user at a slow tokens-per-second rate, which makes running locally viable, but no RAM has the bandwidth to run this model at scale. Even Flash would be incredibly slow on CPU with multiple concurrent requests. You'd need the high bandwidth of VRAM, and running across multiple GPUs in a scalable way requires extremely high-bandwidth interconnects between the GPUs.
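For what it's worth, the bandwidth claim can be sanity-checked with a back-of-envelope sketch: during decode, every active weight has to be read from memory once per generated token, so memory bandwidth caps tokens per second. All figures below are illustrative assumptions (a hypothetical MoE model with ~37B active parameters at 8-bit, dual-channel DDR5 vs. a single HBM accelerator), not measurements of any specific system:

```python
# Back-of-envelope: memory bandwidth bounds single-stream decode throughput.
# Each generated token requires reading all active weights once, so:
#   tokens/sec <= bandwidth / bytes_of_active_weights
# All numbers are illustrative assumptions, not benchmarks.

def max_tokens_per_sec(bandwidth_gb_s: float,
                       active_params_b: float,
                       bytes_per_param: float = 1.0) -> float:
    """Upper bound on decode speed in tokens/second."""
    weight_bytes_gb = active_params_b * bytes_per_param  # GB read per token
    return bandwidth_gb_s / weight_bytes_gb

# Hypothetical MoE with ~37B active parameters, 8-bit quantized:
cpu = max_tokens_per_sec(bandwidth_gb_s=90, active_params_b=37)    # dual-channel DDR5
gpu = max_tokens_per_sec(bandwidth_gb_s=3300, active_params_b=37)  # one HBM3 accelerator

print(f"CPU bound: ~{cpu:.1f} tok/s, GPU bound: ~{gpu:.0f} tok/s")
# → CPU bound: ~2.4 tok/s, GPU bound: ~89 tok/s
```

Under these assumptions the CPU ceiling is single-digit tokens per second shared across all users, which is why serving many concurrent requests pushes you toward HBM-equipped accelerators.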
Thank you for proving my point. It can be run on a CPU.
"It's slow, it's inefficient," but it still runs.
It’s a foundational model just like R1 was.
Shift those goalposts! We went from "at scale" to "it still runs"
Quote me in full.
You can run it at scale on Huawei GPUs. You can also run it on a CPU.
Okay!
Yeah, that is absolutely not what you argued.
Anyway, you've conceded that I'm correct that you cannot run it at scale on a CPU, because running on CPU is too slow and inefficient, and that they instead use GPU hardware like Huawei GPUs to run the model at scale. That's good enough for me!
Okay, then you proceed to just screenshot the part after the initial argument. Dude, put in more effort.
Your interpretation of the English language has won you an argument! Huzzah
So good of you to concede it runs on a CPU.