I got Qwen 3.5 running on a Steam Deck.
It ain't exactly blazing fast, but it does actually work.
(Reasonably fast if you go down to the 2B-param model; I can get the 9B-param variant working, though this makes Steam Decky very hot and bothered.)
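For anyone curious what this kind of setup looks like in practice, here's a minimal sketch using llama-cpp-python, one common way to run small quantized GGUF models on modest hardware. The commenter doesn't say which runtime they used, and the model filename below is a placeholder, not a real release artifact.

```python
# Minimal sketch: running a small quantized model on modest hardware
# (e.g. a Steam Deck). Assumes llama-cpp-python is installed
# (pip install llama-cpp-python) and a GGUF file is on disk; the
# filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen-2b-instruct-q4_k_m.gguf",  # placeholder path
    n_ctx=2048,        # small context window to keep memory use down
    n_gpu_layers=0,    # CPU-only; raise this if a GPU backend is compiled in
    n_threads=4,       # match the Deck's four physical cores
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

On a machine like the Deck the interesting knobs are the quantization level of the GGUF file and `n_threads`; everything else is just keeping the working set small.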
Yeah, you absolutely do not need Nvidia hardware to run an LLM, but we get blasted with their propaganda suggesting otherwise all the time in the English-speaking West.
Because if you don't need Nvidia, well, then, this whole AI bubble looks a lot more bubbly.
Take good care of your hw! It's not like 2 years ago when you could buy stuff off the shelf for reasonable prices. :D
My Steam Deck is my child.
Maybe if I can get it to run a 'good enough' LLM, and also a robotics kinematics suite...
I can just start building DOG, with a Steam Deck for a face, instead of a Combine scanner bot.
Amd have the best consumer grafic card to run llm on the market.
Sorry, I'm not entirely sure what you mean.
Did you mean to say:
"And need to have the best consumer GPU on the market, to run an LLM."
... likely alluding to an RTX 5090?
So you would be saying that, basically, the idea that everyone needs extremely expensive hardware to run an LLM is bullshit?
Hello, no, sorry, autocorrect and typing fast do that to my posts. I wanted to say that NVIDIA is already the worst option for a consumer graphics card, since AMD made a card with 20 GB of RAM which is able to run most open-weight models.
Aha! Ok, that makes sense as well.
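For a rough sense of why ~20 GB covers most open-weight models, here's a back-of-the-envelope sketch of weight memory at a typical 4-bit quantization. This is my own approximation, not from the thread; real usage adds KV cache and runtime overhead on top.

```python
# Back-of-the-envelope weight memory for quantized LLMs.
# Rule of thumb: bytes ~= params * bits_per_weight / 8; KV cache
# and runtime overhead are ignored here.
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for params in (7, 14, 32, 70):
    gb = weight_gb(params, 4.5)  # roughly Q4_K_M-class quantization
    fits = "fits" if gb < 20 else "too big"
    print(f"{params}B @ ~4.5 bpw: {gb:.1f} GB -> {fits} in 20 GB")
```

In practice you'd want headroom for the KV cache, so a 32B model at 4-bit is already a squeeze in 20 GB, but the point stands: small and mid-size open-weight models don't need a flagship Nvidia card.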