
I'm looking to build a low-end Ollama LLM server to improve Home Assistant voice control, Immich image recognition, and a few other services. With the current cost of hardware components like memory, I'm looking to build something small but somewhat expandable.

I have an old micro-ATX form factor computer that I think would be a good candidate to upgrade. I'd love recommendations on motherboard, processor, and video card combos that are likely to be compatible and sufficient to run a decent server while keeping costs low: basically, the best bang for the buck. I have a couple of M.2 SSDs I can repurpose. I'd prefer a motherboard with 2.5Gbit Ethernet, but otherwise I'm open.

I'd also appreciate recommendations on sites that sell good-quality memory at reasonable prices and ship to the US. I'd be willing to look at lightly used components, too.

Any advice on these topics would be greatly appreciated. Everything I've found so far is out of date: with crypto mining fading, video cards aren't as expensive as they were, but LLM data centers are now buying up and reserving memory before it's even manufactured.

mierdabird@lemmy.dbzer0.com 2 points 13 hours ago

It's hard to say exactly what your VRAM/RAM requirements are from what you've described, but as a general recommendation, whether you go AMD or Intel, I'd stick with DDR4-generation hardware. DDR5 is extremely expensive, and any non-MoE model that spills into system memory will still be frustratingly slow either way.
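
To put rough numbers on that: a dense (non-MoE) model has to read essentially all of its weights for every generated token, so decode speed tops out around memory bandwidth divided by model size. A quick back-of-the-envelope sketch (theoretical peak bandwidths and an illustrative model size, not benchmarks):

```python
# Rough upper bound on decode speed for a dense LLM:
# every generated token reads all weights once, so
# tokens/sec <= memory bandwidth / model size.

bandwidths_gbs = {
    "dual-channel DDR4-3200": 51,    # GB/s, theoretical peak
    "dual-channel DDR5-5600": 90,    # GB/s, theoretical peak
    "RTX 3060 12GB (GDDR6)": 360,    # GB/s
    "Radeon Pro V620 (GDDR6)": 512,  # GB/s
}

model_size_gb = 18  # e.g. a ~30B dense model at Q4 quantization

for name, bw in bandwidths_gbs.items():
    print(f"{name}: ~{bw / model_size_gb:.1f} tokens/sec ceiling")
```

Even DDR5 only buys you a few tokens per second once a dense model spills out of VRAM, which is why I wouldn't pay the premium for it.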

For GPUs, the best bang for your buck if you want Nvidia is probably the 3060 12GB: it has 360 GB/s of memory bandwidth, and one or more of those is a very reasonable starting point for local AI.
If you're okay with AMD, there are some really unique cards floating around. I recently picked up a V620 off eBay for $350; it's an ex-datacenter card with 32GB of GDDR6 at 512 GB/s bandwidth. It's a bit of a power hog, but in my early testing it was running Qwen3 Coder 30B at something like 100 tokens/sec.
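
If you'd rather measure tokens/sec on your own hardware than take my numbers, Ollama's REST API reports token counts and timings with each response. A minimal sketch, assuming Ollama is on its default port 11434 and the model name is just a placeholder for whatever you've pulled:

```python
# Measure decode speed against a local Ollama server.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen3-coder:30b",  # substitute your own pulled model
        "prompt": "Write a haiku about memory bandwidth.",
        "stream": False,
    },
    timeout=300,
)
data = resp.json()

# eval_count = tokens generated, eval_duration = nanoseconds spent
print(f"{data['eval_count'] / data['eval_duration'] * 1e9:.1f} tokens/sec")
```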

I run it on an ASUS X570 PRO board, which was the cheapest AM4 board I could find with a good PCIe layout: three x16 slots running at 4.0 x8, 4.0 x8, and 3.0 x4. I've successfully tested it with the V620, a 9060 XT, and a 3060 for 60 GB of total VRAM, though the third x16 slot only has single-slot clearance, so I had to borrow a PCIe extender cable to try it. I've found 48 GB of VRAM is plenty for me, so I doubt I'll actually run a third card unless I find a good deal on a single-slot one.
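
One thing worth checking on a setup like that is whether a loaded model is actually fully GPU-resident or quietly spilling into system RAM. Ollama's /api/ps endpoint reports how much of each running model sits in VRAM; a quick sketch, same default-port assumption as above:

```python
# List running models and how much of each is resident in VRAM.
import requests

ps = requests.get("http://localhost:11434/api/ps", timeout=10).json()

for m in ps.get("models", []):
    size, in_vram = m["size"], m["size_vram"]  # bytes
    pct = 100 * in_vram / size if size else 0
    print(f"{m['name']}: {pct:.0f}% in VRAM "
          f"({in_vram / 1e9:.1f} of {size / 1e9:.1f} GB)")
```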

Kinda turned into a ramble, but let me know if you have questions.