Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
Wow! Very cool rack you've got there. I too started using mini PCs for local test servers and general home servers, but unlike yours, mine are just dumped behind the screen on my desk (three in total). For LLM stuff I currently use a 16GB Radeon, but that's connected to my desktop. In the future I'd love to build a proper rack like yours and perhaps move the GPU to a dedicated mini PC.
As for the upgrades, as others have already stated, I would just go for more PCs rather than RPis.
The Pis were honestly just because I had them.
I think I'd rather use them for something else, like robotics or a BirdNET-Pi.
But the Pi rack was like $20 and hilarious.
The objectively correct answer for more compute is more mini PCs, though. And I'm really considering the Mac mini option for AI.
Is the Mac mini really that good? Running 12-14B models on my Radeon RX 7600 XT is okay-ish, but I do "feel it", while running 7-8B models sometimes just doesn't feel like enough. I wonder where the Mac mini lands here.
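For a rough sense of why those sizes behave that way on a 16GB card, here's a back-of-the-envelope sketch (my numbers, not anything official): it assumes ~Q4 quantization at roughly 0.56 bytes per parameter, plus a flat 20% for KV cache and runtime overhead, both of which vary with quantization format and context length.

```python
# Rough VRAM estimate for a quantized LLM. All numbers are assumptions:
# Q4-style quantization works out to roughly 0.5-0.6 bytes per parameter,
# and KV cache / runtime overhead is approximated as a flat 20%.

def estimate_vram_gb(params_billions: float,
                     bytes_per_param: float = 0.56,
                     overhead: float = 0.20) -> float:
    """Back-of-the-envelope VRAM footprint in GB for a quantized model."""
    weights_gb = params_billions * bytes_per_param  # 1B params ~ 0.56 GB at Q4
    return weights_gb * (1 + overhead)

for size in (7, 8, 12, 14):
    print(f"{size:>2}B model: ~{estimate_vram_gb(size):.1f} GB")
# A 14B model at roughly 9 GB fits a 16GB card with room for context,
# while anything much larger starts to squeeze it.
```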
From what I understand it's not as fast as a consumer Nvidia card, but close.
And you can have much more "VRAM" because they use unified memory. I think the max is 75% of total system memory going to the GPU, so a top-spec Mac mini M4 Pro with 48GB of RAM would have 32GB dedicated to GPU/NPU tasks for $2,000.
Compare that to JUST a 5090 with 32GB for $2,000 MSRP and it's pretty compelling.
$200 more and it's the 64GB model, with two 4090s' worth of VRAM.
It's certainly better than the AMD AI experience, and it's the cheapest way into AI stuff, or so say nerds with more money and experience than me.
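To make the unified-memory math concrete, here's a tiny sketch of the GPU budget at a given cap; the 75% figure is the one quoted above, not a published Apple spec, so treat the fraction as an assumption.

```python
# Unified-memory "VRAM" budget sketch. The gpu_fraction value is an
# assumption from this thread (the quoted ~75% cap), not an Apple spec.

def gpu_budget_gb(total_ram_gb: int, gpu_fraction: float = 0.75) -> float:
    """GB of unified memory the GPU/NPU could claim at a given cap."""
    return total_ram_gb * gpu_fraction

for ram in (24, 48, 64):
    print(f"{ram} GB RAM -> ~{gpu_budget_gb(ram):.0f} GB GPU budget at 75%")
# 48 GB at the quoted 75% cap is ~36 GB; the 32 GB figure above implies
# a more conservative ~2/3 split, so real headroom likely sits in between.
```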
Interesting. Is there a non-Apple solution like this?