Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
- No low-effort posts. This is subjective and will largely be determined by community member reports.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
Running a thousand watts versus not running a thousand watts can be quite a difference depending on where you live. And then consider buying all of the hardware. In many cases it's probably cheaper to just pay $40 a month.
Do you think it runs at 1000 W continuously? On any decent GPU, responses are nearly instantaneous, or at most a few seconds of runtime at something near max GPU draw.
Compare that to playing a few hours of Cyberpunk 2077 with ray tracing and maxed-out settings at 4K.
Don't get me wrong, there's a lot to hate about AI/LLMs, but the cost of running one locally, free of data-harvesting engines, is pretty minimal. The bulk of the consumption comes from training the larger models in the first place, and from the data centers that serve them, which handle millions of inquiries a minute and so concentrate the consumption at a single point far more heavily (plus they retrain the model there on current and user-fed data, including prompts, whereas your computer hosting Ollama would not).
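For a rough sense of scale, here's a back-of-envelope sketch in Python. The $0.15/kWh electricity rate and the 5% duty cycle are illustrative assumptions, not figures from the thread; only the 1000 W draw and the $40/month hosted price come from the comments above.

```python
# Back-of-envelope: continuous 1000 W draw vs. duty-cycled local inference.
# The $0.15/kWh rate and 5% duty cycle are illustrative assumptions only.

RATE_USD_PER_KWH = 0.15     # hypothetical electricity price
HOURS_PER_MONTH = 24 * 30   # ~720 hours

def monthly_cost(watts: float, duty_cycle: float = 1.0) -> float:
    """Monthly electricity cost for a load at the given average duty cycle."""
    kwh = watts / 1000 * HOURS_PER_MONTH * duty_cycle
    return kwh * RATE_USD_PER_KWH

print(f"1000 W, 24/7:          ${monthly_cost(1000):6.2f}/month")        # ~$108.00
print(f"1000 W, 5% duty cycle: ${monthly_cost(1000, 0.05):6.2f}/month")  # ~$5.40
print("Hosted service:        $ 40.00/month")                            # figure from the thread
```

Under those assumptions, intermittent local inference costs a few dollars a month in electricity; the $40/month comparison only holds if the box really does pull 1000 W around the clock.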
What whack-ass setup do you think OP has? Dual 5090s? They're running it on an i7.
It's also an 8-billion-parameter model. That's pretty tiny, even if they use it heaps.
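A rough sketch of why 8B counts as tiny: weight footprint is roughly parameters times bytes per parameter. The 2/1/0.5 bytes-per-parameter figures below are the common fp16/int8/4-bit quantization sizes; KV cache and runtime overhead are ignored.

```python
# Approximate weight footprint of an 8B-parameter model at common
# quantization levels. Ignores KV cache and runtime overhead.
PARAMS = 8e9
for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{name}: ~{gb:.0f} GB")  # fp16: ~16 GB, int8: ~8 GB, 4-bit: ~4 GB
```

At 4-bit that fits comfortably in ordinary system RAM, which is consistent with OP running it on an i7 rather than dual 5090s.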
That would be true in the worst case, but you're never running inference 24/7. It's no crazier than gaming in that regard.