Quick post about a change I made that's worked out well.

I was using the OpenAI API for automations in n8n (email summaries, content drafts, that kind of thing) and was spending ~$40/month.

Switched everything to Ollama running locally. The migration was pretty straightforward since n8n just hits an HTTP endpoint. Changed the URL from api.openai.com to localhost:11434 and updated the request format.
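
For anyone wanting to replicate it, here's roughly what the swapped-out call looks like as a standalone script. This is a sketch, not my exact n8n node config: the model name and prompt are placeholders, and Ollama also exposes an OpenAI-compatible endpoint under /v1/chat/completions if you'd rather keep the old payload format.

```python
import requests

# Before: POST https://api.openai.com/v1/chat/completions with an Authorization header.
# After: POST to the local Ollama daemon -- no API key, slightly different payload/response.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3:8b",  # whatever you pulled with `ollama pull`
        "messages": [{"role": "user", "content": "Summarize this email: ..."}],
        "stream": False,       # one JSON object back instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
# OpenAI puts the text at choices[0].message.content; Ollama's native API puts it here:
print(resp.json()["message"]["content"])
```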

For most tasks (summarization, classification, drafting) the local models are good enough. Complex reasoning is worse but I don't need that for automation workflows.

Hardware: i7 with 16GB RAM, running Llama 3 8B. Plenty fast for async tasks.

[–] HK65@sopuli.xyz 11 points 4 days ago (1 children)

I actually did an experiment on doing just that. For context, I'm an experienced software engineer whose company buys me a ton of Claude usage, so I've had time to test what it can actually do, and I feel I'm capable of judging where it's good and where it falls short.

The way Claude Code works, there are actually multiple models involved: one for doing the coding, one "reasoning" model to keep the chain of thought and the context going, and a bunch of small specialized ones for odd jobs around the edges.
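
If you wanted to mimic that split with local models, it might look something like this. Very rough sketch: the role-to-model mapping and the model names are my guesses for an Ollama setup, not how Claude Code is actually wired internally.

```python
import requests

# Hypothetical role -> model mapping; substitute whatever you have pulled locally.
MODELS = {
    "code": "qwen2.5-coder:7b",  # does the actual code edits
    "reason": "llama3.1:70b",    # keeps the plan / chain of thought coherent
    "utility": "llama3.2:3b",    # odd jobs: titles, summaries, classification
}

def ask(role: str, prompt: str) -> str:
    """Send a prompt to the local Ollama model assigned to this role."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": MODELS[role],
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

# The big model plans, the coder model executes the plan.
plan = ask("reason", "Outline the steps to fix the failing test in utils.py")
patch = ask("code", f"Implement step 1 of this plan:\n{plan}")
```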

The thing that doesn't work yet is that the big reasoning model still has to be big; otherwise it hallucinates frequently enough to break the workflow. If you could get one of the big models to run locally, you'd be there. With recent advances in quantization and MoE models, though, it's getting close fast enough that I'd expect it to be generally available in a year or two.

Today the best I could do was a setup that took 150 gigs of RAM, 24 gigs of VRAM on AMD's top-of-the-line card, and 30 minutes for what takes Claude Code 1-2 minutes. But surprisingly, the output of the model was not bad at all.

[–] sobchak@programming.dev 1 points 3 days ago

You really only need a little more RAM than your GPU's VRAM (unless you're doing CPU offloading, which is extremely slow). Anyway, I did the same thing recently, and was surprised I was able to get a Qwen 9B model to fix a bug in a script I had. I think Sonnet would've fixed it in a lot fewer tries, but the 9B model got there eventually. I could've fixed it myself quicker and cleaner than both, but it was an interesting test.
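
Rough sizing math for why quantization matters here (approximate numbers, weights only, ignoring KV cache and runtime overhead):

```python
# Back-of-the-envelope: weight memory ~= params * bits-per-param / 8.
# 1B params at 1 byte each is ~1 GB, so the result below is in GB.
def approx_weight_gb(params_billion: float, bits_per_param: float) -> float:
    return params_billion * bits_per_param / 8

print(approx_weight_gb(9, 4))   # 9B model at 4-bit: ~4.5 GB, fits a mid-range GPU
print(approx_weight_gb(9, 16))  # same model at fp16: ~18 GB, needs a 24 GB card
```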