Quick post about a change I made that's worked out well.

I was using the OpenAI API for automations in n8n — email summaries, content drafts, that kind of thing. Was spending ~$40/month.

Switched everything to Ollama running locally. The migration was pretty straightforward since n8n just hits an HTTP endpoint. Changed the URL from api.openai.com to localhost:11434 and updated the request format.
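Not my actual n8n nodes, but at the HTTP level the change looks roughly like this (the model names and prompt are just placeholders):

```python
import requests

prompt = "Summarize this email thread: ..."  # placeholder automation task

# Before: OpenAI's hosted chat completions API (paid, needs an API key)
openai_resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer sk-..."},  # key redacted
    json={
        "model": "gpt-4o",  # whatever hosted model the workflow used
        "messages": [{"role": "user", "content": prompt}],
    },
)
print(openai_resp.json()["choices"][0]["message"]["content"])

# After: Ollama's native chat endpoint on the same box (no key needed)
ollama_resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3:8b",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(ollama_resp.json()["message"]["content"])
```

Ollama also exposes an OpenAI-compatible endpoint at http://localhost:11434/v1/chat/completions, so depending on which n8n node you use, swapping the base URL and dropping the real API key may be most of the "request format" change.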

For most tasks (summarization, classification, drafting) the local models are good enough. Complex reasoning is worse but I don't need that for automation workflows.

Hardware: i7 with 16GB RAM, running Llama 3 8B. Plenty fast for async tasks.

[–] TheMightyCat@ani.social 12 points 4 days ago* (last edited 4 days ago) (1 children)

It depends what OP was using before, but going from something like GPT5.2 to Llama 3 8B will be a massive difference (although OP says they only use it for basic tasks, so that does offset it).

Llama 3 already being a very old model doesn't help either.

I run Qwen3.5-35B-A3B-AWQ-4bit, which, while leagues ahead of Llama 3 8B, is still a very noticeable step down from something like GPT5.2.

This is not to say open source is bad; if one had the resources to run something like Qwen3.5-397B-A17B, it would also be up there.

[–] Valmond@lemmy.dbzer0.com 2 points 4 days ago (3 children)

What kind of hardware do you need to run those models?

[–] TheMightyCat@ani.social 5 points 4 days ago* (last edited 4 days ago)

I'm running 2x4090; the 35B fits very comfortably in that.

For large models like the 397B, there are several ways to do it without a ton of money. I've seen posts of people using arrays of used 3090s with good results.

The other option is CPU inference although with current RAM prices that is less cost effective.

I was looking at maybe an array of Milk-V JUPITER2 boards, since vLLM added RISC-V support; that could be very cost-effective.

[–] Jakeroxs@sh.itjust.works 5 points 4 days ago

Depends on how much quantization, but still fairly beefy; I couldn't run it on my homelab with a 3080 Ti, for example.

I generally use smaller 8-12B models and they're alright, depending on the task.

[–] suicidaleggroll@lemmy.world 4 points 4 days ago* (last edited 4 days ago)

In general, you take the model size in billions of parameters (eg: 397B), divide it by 2 and add a bit for overhead, and that’s how much RAM/VRAM it takes to run it at a “normal” quantization level. For Qwen3.5-397B, that’s about 220 GB. Ideally that would be all VRAM for speed, but you can offload some or all of that to normal RAM on the CPU, you’ll just take a speed hit.
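A quick sketch of that arithmetic (the "divide by 2" corresponds to roughly 4-bit quantization, i.e. about half a byte per parameter; the overhead factor here is just an illustrative guess):

```python
def estimate_memory_gb(params_billions: float, overhead: float = 1.1) -> float:
    """~0.5 bytes per parameter at ~4-bit quantization, plus a bit of
    overhead for the KV cache and activations (the 1.1 is a rough guess)."""
    return params_billions * 0.5 * overhead

print(round(estimate_memory_gb(8)))    # Llama 3 8B   -> ~4 GB
print(round(estimate_memory_gb(397)))  # Qwen3.5-397B -> ~218 GB, i.e. the ~220 GB above
```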

So for something like Qwen3.5-397B, it takes a pretty serious system, especially if you’re trying to do it all in VRAM.