this post was submitted on 11 Apr 2026
66 points (83.0% liked)

Selfhosted


Hey :) For a while now I've been using gpt-oss-20b on my home lab for lightweight coding tasks and some automation. I'm not really up to date with the current self-hosted LLMs, and since the model I'm using was released at the beginning of August 2025 (an eternity from an LLM development perspective), I wanted to tap the collective wisdom of Lemmy and maybe replace my model with something better out there.

Edit:

Specs:

GPU: RTX 3060 (12GB vRAM)

RAM: 64 GB

gpt-oss-20b does not fit into the vRAM completely, but it can be partially offloaded and is reasonably fast (enough for me)
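For reference, partial offloading works by keeping as many of the model's layers in VRAM as will fit and running the rest on CPU. A rough sketch of the sizing arithmetic (all numbers are placeholders, not real gpt-oss-20b figures):

```python
# Rough heuristic for how many transformer layers can be offloaded to the GPU
# when the whole model doesn't fit in VRAM. Layer count, model size, and the
# overhead reserve below are illustrative placeholders.

def layers_that_fit(model_size_gb: float, n_layers: int,
                    vram_gb: float, overhead_gb: float = 1.5) -> int:
    """Estimate how many layers fit in VRAM, reserving headroom
    for the KV cache and runtime context."""
    per_layer_gb = model_size_gb / n_layers
    budget = max(vram_gb - overhead_gb, 0.0)
    return min(n_layers, int(budget / per_layer_gb))

# Hypothetical example: a 13 GB model with 24 layers on a 12 GB card
# fits only partially, so the remaining layers run on CPU.
print(layers_that_fit(13.0, 24, 12.0))  # -> 19
```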

all 31 comments
[–] Jozzo@lemmy.world 35 points 1 month ago (1 children)

I find Qwen3.5 is the best at tool calling and agent use; otherwise Gemma4 is a very solid all-rounder and should be the first one you try. Tbh gpt-oss is still good to this day, are you running into any problems with it?

[–] tanka@lemmy.ml 7 points 1 month ago (1 children)

No problems per se. I just thought that I had not checked for an update for a longer time.

[–] jacksilver@lemmy.world 2 points 1 month ago

You're probably aware, but updating the model periodically is a good idea, because things do change over time.

A model from two years ago was trained on data that is at least two years old, meaning any changes in technology, code, or world events since then won't be reflected in the model.

[–] ejs@piefed.social 21 points 1 month ago

I suggest looking at LLM arena leaderboards filtered by open-weight models. They offer complete, statistically detailed benchmarks and are usually quite up to date when new models come out. The new Gemma that just came out might be the best for a single GPU, and if you have a lot of VRAM, check out the larger Chinese models.

[–] Gumus@lemmy.dbzer0.com 14 points 1 month ago

I'd say Qwen 3.5 and Gemma 4 beat GPT OSS in every aspect.

[–] iceberg314@slrpnk.net 11 points 1 month ago (1 children)

I also recommend gemma4 or qwen3.5. Both are super solid in my experience for how lightweight they are.

[–] NoFun4You@lemmy.world 2 points 1 month ago (1 children)

Still can't get my gemma to give me complete, bug-free components

[–] iceberg314@slrpnk.net 1 points 1 month ago

I guess I have been using gemma4 for more role-playing games. Qwen3.5 actually seems to be the better coder.

[–] tal@lemmy.today 9 points 1 month ago

I'm not on there, but you might have more luck in !localllama@sh.itjust.works

You might also want to list the hardware that you plan to use, since that'll constrain what you can reasonably run.

[–] cron@feddit.org 6 points 1 month ago

The latest open weights model from google might be a good fit for you. The 26B model works pretty well on my machine, though the performance isn't great (6 tokens per second, CPU only).

[–] zorflieg@lemmy.world 5 points 1 month ago* (last edited 1 month ago)

Gemma4 E4B at quant8 will fit in 12 GB and is good

[–] jaschen306@sh.itjust.works 4 points 1 month ago

I'm running gemma4 26b MOE for most of my agent calls. I use glm5:cloud for my development agent because 26b struggles when the context window gets too big.

[–] DieserTypMatthias@lemmy.ml 3 points 1 month ago

Qwen is pretty good. Also try LFM models.

[–] carzian@lemmy.ml 3 points 1 month ago

I'm in the same boat. You'll get better responses if you post your machine specs.

[–] Evotech@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

I’d use some Chinese model. Qwen3.5 Claude 4.6 distilled abliterated is what I use

[–] sompreno@lemmy.zip 2 points 1 month ago (1 children)

What are your computer specs?

[–] tanka@lemmy.ml 2 points 1 month ago (1 children)

I did just update my post with the specs. Maybe it takes a while to federate?

[–] sompreno@lemmy.zip 1 points 1 month ago

I must not have refreshed, ignore my comment

[–] nutbutter@discuss.tchncs.de 2 points 1 month ago

Have you tried the new gemma4 models? The E4B fits in the 12 GB of memory and is pretty good. Or you can use the 31B too, if you're okay with offloading to CPU.

[–] Kirk@startrek.website 2 points 1 month ago (3 children)

Just curious, what does "some automation" entail? I thought LLMs could only work with text, like summarize documents and that sort of thing.

[–] Jozzo@lemmy.world 6 points 1 month ago (1 children)

It's done by software wrapping an LLM, not just a raw LLM. They do only work with text, but you can get the model to output the text "get_weather(mylocation)", and instead of showing that directly to the user, the software running on top of the LLM runs a "get_weather" function that calls some weather API. The result of that function is then output to the user.

Any time you see an "AI" taking "actions", this is what happens in the background for every action.
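A minimal sketch of that loop in Python (the `get_weather` name comes from the comment above; the hard-coded model output and weather data are stand-ins for a real LLM and a real API):

```python
# Minimal tool-calling loop: the "model" only ever emits text; the wrapper
# recognizes tool-call text, runs the real function, and returns its result
# to the user instead of the raw model output.
import re

def get_weather(location: str) -> str:
    # Stand-in for a real weather API call.
    return f"14C and cloudy in {location}"

TOOLS = {"get_weather": get_weather}

def fake_llm(prompt: str) -> str:
    # A real LLM would decide on its own whether to emit a tool call;
    # here we hard-code one for illustration.
    return 'get_weather("Berlin")'

def run_agent(prompt: str) -> str:
    output = fake_llm(prompt)
    match = re.fullmatch(r'(\w+)\("([^"]*)"\)', output.strip())
    if match and match.group(1) in TOOLS:
        # The model asked for a tool: run it and return the result.
        return TOOLS[match.group(1)](match.group(2))
    return output  # plain text answer, no tool needed

print(run_agent("What's the weather in Berlin?"))  # -> 14C and cloudy in Berlin
```

Real agent frameworks add structured tool schemas, multi-step loops, and memory on top, but the text-in/text-out core is the same.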

[–] a1studmuffin@aussie.zone 3 points 1 month ago (1 children)

These days they can also chain together tools, keep a working memory etc. Look at Claude Code if you're curious. It's come very far very quickly in the last 12 months.

[–] Kirk@startrek.website 1 points 1 month ago

OP said coding AND "some automation", what is being automated?

[–] theunknownmuncher@lemmy.world 1 points 1 month ago

How much VRAM?