Selfhosted

Curious to know what the experience is like for those of you sticking to bare metal. I'd like to better understand what keeps such admins from migrating to containers, Docker, Podman, virtual machines, etc. What keeps you on bare metal in 2025?

[–] brucethemoose@lemmy.world 4 points 5 days ago* (last edited 5 days ago) (1 children)

In my case it’s performance and sheer RAM need.

GLM 4.5 needs something like 112GB of RAM plus every last megabyte of VRAM on the GPU, at least without quantizing it so aggressively that it becomes unusable. I'm already swapping a little and simply can't afford any extra overhead.
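
Rough math, if you're curious (the parameter count and quant level below are ballpark assumptions, not exact figures):

```python
# Back-of-the-envelope weight memory for a quantized model. Both numbers
# are assumptions for illustration only.
params = 355e9          # assumed total parameter count
bits_per_weight = 2.5   # assumed aggressive quantization level
print(f"~{params * bits_per_weight / 8 / 1e9:.0f} GB for weights alone")
# -> ~111 GB, before KV cache and runtime overhead
```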

I think containers may slow down CPU<->GPU transfers slightly, but don’t quote me on that.

[–] kiol@lemmy.world 1 points 5 days ago (1 children)

Can anyone confirm whether containers actually impact CPU-to-GPU transfers?

[–] brucethemoose@lemmy.world 2 points 5 days ago* (last edited 5 days ago)

To be clear, VMs absolutely have overhead; Docker/Podman is the open question. It might be negligible.

And this is a particularly weird scenario (since prompt processing literally has to shuffle ~112GB over the PCIe bus for each batch). Most GPGPU apps aren’t so sensitive to transfer speed/latency.
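
If anyone wants to actually measure it, here's a minimal sketch (assumes PyTorch with CUDA is available; run it on bare metal and then inside the container and compare the numbers):

```python
# Rough host->device bandwidth check. Run once on bare metal and once
# inside the container, then compare GiB/s.
import time
import torch

N = 1 << 30  # 1 GiB buffer
host = torch.empty(N, dtype=torch.uint8, pin_memory=True)  # pinned host memory
dev = torch.empty(N, dtype=torch.uint8, device="cuda")

# Warm-up copy so first-touch costs aren't measured
dev.copy_(host, non_blocking=True)
torch.cuda.synchronize()

start = time.perf_counter()
for _ in range(10):
    dev.copy_(host, non_blocking=True)  # 10 x 1 GiB over PCIe
torch.cuda.synchronize()
elapsed = time.perf_counter() - start
print(f"host->device: {10 / elapsed:.1f} GiB/s")
```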