this post was submitted on 24 Sep 2025
154 points (94.8% liked)

Selfhosted


Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

[–] 30p87@feddit.org 11 points 3 weeks ago (8 children)

That I've yet to see a containerization engine that actually makes things easier, especially once a service fails or needs any amount of customization. I have two main services in Docker, Piped and WebODM, both because I don't have the time (read: am too lazy) to write a PKGBUILD. Yet Docker steals more time than maintaining a PKGBUILD would: random crashes (undebuggable, as the docker command just hangs when I try to start one specific container), containers that don't come back properly after being updated or restarted by Watchtower, and debugging any problem with Piped is a chore, because logging in Docker is the most random thing imaginable. With systemd, logs are in journalctl, or in /var/log if explicitly configured or obviously useful (e.g. in multi-host nginx setups). With Docker, they could be in a logfile on the host, in one inside the container, or on stdout. Or nowhere, because why log at all when everything "just works"? (Yes, that's a problem created by container maintainers, but one you can't escape while using Docker. Or rather, in the time it costs you, you could more easily do a proper(!) bare-metal install.)

Also, if you want to use Unix sockets to manage permissions more tightly and to stop roleplaying a DHCP and DNS server for ports (i.e. remembering which ports are used by which of the 25 or so services), you'll either need to customize the container, or just use/write a PKGBUILD or similar for a bare-metal install.
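For what it's worth, the Docker daemon can at least be told to route all container stdout/stderr into journald, so everything ends up queryable in one place. A minimal sketch, assuming the default daemon config location (`/etc/docker/daemon.json`) and a restart of the docker service afterwards:

```json
{
  "log-driver": "journald"
}
```

With that set, `journalctl CONTAINER_NAME=piped` shows that container's output alongside everything else. It only catches what the container actually writes to stdout/stderr, though — services that log to files inside the container are still on their own.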

Also, I need to host a Python 2.7 / Django 2.x-or-so webapp (yes, I'm rewriting it), which I run in a Debian 13 VM with the Debian 9 and Debian 9 LTS repos, as that most closely resembles the original environment. It's the largest security risk in my setup while also being a public website, so into QEMU it goes.
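A legacy VM like that can be pinned under systemd so it survives reboots and gets restarted on failure. A rough sketch, assuming KVM is available; the disk image path, memory size, and forwarded port are all hypothetical:

```ini
# /etc/systemd/system/legacy-webapp-vm.service
[Unit]
Description=QEMU VM for legacy Python 2.7 Django webapp
After=network.target

[Service]
# -enable-kvm needs access to /dev/kvm; -nographic keeps it headless.
# Guest port 80 is exposed only on the host's loopback, for a reverse proxy.
ExecStart=/usr/bin/qemu-system-x86_64 \
    -enable-kvm \
    -m 2048 \
    -smp 2 \
    -drive file=/var/lib/vms/legacy-webapp.qcow2,if=virtio \
    -nic user,hostfwd=tcp:127.0.0.1:8080-:80 \
    -nographic
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

User-mode networking as sketched here keeps the VM off the LAN entirely, which fits the "largest security risk" framing; a tap/bridge setup would be the alternative if the VM needed to be a first-class network citizen.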

And, as I mentioned, stuff is either officially packaged by Arch, already in the AUR, or I put it into the AUR myself.

[–] renegadespork@lemmy.jelliefrontier.net 8 points 3 weeks ago (2 children)

Do you host on more than one machine? Containerization/virtualization begins to shine most brightly when you need to scale or migrate across multiple servers. If you're only running one server, I definitely see how bare metal is more straightforward.
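That's where the declarative side earns its keep: if a service is described in a compose file and its state lives in bind mounts next to it, migrating is mostly "rsync the directory, run it on the new host". A minimal sketch (service name, image, and paths are illustrative, not from any real deployment):

```yaml
# docker-compose.yml — everything the new host needs to know
services:
  webapp:
    image: example/webapp:latest    # hypothetical image; pin a real tag in practice
    restart: unless-stopped
    volumes:
      - ./data:/app/data            # state lives next to this file, easy to copy over
    ports:
      - "127.0.0.1:8080:8080"       # loopback only; put a reverse proxy in front
```

On one box this buys little over a systemd unit; the payoff is that the same file runs unchanged on the second and third box.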

[–] zod000@lemmy.dbzer0.com 3 points 3 weeks ago

This is a big part of why I don't use VMs or containers at home. All of those abstractions only start showing their worth once you scale them out.

[–] 30p87@feddit.org 2 points 3 weeks ago

One main server, with backup servers being very easy to get up and running, either by fully restoring the backup or by installing and restoring specific services. Since everything is backed up to a Hetzner Storage Box, I can always restore it (provided I have my USB sticks with the keyfiles).

I don't really see the need for multiple running hosts, apart from:

  • Router
  • Workstation, which has a 1070 in it, for when I need a GPU for something. My 1U server only has space for a low-profile, single-slot GPU/HPC processor, and one of those would cost far more than the value it would add over my old 1070.