I've always done things on bare metal, since I started self-hosting before containers were common. I recently switched my server to NixOS, which also solves the dependency hell issue that containers are supposed to solve.
I use a Raspberry Pi 4 with a 16GB SD card. I simply don't have enough memory or CPU power for 15 separate database containers, one for every service I want to use.
Erm. I'd just say there's no benefit in adding layers just for the sake of it.
It's just different needs. Say I have a machine that only runs a dedicated database; I'd install it directly, because, as I said, there's no advantage in making it more complicated.
Are you concerned about your self-hosted bare metal machine being a single point of failure? Or, are you concerned it will be difficult to reproduce?
Considering I have a full backup, all services are Arch packages, and all important data is on its own drive, I'm not concerned about anything.
I've not cracked the Docker nut yet. I don't get how I back up my containers and their data. I'd also need to transfer my Plex database into its container while switching from Windows to Linux. I love Linux, but I haven't figured out these two things yet.
Anything you want to back up (data directories, media directories, DB data), you would bind mount to a directory on the host. Then you can back it up just like everything else on the host.
All your docker data can be saved to a mapped local disk, then backup is the same as it ever is. Throw borg or something on it and you're gold.
Look into docker compose and volumes to get an idea of where to start.
You would leave your Plex config and DB files on the disk and then map them into the Docker container via the volumes parameter (the -v flag if you're on the command line rather than docker-compose). The same goes for any other Docker container where you want to persist data on the drive.
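As a rough illustration, the mapping could look something like this in a compose file (the lscr.io/linuxserver/plex image and the /srv paths are just assumptions; substitute your own):

```yaml
# Hedged sketch: host paths and image are placeholders, adjust to your setup.
services:
  plex:
    image: lscr.io/linuxserver/plex:latest  # assumed image; plexinc/pms-docker works similarly
    network_mode: host                      # Plex discovery is simplest with host networking
    environment:
      - TZ=Etc/UTC
    volumes:
      - /srv/plex/config:/config            # Plex database and config stay on the host disk
      - /srv/media:/media:ro                # media library, read-only inside the container
    restart: unless-stopped
```

Since /srv/plex/config is just a directory on the host, your normal backup tooling (borg, rsync, whatever) covers it, and a Windows-to-Linux move is roughly a matter of copying the old Plex data directory into that path before the first start (check Plex's migration docs for the exact folder layout).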
I use k3s and enjoy benefits like the following over bare metal:
- Configuration as code where my whole setup is version controlled in git
- Containers and avoiding dependency hell
- Built-in reverse proxy with the Traefik ingress controller. Combined with DNS in my OpenWRT router, all of my self-hosted apps can be accessed via appname.lan (e.g., jellyfin.lan, forgejo.lan)
- Declarative network policies with Calico, mainly to make sure nothing phones home
- Managing secrets securely in git with Bitnami Sealed Secrets
- Liveness probes that automatically “turn it off and on again” when something goes wrong (rough sketch after this list)
These are just some of the benefits for a single server; add more servers and they compound.
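To make the ingress and liveness-probe bullets concrete, here's a rough sketch of the manifests involved (the names, port 8096, and the jellyfin.lan host are assumptions for a Jellyfin-style app, not my exact setup):

```yaml
# Hedged sketch: a Deployment with a liveness probe, its Service, and a Traefik Ingress.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:latest
          ports:
            - containerPort: 8096
          livenessProbe:             # failing probes get the container restarted automatically
            httpGet:
              path: /health
              port: 8096
            initialDelaySeconds: 30
            periodSeconds: 15
---
apiVersion: v1
kind: Service
metadata:
  name: jellyfin
spec:
  selector:
    app: jellyfin
  ports:
    - port: 8096
      targetPort: 8096
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin
spec:
  ingressClassName: traefik         # k3s's bundled Traefik picks this up
  rules:
    - host: jellyfin.lan            # resolved by the router's local DNS
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  number: 8096
```

All of it is plain YAML in git, so rebuilding the cluster is mostly a `kubectl apply` away; only the secrets need the Sealed Secrets step.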
Edit:
Sorry, I realize this post is asking why go bare metal, not why k3s and containers are great. 😬
All I have is Minecraft and a Discord bot, so I don't think it justifies VMs.
It depends on the service and the desired level of the stack.
I generally run services directly on things like a Raspberry Pi, because VMs and containers add complexity that isn't really warranted for the task.
At work, I run services in docker in VMs because the benefits far outweigh the complexity.
I run my NAS and Home Assistant on bare metal.
- NAS: OMV on a Mac mini with a separate drive case
- Home Assistant: HAOS on a Lenovo M710q, since 1) it has a USB Zigbee adapter and 2) HAOS on bare metal is more flexible
Both of those are much easier to manage on bare metal. Everything else runs virtualized on my Proxmox cluster, whether it's Docker stacks on a dedicated VM, an application that I want to run separately in an LXC, or something heavier in its own VM.
Mainly that I don't understand how to use containers... or VMs that well... I have an old MyCloud NAS and a little pucky PC that I wanted to run simple QoL services on... Home Assistant, Jellyfin etc...
I got Proxmox installed on it, I can access it.... I don't know what the fuck I'm doing... There was a website that let you just run shell scripts to install a lot of things... but now none of those work because it says my version of Proxmox is wrong (when it's not?)... so those don't work....
And at least VMs are easy(ish) to understand. Fake computer with OS... easy. I've built PCs before, I get it..... Containers just never want to work, or I don't understand wtf to do to make them work.
I wanted to run a Zulip or Rocket.chat for internal messaging around the house (wife and I both work at home, kid does home/virtual school)... wanted to use a container because a service that simple doesn't feel like it needs a whole VM..... but it won't work...
In my case it’s performance and sheer RAM need.
GLM 4.5 needs like 112GB RAM and absolutely every megabyte of VRAM from the GPU, at least without the quantization getting too compressed to use. I’m already swapping a tiny bit and simply cannot afford the overhead.
I think containers may slow down CPU<->GPU transfers slightly, but don’t quote me on that.
There’s one thing I’m hosting on bare metal, a WebDAV server. I’m running it on the host because it uses PAM for authentication, and that doesn’t work in a container.
Anything you want dedicated performance from, or that requires fine-tuning for specific performance use cases. They're out there.
What are you doing running your VMs on bare metal? Time is a flat circle.
I'm running Kube on bare metal.
My file server is also the container/VM host. It does NAS duties while containers/VMs do the other services.
OPNsense is its own box because I prefer to separate it for security reasons.
Pi-hole is on its own RPi because that was easier to set up. I might move that functionality to the AdGuard plugin on OPNsense.
Obviously, you host your own hypervisor on your own or rented bare metal.