this post was submitted on 10 Jan 2024
77 points (86.7% liked)

Selfhosted

Hi! Question in the title.

I get that it's super easy to set up. But is it really worthwhile to have something that:

  • runs everything as root (not many well-built images with proper user management, it seems)
  • you cannot really know what's in the images: you must trust whoever built them
  • lots of mess in the system (mounts, fake networks, rules...)

I always host on bare metal when I can, but sometimes (Immich, I'm looking at you!) it seems almost impossible.

I get Docker in a work environment, but for self-hosting? Is it really worthwhile? I'd like to hear your opinions, fellow hosters.

top 50 comments
[–] beta_tester@lemmy.ml 48 points 10 months ago (25 children)
  • Podman solves the root issue
  • you can inspect the stuff. You don't have to, and with popular, widespread images there is little reason to be paranoid
  • I have no mess

It's great that you do install things on bare metal; I did that in the beginning, until I discovered Docker, and I will never go back. Docker/Podman compose is just so good.
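
A minimal compose file looks roughly like this (the image name, ports and paths are placeholders, not anything specific):

```yaml
# Sketch of a compose file for one self-hosted service
services:
  myapp:
    image: example/myapp:1.2.3      # pin a tag rather than :latest
    user: "1000:1000"               # run as an unprivileged UID/GID if the image supports it
    ports:
      - "8080:8080"
    volumes:
      - ./myapp-data:/data          # relative bind mount keeps data next to the compose file
    restart: unless-stopped
```

The same file works with docker compose up -d or podman-compose up -d.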

[–] redcalcium@lemmy.institute 27 points 10 months ago (1 children)

you can inspect the stuff. You don't have to, and with popular, widespread images there is little reason to be paranoid

Dive is a great tool for inspecting docker images. I wish I'd found it sooner.
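
Usage is roughly:

```sh
# Browse an image layer by layer, showing which files each layer adds or changes
dive nginx:latest

# Non-interactive mode (handy in CI) that prints an image-efficiency report instead
CI=true dive nginx:latest
```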

[–] droolio@feddit.uk 5 points 10 months ago

Thank you for posting this, hadn't heard of it before.

[–] Molecular0079@lemmy.world 3 points 10 months ago (2 children)

Out of curiosity, what reverse proxy container do you use that can run rootless in Podman? My main issue, and feel free to correct me if I am wrong, is that most of them require root. And then it's not possible to easily connect those containers to the same network as your rootless containers, so your other containers have to be root anyway. I don't really want my other containers to be host-accessible; I want them to be accessible only from within the podman network that the reverse proxy has access to.

And then there are issues where you have to enable lingering processes for normal users and also let them access ports < 1024, it makes using docker-compose a pain, etc. I haven't really found a good solution for rootless, but I really want to eventually move that way.
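
The workarounds I know of look roughly like this (the sysctl lowers the privileged-port floor system-wide, which is part of why it feels clunky):

```sh
# Allow unprivileged processes (including rootless containers) to bind ports >= 80
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
# Persist the setting across reboots
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/90-unprivileged-ports.conf

# Keep a user's rootless containers running while they are not logged in
sudo loginctl enable-linger youruser
```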

[–] beta_tester@lemmy.ml 2 points 10 months ago* (last edited 10 months ago)

Tbh, I've got a Pi that runs only the reverse proxy; it works and I don't touch it until it breaks. It's still Docker: Nginx Proxy Manager.

[–] shalva97@lemmy.world 39 points 10 months ago (3 children)

Life is too short to install everything on bare metal.

[–] umbrella@lemmy.ml 36 points 10 months ago (1 children)

people are rebuffing the criticism already.

here's the main advantage imo:

no messy system or leftovers. some programs use directories all over the place and it gets annoying fast if you host many services. sometimes you will have some issue that requires you to do quite a bit of hunting and redoing things.

docker makes this painless. you can deploy and redeploy stuff easily and quickly, without a mess. updates are painless and quick too, with everything neatly self-contained.

much easier to maintain once you get the hang of things.

[–] million@lemmy.world 2 points 10 months ago

Quick addition: for the "messy" argument, the way I would articulate it for folks running servers is that it helps you move from pets to cattle.

[–] Moonrise2473@feddit.it 32 points 10 months ago (1 children)

About the root problem: as of now, new installs try to let you run everything as a limited user. And even if the program is run as root inside the container, in order to escape from it the attacker would need a double zero-day exploit (one for doing RCE in the container, one to escape the container).

The alternative to "don't really know what's in the image" usually is: "just download this easy, minified and incomprehensible trustmeimtotallynotavirus.sh script and run it as root". That requires much more trust than a container that you can delete, with no traces, in literally seconds.

If the program that you want to run requires Python modules or Node modules, then it will make much more of a mess on the system than a container.

Downgrading to a previous version (or a beta preview) of the app you're running because of bugs is trivial: you just change a tag and launch it again (sketch below). Doing this on bare metal requires you to be a terminal guru.
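
Something like this, where only the tag changes (the image name and versions are made up for illustration):

```yaml
services:
  myapp:
    # was example/myapp:1.2.3 - roll back by editing the tag and re-running `docker compose up -d`
    image: example/myapp:1.2.2
```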

Finally, migrating to a fresh new server is just docker compose down, then rsync to the new server, and then docker compose up -d. And no praying to ten different gods because after three years you forgot how you installed the app on bare metal.
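
Roughly (paths and hostname are illustrative, assuming the compose file and its bind-mounted data live in one directory):

```sh
docker compose down                                          # stop the stack on the old host
rsync -aHAX --numeric-ids ./myapp/ newserver:/opt/myapp/     # copy compose file + data
ssh newserver 'cd /opt/myapp && docker compose up -d'        # bring it up on the new host
```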

Docker is perfect for ordinary people like us self-hosting at home; the professionals at work use Kubernetes.

[–] haui_lemmy@lemmy.giftedmc.com 29 points 10 months ago* (last edited 10 months ago) (3 children)

Imo, yes.

  • only run containers from trusted sources (btw, Google, MS and Apple have proven they can't be trusted either)
  • run apps without dependency hell
  • even if someone breaks in, they’re not in your system but in a container
  • have everything web facing separate from the rest
  • get per app resource statistics

Those are just what was in my head. Probably more to be said.

[–] invertedspear@lemm.ee 6 points 10 months ago (1 children)

Also, the ability to snapshot an image, goof around with changes, and restore the snapshot if you don't like them makes it much easier to experiment than trying to unwind all the changes you made.

[–] peter@feddit.uk 20 points 10 months ago

Docker is messy and not ideal, but it was born out of necessity: getting multiple services to coexist outside of containers can be a nightmare, updating and moving configuration is a nightmare, and removing things can leave stuff behind that gets messier and messier over time. Docker just standardises most of the configuration whilst requiring minimal effort from the developer.

[–] Hexarei@programming.dev 19 points 10 months ago (1 children)

Others have addressed the root and trust questions, so I thought I'd mention the "mess" question:

Even the messiest bowl of ravioli is easier to untangle than a bowl of spaghetti.

The mounts/networks/rules and such aren't "mess", they are isolation. They're commoditization. They're abstraction - Ways to tell whatever is running in the container what it wants to hear, so that you can treat the container as a "black box" that solves the problem you want solved.

Think of Docker containers less like pets and more like cattle, and it very quickly justifies a lot of that stuff because it makes the container disposable, even if the data it's handling isn't.

[–] MartianSands@sh.itjust.works 19 points 10 months ago (1 children)

I find it makes my life easier, personally, because I can set up and tear down environments I'm playing with easily.

As for your user & permissions concern, are you aware that docker these days can be configured to map "root" in the container to a different user? Personally I prefer to use podman though, which doesn't have that problem to begin with
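
That mapping is the user-namespace (userns-remap) feature; a minimal sketch of turning it on, assuming there are no existing daemon.json settings to preserve:

```sh
# Map container root (UID 0) onto an unprivileged subordinate UID/GID range on the host.
# "default" creates and uses the dockremap user.
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```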

[–] micka190@lemmy.world 2 points 10 months ago* (last edited 10 months ago) (1 children)

I find it makes my life easier, personally, because I can set up and tear down environments I’m playing with easily.

Same here. I self-host a bunch of dev tools for my personal toy projects, and I decided to migrate from Drone CI to Woodpecker CI this week. Didn't have to worry about uninstalling anything, learning what commands I need to start/stop/restart Woodpecker properly, etc. I just commented-out my Drone CI/Runner services from my docker-compose file, added the Woodpecker stuff, pointed it to my Gitea variables and ran docker compose up -d.

If my server ever crashes, I can just copy it over and start from scratch.

[–] ssdfsdf3488sd@lemmy.world 18 points 10 months ago

Because if you use relative bind mounts you can move a whole docker compose set of containers to a new host with docker compose stop, then rsync it over, then docker compose up -d.

Portability and backup are dead simple.

[–] MigratingtoLemmy@lemmy.world 11 points 10 months ago

Docker can be run rootless. Podman is rootless by default.
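
As a sketch of what rootless looks like in practice (the image and paths are just an example):

```sh
# Everything here runs as your own user - no daemon, no root
mkdir -p ./uptime-kuma-data
podman run -d --name uptime-kuma \
  -p 3001:3001 \
  -v ./uptime-kuma-data:/app/data \
  docker.io/louislam/uptime-kuma:1
# on SELinux systems you may need a :Z suffix on the volume
```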

I build certain containers from scratch. Very popular FOSS software can be trusted, but if you're that paranoid, you should probably be running the bare minimum of software in the first place.

It's a mess if you're not used to it. But yes, normal unix networking is somewhat simpler (like someone mentioned, LXC containers can be a decent idea). Well, you'll realise that Docker is not really top-dog in terms of complexity when you start playing with the big boys like full-fledged k8s

[–] DeltaTangoLima@reddrefuge.com 10 points 10 months ago (2 children)

To answer each question:

  • You can run rootless containers but, importantly, you don't need to run Docker as root. Should the unthinkable happen, and someone "breaks out" of docker jail, they'll only be running in the context of the user running the docker daemon on the physical host.
  • True but, in my experience, most docker images are open source and have git repos - you can freely download the repo, inspect the build files, and build your own. I do this for some images I feel I want 100% control of, and have my own local Docker repo server to hold them.
  • It's the opposite - you don't really need to care about docker networks unless you have an explicit need to contain a given container's traffic to its own local net, and bind mounts are just maps to physical folders/files on the host system, with the added benefit of mounting read-only where required (sketch below).
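
The read-only part is just a :ro suffix on the mount, e.g. (paths illustrative):

```yaml
services:
  myapp:
    image: example/myapp:1.2.3
    volumes:
      - /srv/myapp/config:/config:ro   # the container can read but not modify this host path
```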

I run containers on top of containers - Proxmox cluster, with a Linux container (CT) for each service. Most of those CTs are simply a Debian image I've created, running Docker and a couple of other bits. The services then sit inside Docker (usually) on each CT.

It's not messy at all. I use Portainer to manage all my Docker services, and Proxmox to manage the hosts themselves.

Why? I like to play.

Proxmox gives me full separation of each service - each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.

Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

Let's say there's a new contender that competes with Immich. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT).

I can spin up a Proxmox CT from my own template, use my Ansible playbook to provision Docker and all the other bits, load it in my Portainer management platform, and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.

I have a play with the competitor for a bit. If I don't like it, I just delete the CT and move on. If I do, I can point my photos... hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shutdown, maybe not - just in case I discover something I don't like about the new kid on the block.

[–] possiblylinux127@lemmy.zip 9 points 10 months ago

Well, docker tends to be more secure if you configure it right. As far as images go, it really is just a matter of getting your images from official sources. If there isn't an image already available, you can make one.

The big advantage to containers is that they are highly reproducible. You no longer need to worry about issues that arise when running on the host directly.

Also if you are looking for a container runtime that runs as a local user you should check out podman. Podman works very similarly to docker and can even run your containers as a systemd user service.
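
A rough sketch of that systemd user-service workflow (container name and image are illustrative; newer podman releases can also do this declaratively with quadlet files):

```sh
# Create the container once, then generate a unit that recreates it on start
podman create --name freshrss -p 8081:80 docker.io/freshrss/freshrss:latest
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name freshrss > ~/.config/systemd/user/freshrss.service
systemctl --user daemon-reload
systemctl --user enable --now freshrss.service
loginctl enable-linger "$USER"   # keep it running while you're logged out
```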

[–] oranki@sopuli.xyz 7 points 10 months ago

Portability is the key for me, because I tend to switch things around a lot. Containers generally isolate the persistent data from the runtime really well.

Docker is not the only, or even the best, way IMO to run containers. If I was providing services for customers, I would definitely build most container images daily in some automated way. Well, I already do that for quite a few.

The mess is only a mess if you don't really understand what you're doing, same goes for traditional services.

[–] aleq@lemmy.world 5 points 10 months ago

the biggest selling point for me is that I'll have a mounted folder or two, a shell script for creating the container, and then if I want to move the service to a new computer I just move these files/folders and run the script. it's awesome. the initial setup is also a lot easier because all dependencies and stuff are bundled with the app.

in short, it's basically the exe-file of the server world

runs everything as root (not many well-built images with proper user management, it seems)

that's true I guess, but for the most part shit's stuck inside the container anyway so how much does it really matter?

you cannot really know what's in the images: you must trust whoever built them

you kinda can, reading a Dockerfile is pretty much like reading a very basic shell script for the most part. regardless, I do trust most creators of images I use. most of the images I have running are either created by the people who made the app, or official docker images. if I trust them enough to run their apps, why wouldn't I trust their images?
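
For anyone who hasn't looked at one, a made-up but typical example:

```dockerfile
FROM node:20-alpine            # start from a known base image
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev          # install only pinned production dependencies
COPY . .
USER node                      # drop root before running the app
EXPOSE 3000
CMD ["node", "server.js"]
```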

lots of mess in the system (mounts, fake networks, rules...)

that's sort of the point, isn't it? stuff is isolated

[–] Semi-Hemi-Demigod@kbin.social 5 points 10 months ago
  1. I don't run any of my containers as root
  2. Dockerfiles aren't hard to read so you can pretty easily figure out what they're doing
  3. I find managing dependencies for non-containerized services to be worse than one messy docker directory I never look at

Plus having all my services in a couple docker-compose files also means I can move them around incredibly easily.

[–] specseaweed@lemmy.world 4 points 10 months ago

I know enough to be dangerous. I know enough to follow FAQs, but I'm dumb enough not to back up like I should.

So I’d be running my server on bare metal and have a couple services going and sooner or later, shit would get borked. Shit that was miles past my competence to fix. Sometimes I’d set up a DB wrong, or break it, or an update would screw it up, and then it would all fall apart and I’m there cursing and wiping and starting all over.

Docker fixes that completely. It’s not perfect, but it has drastically lowered my time working on my server.

My server used to be a hobby that I loved dumping hours into. Now, I just want shit to work.

[–] msage@programming.dev 4 points 10 months ago

I have VMs on my metal, one specific for containers.

Though I use LXC. Docker started with LXC, then grew bigger, and I don't like how big it is.

If I can set up one simple NAT and run everything inside a container, I don't need Docker.

Docker's main advantage is the hub.

[–] avidamoeba@lemmy.ca 3 points 10 months ago

In short, yes, yes it's worth it.

[–] eluvatar@programming.dev 3 points 10 months ago

About the trust issue. There's no more or less trust than running on bare metal. Sure you could compile everything from source but you probably won't, and you might trust your distro package manager, but that still has a similar problem.

[–] bluGill@kbin.social 3 points 10 months ago

Docker gives you a few different things which might or might not matter. Note that all of the following can be gotten in ways other than docker as well. Sometimes those ways are better, but often what is better is just opinion. There are downsides to some of the following as well that may not be obvious.

With docker you can take a container and roll it out to 100s of different machines quickly. This is great for scaling if your application can scale that way.

With docker you can run two services on the same machine that use incompatible versions of some library. It isn't unheard of to try to upgrade your system and discover something you need isn't compatible with the new library, while something else you need to upgrade requires the new library. Docker means each service gets separate copies of what it needs, and when you upgrade one you can leave the other behind.

With docker you can test an upgrade and then when you roll it out know you are rolling out the same thing everywhere.

With docker you can move a service from one machine to a different one somewhat easily if needed. Either to save money on servers, or to use more of them as more power is needed. Since the service itself is in a docker container, you can just start the container elsewhere and change pointers.

With docker if someone does manage to break into a container they probably cannot break into other containers running on the same system. (if this is a worry you need to do more risk assessment, they can still do plenty of damage)

[–] knobbysideup@sh.itjust.works 3 points 10 months ago

I concur with most of your points. Docker is a nice thing for some use cases, but if I can easily use a package or set up my own configuration, then I will do that instead of using a docker container every time. My main issues with docker:

  • Containers are not updated with the rest of the host OS
  • firewall and mounting complexities which make securing it more difficult
[–] vzq@lemmy.blahaj.zone 2 points 10 months ago* (last edited 10 months ago) (6 children)

How is this meaningfully different from using Deb packages? Or building from source without inspecting the build commands? Or even just building from source without auditing the source?

In the end, Dockerfiles are just instructions for running software to set up other software. Just like every other shell script or config file in existence since the mid-seventies.

[–] Gooey0210@sh.itjust.works 2 points 10 months ago* (last edited 10 months ago)

Check out NixOS, it's like the next step after Docker

Ah, and a side note: docker is not fully open source

[–] scrubbles@poptalk.scrubbles.tech 2 points 10 months ago

I'll answer your question of why with your own frustration - bare metal is difficult. Every engineer uses a different language/framework/dependencies/what have you, and usually they'll conflict with others. Docker solves this by containing those apps in their own space. Their code, projects and dependencies are already installed and taken care of; you don't need to worry about it.

Take yourself out of the homelab and put yourself in a sysadmin's shoes. Now, instead of knowing how packages may conflict with each other, or whether updating the OS will break applications, you just need to know docker. If you know docker, you can run any docker app.

So, yes, volumes and environment variables are a bit awkward at first. But that's because they are a standard: every docker container is going to need a couple of mounts, a couple of variables, a port or two open, and if you're going crazy maybe a GPU. It doesn't matter if you're running 1 or 50 containers on a system, you aren't going to get conflicts.

As for the security concerns, they are indeed security concerns. Again, imagine you're a sysadmin - you could direct developers that they can't use root, and that images need to be built on OSes with the latest patches. But you're at home, so you're at the mercy of whoever built the image.

Now that being said, since you're at their mercy, their code isn't going to get much safer whether you run it on bare iron or containerized. So, do you want to spend hours on each app figuring out how to run it, or spend a few hours now learning docker and then have it standardized?
