moonpiedumplings

joined 2 years ago
[–] moonpiedumplings@programming.dev 3 points 3 weeks ago (1 children)

I don't really understand why this is a concern with docker. Are there any particular features you want from version 29 that version 26 doesn't offer?

The entire point of docker is that it doesn't really matter what version of docker you have, the containers can still run.

Debian's version of docker receives security updates in a timely manner, which should be enough.

I recommend libvirt + virt-manager as an alternative to Hyper-V.

The cool thing about virt-manager is that you can use it over SSH.
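For example (the user and hostname are placeholders), you can point virt-manager at a remote libvirt daemon like this:

```sh
# Connect virt-manager to a remote libvirt daemon over SSH
# (assumes libvirtd is running on the remote machine and SSH key auth is set up)
virt-manager -c 'qemu+ssh://user@remote-host/system'
```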

[–] moonpiedumplings@programming.dev 8 points 3 weeks ago (4 children)

You are adding a new repo, but you should know that the Debian repos already contain docker (via docker.io) and docker-compose.
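On Debian that's just:

```sh
# Install Docker and Compose from the Debian repos instead of adding Docker's own repo
sudo apt install docker.io docker-compose
```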

[–] moonpiedumplings@programming.dev 4 points 3 weeks ago (1 children)

I use authentik, which enables single sign on (the same account) between services.

Authentik is a bit complex and irritating at times, so I would recommend voidauth or kanidm as alternatives for most self hosters.

[–] moonpiedumplings@programming.dev 1 points 3 weeks ago* (last edited 3 weeks ago)

Would you use the cli?

One of the cool things I liked about calibre is that extensions worked via the CLI as well, which made it easy to do batch operations on ebooks.
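For example, a batch conversion is just a shell loop over calibre's ebook-convert (the format choice here is arbitrary):

```sh
# Convert every epub in the current directory to pdf using calibre's CLI
for book in *.epub; do
    ebook-convert "$book" "${book%.epub}.pdf"
done
```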

No, they added a beta vpn feature.

[–] moonpiedumplings@programming.dev 1 points 3 weeks ago (1 children)

Does it require Docker to be installed, your user to be in the docker group, and the docker daemon to be running?

Just an FYI, having the ability to create containers and do other docker operations is equivalent to having root: https://docs.docker.com/engine/security/#docker-daemon-attack-surface

It's not really accurate to say that your playbooks don't require root to run when they basically do.
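To illustrate what that link means in practice: anyone who can talk to the docker daemon can do something like this, no sudo involved:

```sh
# Mount the host's root filesystem into a container and chroot into it as root
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```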

[–] moonpiedumplings@programming.dev 5 points 1 month ago* (last edited 1 month ago)

Yeah. I'm seeing a lot of it in this thread, tbh. People are styling themselves as IT admins or cybersec people rather than just hobbyists. Of course, maybe they do do it professionally as well, but I'm seeing an assumption from some people in this thread that it's dangerous to self host even if you don't expose anything, or they are assuming that self hosting implies exposing stuff to the internet.

Tailscale in to your machine and then be done with it; otherwise only access it via the local network or VPN.
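In practice that's something like:

```sh
# Join the machine to your tailnet, then only ever reach services over the tailnet or LAN
sudo tailscale up
tailscale ip -4   # use this 100.x.y.z address instead of exposing ports publicly
```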

Now, about actually keeping the services secure, beyond just having them on a private subnet and not really worrying about them. To be explicit, this is referring to fully/partially exposed setups (like VPN access for a significant number of people).

There are two big problems IMO: Default credentials, and a lack of automatic updates.

Default credentials are pretty easy to handle. Docker compose YAML files put the credentials right there. Just read them and change them. It should be noted that you should still be doing this even if you are using a GUI-based deployment.
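A minimal sketch of what I mean (the search terms and file names are just what a typical compose setup uses):

```sh
# Find the default credentials in the compose file...
grep -iE 'password|secret|token' docker-compose.yml
# ...generate a strong replacement...
openssl rand -base64 32
# ...then put the new value into docker-compose.yml (or its .env file) before bringing the stack up
```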

This is where docker has really held the community back, in my opinion: it lacks automatic updates. There do exist services like watchtower to automatically update containers, but things like databases or config file schemas don't get migrated to the next version, which means the next version can break things, and there is no guarantee of stability between two versions.

This means that most users, after they use the docker-compose method recommended by the software, are required to manually log in every so often and run docker compose pull and up to update. Sometimes they forget. Combine this with Shodan/ZoomEye (search engines for internet-connected devices) and you will find plenty of people who forgot, because docker punches stuff through firewalls as well.
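The routine in question, which is easy to forget:

```sh
# Manual update routine for a compose-based deployment
docker compose pull   # fetch newer images
docker compose up -d  # recreate containers on the new images
```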

GUIs don't really make it easier to keep up with this, either. Docker GUIs are nice, but now you have users who don't realize that Docker apps don't update themselves, and that they probably should be doing that. Same issue with Yunohost (which doesn't use docker, as I just learned today. Interesting).

I really like Kubernetes because it lets me do automatic upgrades of services (within limits). But this comes at an extreme complexity cost. I have to deploy another piece of software on top of Kubernetes to automatically upgrade the applications, and then another to automatically do some of the database migrations. And no GUI would really free me from this complexity, because you end up needing such an understanding of the system that a pretty interface doesn't really save you.

Another commenter said:

20 years ago we were doing what we could manually, and learning the hard way. The tools have improved and by now do most of the heavy lifting for us. And better tools will come along to make things even easier/better. That’s just the way it works.

And I agree with them, but I think things kinda stalled with Docker, as its limitations have created barriers to making things any easier. The tools that try to make things "easier" on top of docker basically haven't done their job, because they haven't offered auto updates, or reverse proxies, or abstracted away the knowledge required to write YAML files.

Share your project. Then you'll hear my thoughts on it. Although without even looking at it, my opinion is that if you have based it on docker and have decided to simply run docker-compose on YAML files under the hood, you've kinda already fucked up, because you haven't actually abstracted away the knowledge needed to use Docker, you've just hidden it from the user. But I don't know what you're doing.

Your service should have:

  • A lack of static default credentials. The best way is to autogenerate them (a minimal sketch follows this list).
    • You can also force users to set their own, but this is less secure than machine-generated, imo
  • Auto updates: I don't think docker-compose is going to be enough.
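The autogeneration sketch, entrypoint-style (the paths and filenames here are hypothetical):

```sh
# On first run, generate an admin password and persist it,
# instead of shipping a static default credential
if [ ! -f /data/admin_password ]; then
    umask 077
    openssl rand -base64 24 > /data/admin_password
    echo "Generated admin password: $(cat /data/admin_password)"
fi
```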

Further afterthoughts:

Simple in implementation is not the same thing as simple in usage. Simple in implementation means easy to troubleshoot as well, as there will be fewer moving parts when something goes wrong.

I think operating tech isn't really that hard, but I think there is a "fear" of technology, where whenever anyone sees a command line, or even just some prompt they haven't seen before, they panic and throw a fit.

EDIT and a few thoughts:

Adding further thoughts to my second afterthought, I can provide an example: I installed an adblocker (uBlock Origin) for my mom. It blocked a link-shortening site. My mom panicked and called me over, even though the option to temporarily unblock the site was right there, clear as day.

I think that GUI projects overestimate the skill of normal users while underestimating the skill of those who actually use them. I know people who use a GUI for stuff like this because it's "easier", but when something under the hood breaks, they are able to go in and fix it in 5 minutes, whereas an actual beginner could spend two weeks on it with no progress.

I think a good option is to abstract away configuration with something akin to nix-gui. It's important to note that this doesn't actually make things less "complex" or "easier" for users. All the configs and dials they will have to learn and understand are still there. But for some reason, whenever people see "code" they panic and run away, yet when it's a textbox in a form or a switch they will happily figure everything out. And then when you eventually hit them with the "HAHA, you've actually been using this tool that you would have otherwise run away from all along", they will be chill, because they recognize all the dials as the same, just presented in a different format.

Another afterthought: If you are hosting something for multiple users, you should make sure their passwords are secure somehow. Either generate and give them passwords/passphrases, or use something like Authentik and single sign on where you can enforce strong passwords. Don't let your users just set any password they want.

Not at all. In fact I remember the day my server was hacked because I’d left a service running that had a vulnerability in it.

Was this server on an internal network?

I like Incus a lot, but it's not as easy to create complex virtual networks as it is with Proxmox, which is frustrating in educational/learning environments.

This is untrue; Proxmox is not a wrapper around libvirt. It has its own API and its own methods of running VMs.

[–] moonpiedumplings@programming.dev 4 points 1 month ago* (last edited 1 month ago) (1 children)

Yes, this is where docker's limitations begin to show and people begin looking at tools like Kubernetes for things like advanced, granular control over the flow of network traffic.

Because such a thing is basically impossible in Docker, AFAIK. Responses like the ones you are getting appear when the thing a user is attempting to do is anywhere from significantly non-trivial to basically impossible.
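For comparison, this is roughly what that kind of granular control looks like in Kubernetes: a NetworkPolicy that only lets certain pods reach a database (the labels below are made up for the example, and it only does anything if your CNI enforces NetworkPolicies):

```sh
# Hypothetical NetworkPolicy: only pods labeled app=linkwarden may reach the database pods
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-linkwarden-only
spec:
  podSelector:
    matchLabels:
      app: linkwarden-db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: linkwarden
EOF
```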

An easy way around this, if you still want to use Docker, is to address the bit below directly:

no isolation anymore, i.e qbit could access (or at least ping) to linkwarden’s database since they are all in the same VPN network.

As long as you have changed the default passwords for the databases and services, and kept the services up to date, it should not be a concern that the services have network-level access to each other: without the ability to authenticate to or exploit each other, there is nothing they can do.

If you insist on trying to get some level of network isolation between services while continuing to use Docker, your only real option is iptables* rules. This is where things get very painful, because iptables rules have no persistence by default and they are kind of a mess to deal with. Also, docker implements its own iptables setup instead of using standard ones, which results in weird setups like Docker containers bypassing the firewall when they expose ports.

You will need a fairly good understanding of iptables in order to do this. In addition, I will warn you in advance that you cannot create iptables rules based on IP addresses, as the IP addresses of docker containers are ephemeral and change; you would have to create rules based on the hostnames of containers, which adds further complexity compared to just blocking by IP. EDIT: OR, you could give your containers static IP addresses.
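If you go the static address route, the least painful place to hook in is the DOCKER-USER chain, which Docker consults before its own rules. A rough sketch (the addresses are placeholders for whatever you assign in your compose file):

```sh
# Drop traffic from one container to another (e.g. qbittorrent -> linkwarden's database),
# using the static IPs assigned in docker-compose.yml
sudo iptables -I DOCKER-USER -s 172.30.0.10 -d 172.30.0.20 -j DROP
# Note: this does not persist across reboots unless you save it (e.g. with iptables-persistent)
```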

A good place to start is here. You will probably have to spend a lot of time learning all of the terminology and concepts listed here, and more. Perhaps you have better things to do with your time?

*Um, 🤓 ackshually it's nftables, but the iptables-nft command offers a transparent compatibility layer enabling easier migrations from the older and no longer used iptables

EDIT: And of course nobody has done this before and chatgpt isn't helpful. These problems are the kind where chatgpt/LLMs begin to fall apart and are completely unhelpful. It's just "no, you're wrong" over and over again as you have to force your way through using actual expertise.

You can block traffic to a Docker container by its hostname using iptables, but there’s an important nuance: iptables works with IP addresses, not hostnames. So you’ll first need to resolve the container’s hostname to its IP address and then apply the rule.

You’re right—container IPs change, so matching a single IP is brittle. Here are robust, hostname-friendly ways to block a container that keep working across restarts.

Exactly — good catch. The rule: sudo iptables -I DOCKER-USER 1 -m set --match-set blocked_containers dst -j DROP matches any packet whose destination is in that set, regardless of direction, so it also drops outgoing packets from containers to those IPs.

You’re absolutely right on both points:

With network_mode: "container:XYZ", there is no “between-containers” network hop. Both containers share the same network namespace (same interfaces, IPs, routing, conntrack, and iptables). There’s nothing to firewall “between” them at L3/L2—the kernel sees only one stack.

Alright, I will confess that I didn't know this. This piece of info from chatgpt changes what you want to do from "significantly non-trivial" to "basically impossible". It means that the containers do not have separate IP addresses/networking for you to isolate from each other; they all share a single network namespace. You would have to isolate traffic based on other factors, like the process ID or user ID, which are not really inherently tied to the container.

As a bonus:

Docker’s ICC setting historically controls inter-container comms on bridge networks (default bridge or a user-defined bridge with enable_icc=). It doesn’t universally control every mode, and it won’t help when two containers share a netns.

Useful for understanding the terminology, I guess, but there is a class of problems these tools really struggle to solve. I like to assign problems like this to people; they will often attempt to use chatgpt at first, but then they get frustrated and quickly realize chatgpt is not a substitute for using your brain.
