this post was submitted on 22 Jul 2024
30 points (94.1% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


How do you decide what's safe to run?

I recently ran Gossa on my home server using Docker, mounting it to a folder. Since I used rootless Docker, I was curious - if Gossa were to be a virus, would I have been infected? Have any of you had experience with Gossa?

top 32 comments
[–] kevincox@lemmy.ml 14 points 4 months ago (1 children)

Docker (and Linux containers in general) is not a strong security boundary.

The reason is simply that the Linux kernel is far too large and complex an interface to be vulnerability-free. Privilege escalations and container escapes are found regularly, and there are also frequent Docker-specific container escape vulnerabilities.

If you want strong security boundaries you should use a VM, or even better, separate hardware. This is why cloud container services run containers from different clients in different VMs; containers are not good enough to isolate untrusted workloads.

if Gossa were to be a virus, would I have been infected?

I would assume yes. This would require the virus to know an unpatched exploit for Linux or Docker, but these frequently appear. There are likely many for sale right now. If you aren't a high value target and your OS is fully patched then someone probably won't burn an exploit on you, but it is entirely possible.

[–] possiblylinux127@lemmy.zip 4 points 4 months ago (1 children)

While Docker isn't perfect, saying it is completely insecure is untrue. It is true that serious vulnerabilities pop up once in a while, but claiming it is trivial to escape a container is too big a statement to be true. You can misconfigure a Docker container in a way that allows an escape, but that's about it for the most part. The Linux kernel isn't easy to exploit; if it were, it wouldn't be used so heavily in security-sensitive environments.

For added security you could use Podman with a dedicated user for sandboxing. If the Podman container is breached, it will have few places to go. Podman also tends to have better isolation in general. There isn't any way to break out of a properly configured Docker container right now, but if there were, it would mean that an attacker has root.

[–] kevincox@lemmy.ml 6 points 4 months ago (2 children)

I never said it was trivial to escape, I just said it wasn't a strong security boundary. Nothing is black and white. Docker isn't going to stop a resourceful attacker but you may not need to worry about attackers who are going to spend >$100k on a 0-day vulnerability.

The Linux kernel isn't easy to exploit; if it were, it wouldn't be used so heavily in security-sensitive environments

If any "security sensitive" environment is relying on Linux kernel isolation I don't think they are taking their sensitivity very seriously. The most security sensitive environments I am aware of doing this are shared hosting providers. Personally I wouldn't rely on them to host anything particularly sensitive. But everyone's risk tolerance is different.

use podman with a dedicated user for sandboxing

This is only ever so slightly better. Users have existed in the kernel for a very long time, so bugs in them may be harder to find, but at the end of the day the Linux kernel is just too complex to provide strong isolation.

There isn't any way to break out of a properly configured Docker container right now, but if there were, it would mean that an attacker has root

I would bet $1k that within 5 years we find out that this is false. Obviously all of the publicly known vulnerabilities have been patched. But more are found all of the time. For hobbyist use this is probably fine, but you should acknowledge the risk. There are almost certainly full kernel-privilege code execution vulnerabilities in the current Linux kernel, and it is very likely that at least one of these is privately known.

[–] possiblylinux127@lemmy.zip 1 points 4 months ago (2 children)

I think speculation is generally a bad security practice. What you need is least privilege and defense in depth. At some point you need to trust something somewhere. Kernel-level exploits are very rare.

[–] kevincox@lemmy.ml 5 points 4 months ago (1 children)

I think assuming that you are safe because you aren't aware of any vulnerabilities is bad security practice.

Minimizing your attack surface is critical. Defense in depth is just one way to minimize your attack surface (but a very effective one). Putting your container inside a VM is excellent defense in depth. Running your container as a non-root user barely is, because you still have one Linux-kernel-sized hole in your Swiss-cheese defense model.

[–] possiblylinux127@lemmy.zip -1 points 4 months ago (1 children)

How is the Linux kernel more insecure than anything else? It isn't the massive gaping hole you make it sound like. In 20 years, how many serious organization-destroying vulnerabilities have there been? It is pretty solid.

I guess we should all use whatever proprietary software thing you think is best

[–] kevincox@lemmy.ml 4 points 4 months ago

The Linux kernel is less secure for running untrusted software than a VM because most hypervisors have a far smaller attack surface.

how many serious organization-destroying vulnerabilities have there been? It is pretty solid.

The CVEs beg to differ. The reason most organizations don't get destroyed is that they don't run untrusted software on the same kernels that process their sensitive information.

whatever proprietary software thing you think is best

This is a ridiculous attack. I never suggested anything about proprietary software. Linux's KVM is pretty great.

[–] Lemongrab@lemmy.one 4 points 4 months ago (1 children)

It is not speculation; it is reducing attack surface. Security is preemptive. Docker/Podman are not strong isolation solutions. Rare does not mean we shouldn't protect against the chance of kernel vulnerabilities. The Linux kernel is around 30 million lines of code and written in a memory-unsafe language. Code isn't safe just because we don't know the vulnerabilities; this is basic cybersec reasoning.

[–] possiblylinux127@lemmy.zip -4 points 4 months ago (1 children)

Write me an exploit then

If it is so insecure, prove it

[–] Lemongrab@lemmy.one 3 points 4 months ago* (last edited 4 months ago)

That is not how security works. You must protect against known and unknown attack vectors. I am only pointing out weaknesses of Docker and other Linux containers that share the kernel with the host and/or run as root. I'm not saying anything original or crazy; just read up on the security of these technologies and their limits. I am not a malware designer, I am a security researcher.

Look into gVisor and Kata Containers for info on how to improve the security of containers.

Here are some readings for you:

https://redlib.tux.pizza/r/docker/comments/eakd50/help_can_i_safely_run_malware_inside_a_container/
https://www.csoonline.com/article/1303004/vulnerabilities-in-docker-other-container-engines-enable-host-os-access.html
https://www.panoptica.app/research/7-ways-to-escape-a-container
https://blog.trailofbits.com/2019/07/19/understanding-docker-container-escapes/
https://www.securityweek.com/leaky-vessels-container-escape-vulnerabilities-impact-docker-others/
https://www.cybereason.com/blog/container-escape-all-you-need-is-cap-capabilities

[–] loudwhisper@infosec.pub 1 points 3 months ago

Also, hypervisors get escape vulnerabilities every now and then. I would say that on a realistic scale of difficulty of escape, a good container (it doesn't matter if it's Docker or something else) is a good security boundary.

If this is not the case, I wonder what your scale extremes are.

A good container has very little attack surface, since it can have almost no code or tools available, a read-only fs, no user privileges or capabilities whatsoever, and possibly even a syscall filter. Sure, the kernel is the same, but then the only alternative is to split that per application (VM-like) and you move the problem to hypervisors.
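
For concreteness, a minimal sketch of that kind of locked-down container using the Docker SDK for Python (docker-py); the image name is a placeholder and the exact options depend on what the image tolerates, so treat it as an illustration rather than a recipe:

```python
import docker

client = docker.from_env()

# Hypothetical image; swap in whatever you actually run.
container = client.containers.run(
    "some/image:latest",
    detach=True,
    read_only=True,                           # read-only root filesystem
    cap_drop=["ALL"],                         # drop every Linux capability
    security_opt=["no-new-privileges:true"],  # block setuid-style privilege gains
    user="1000:1000",                         # run as an unprivileged UID/GID
    pids_limit=100,                           # cap the number of processes
    mem_limit="256m",                         # cap memory
)
print(container.id)

# Docker already applies a default seccomp syscall filter; a custom, tighter
# profile can be supplied via security_opt, e.g. "seccomp=/path/profile.json".
```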

In the context of the question asked, I think the gains from reducing the attack surface are completely outweighed by the loss in functionality and the waste of resources.

[–] MaggiWuerze@feddit.org 6 points 4 months ago

Honestly, security is not the main reason I use containers; ease of use is. Docker (or containerization in general) makes it really easy to keep a clean host system when you regularly try out new services: there's no baggage left behind when you remove a container, and once you remove its mount/volume, you are usually rid of it pretty cleanly. Additionally, it makes migration to new machines/distros way easier and less time-consuming.

I don't rely on Docker separation to keep my machine safe, although I probably could.

[–] Lemongrab@lemmy.one 5 points 4 months ago (2 children)

Idk how to decide what is safe or not, but as a warning, Docker containers can escape trivially and have access to the kernel.

[–] just_another_person@lemmy.world 5 points 4 months ago (1 children)

This is not true. Perhaps on an already at-risk or exploitable machine, but even then it's not trivial, and this is not a widespread thing that happens everywhere all the time.

[–] kevincox@lemmy.ml 8 points 4 months ago

It is. Privilege escalation vulnerabilities are common. There is basically a 100% chance of unpatched container escapes in the Linux kernel. Some of these are very likely privately known and available for sale. So even if you are fully patched a resourceful attacker will escape the container.

That being said if you are a low-value regular-joe patching regularly, the risk is relatively low.

[–] verstra@programming.dev 2 points 4 months ago (2 children)

Can you expand on this wild claim? The whole point of containers is isolation, so are you saying that containers fail at that all the time?

[–] asap@lemmy.world 6 points 4 months ago (1 children)

They might be talking about posts like this (which I would love to have refuted, as this kind of info has so far kept me from using Docker significantly):

https://security.stackexchange.com/a/169649

[–] ancoraunamoka@lemmy.dbzer0.com 1 points 4 months ago

There is nothing to refute; it's 100% correct.

[–] Lemongrab@lemmy.one 0 points 4 months ago* (last edited 4 months ago)

Docker/Podman and LXC Linux containers share the same kernel with the host machine. Root in the container is root, period (in the case of rootful containers). Even without root, much of the data on your machine is readable by any user. With an exploit to escape the container (which are common), the malicious program has root on the machine. This is a known attack vector against Linux containers. VMs are much better for isolating untrusted software from the host OS.
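
As an illustration of the rootful/rootless distinction, here is a small Python sketch you can run inside a container (reading the same file from a shell works just as well):

```python
import os

# In a rootless (user-namespaced) container, UID 0 inside the container is
# mapped to an unprivileged UID on the host; /proc/self/uid_map shows the mapping.
print("effective uid inside container:", os.geteuid())

with open("/proc/self/uid_map") as f:
    print("uid_map:", f.read().strip())

# "0 0 4294967295" means no remapping: container root is host root (rootful).
# Something like "0 100000 65536" means container root maps to host UID 100000,
# i.e. an unprivileged account on the host (rootless).
```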

[–] chameleon@fedia.io 3 points 4 months ago (1 children)

Personally, I do believe that rootless Docker/Podman have a strong enough security boundary for personal/individual self-hosting where you have decent trust in the software you're running. Linux privilege escalation and container escape exploits fetch decent amounts of money on the exploit market, and nobody's gonna waste them on some people running software ending in *arr when Zerodium will pay five figures for a local privilege escalation or container escape. If you're running a business or you might be targeted for whatever reason (journalist or whatever) then that doesn't apply.

If you want more security, there are container runtimes that do cooler security stuff under the hood, like Firecracker/Kata Containers implementing a managed VM, or Google's gVisor which very strongly intercepts kernel syscalls and essentially reimplements Linux in userspace. Those are used by AWS and Google Cloud respectively. You can integrate those into Docker, though not all networking/etc options are supported.
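
A sketch of what that integration can look like from the Docker SDK for Python, assuming gVisor is installed and registered with the Docker daemon under the runtime name "runsc" (the image is a placeholder):

```python
import docker

client = docker.from_env()

container = client.containers.run(
    "some/image:latest",
    detach=True,
    runtime="runsc",  # run this container under gVisor instead of the default runc
)
print(container.id)
```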

[–] kevincox@lemmy.ml 1 points 3 months ago

where you have decent trust in the software you’re running.

I generally say that containers and traditional UNIX users are good enough isolation for "mostly trusted" software. Basically I know that they aren't going to actively try to escalate their privilege but may contain bugs that would cause problems without any isolation.

Of course it always depends on your risk. If you are handling sensitive user data and run lots of different services on the same host, you may start to worry about remote code execution vulnerabilities and will be interested in stronger isolation, so that an RCE in any one service doesn't allow escalation to access all data being processed by other services on the host.

[–] hperrin@lemmy.world 3 points 4 months ago* (last edited 4 months ago) (1 children)

Nothing is safe to run unless you write it yourself. You just have to trust the source. Sometimes that’s easy, like Red Hat, and sometimes that’s hard. Sometimes it bites you in the ass, and sometimes it doesn’t.

Docker is a good way to sandbox things; just be aware of the permissions and access you give a container. If you give it access to your network, that's basically like letting the developer connect their computer to your wifi. It's also not perfect, so again, you have to trust the source. Do some research, make sure they're trustworthy.
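
As a sketch of being deliberate about that access (paths and image are placeholders, again using the Docker SDK for Python): give the container only the one folder it needs, read-only, and no network at all unless it genuinely needs one.

```python
import docker

client = docker.from_env()

container = client.containers.run(
    "some/image:latest",
    detach=True,
    network_mode="none",  # no network; attach a dedicated bridge network only if required
    volumes={
        "/srv/shared": {"bind": "/data", "mode": "ro"},  # one folder, read-only
    },
)
```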

[–] Mora@pawb.social 1 points 4 months ago

You just have to trust the source. Sometimes that’s ~~easy, like~~ Red Hat, and ~~sometimes~~ that’s hard.

FTFY

[–] j4k3@lemmy.world 2 points 4 months ago

Everything I run is behind a whitelist firewall on an external device, largely for this reason but also out of curiosity and to learn.

[–] possiblylinux127@lemmy.zip 2 points 4 months ago (1 children)

Podman

[–] miau@lemmy.sdf.org 2 points 4 months ago (1 children)

I don't get the downvotes. If OP is into containers and security, Podman sure is worth considering.

[–] kevincox@lemmy.ml 2 points 3 months ago

IMHO it doesn't majorly change the equation. Plus in general a single-word comment is not adding much to the discussion. I like Podman and use it over Docker, but in terms of the original question I think my answer would be the same if OP was using Podman.

[–] exu@feditown.com 2 points 4 months ago

Even without escaping the container a lot of stuff can be done. Maybe the program includes a cryptominer or acts as a node in a botnet.

There's no way to be sure unless you verify the source yourself.

[–] just_another_person@lemmy.world 1 points 4 months ago* (last edited 4 months ago) (1 children)

Containers are isolated from the host by default. If you give a container a mount, it can interact with that mount but not with the rest of the running host. If you had further isolated and protected that mount, you would have been fine. Since you ran it as your unprivileged user, it's one step further removed from being able to hijack other parts of the machine, and if it were a "virus", all it could really do is write files to the mount and fill up your disk, or drop a binary and hope you execute it.

[–] asap@lemmy.world 6 points 4 months ago* (last edited 4 months ago) (2 children)

Containers are isolated from the host by default.

Are you certain about that? My understanding is that Docker containers are literally just processes running on the host (ideally rootless), but with no isolation in the way that VMs are isolated from the host.

If you have some links for further reading it would be great, as I have been extremely cautious with my Docker usage so far.

I haven't found anything to refute this, but this post from 2017 states:

In 2017 alone, 434 linux kernel exploits were found, and as you have seen in this post, kernel exploits can be devastating for containerized environments. This is because containers share the same kernel as the host, thus trusting the built-in protection mechanisms alone isn’t sufficient.

If someone exploits a kernel bug inside a container, they exploited it on the host OS. If this exploit allows for code execution, it will be executed on the host OS, not inside the container.

If this exploit allows for arbitrary memory access, the attacker can change or read any data for any other container.
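
The shared-kernel point is easy to verify yourself; a tiny sketch, run once on the host and once inside any container on that host, prints the same kernel release because there is only one kernel:

```python
import platform

# Inside a container this reports the host's kernel, not a separate one.
print("kernel release:", platform.release())
print("kernel version:", platform.version())
```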

[–] kevincox@lemmy.ml 1 points 3 months ago

There is definitely isolation. In theory (if containers worked perfectly as intended) a container can't see any processes from the host, sees different filesystems, possibly a different network interface, and basically everything else. There are some things that are shared, like CPU, memory, and disk space, but these can also be limited by the host.

But yes, in practice the Linux kernel is wildly complex and these interfaces don't work quite as well as intended. You get bugs in permission checks and even memory corruption and code execution vulnerabilities. This results in unintended ways for code to break out of containers.

So in theory the isolation is quite strong, but in practice you shouldn't rely on it for security critical isolation.
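
A small sketch of what that in-theory isolation looks like from inside a container: PID 1 is the container's entrypoint rather than the host's init, only the container's own processes appear in /proc, and the hostname is the container's.

```python
import os

with open("/proc/1/comm") as f:
    print("PID 1 is:", f.read().strip())    # the containerized app, not systemd/init

pids = [d for d in os.listdir("/proc") if d.isdigit()]
print("visible processes:", len(pids))      # typically a handful inside a container

print("hostname:", os.uname().nodename)     # the container's ID/name, not the host's
```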

[–] possiblylinux127@lemmy.zip 1 points 4 months ago

The Linux kernel recently became a CVE Numbering Authority. That means there are now tons of CVEs coming out, but the overwhelming majority aren't easily exploitable. They can be rated pretty high with no actual impact. Furthermore, a lot of them require a very specific setup with specific kernel components. It is best to look at the exploitability score and the recommended CISA actions.