
I currently have a home server which I use a lot and which holds a few important things, so I'd kindly like to ask for help making this setup safer.

I have an OpenWrt router on my home network with the firewall active. The only open ports are 443 (for all my services) and 853 (for DoT).

I am behind NAT, but I have IPv6, so I use a domain that points to my IPv6 address, which is how I access my server when I am not on the LAN and how I share stuff with friends.

On port 443 I have nginx acting as a reverse proxy for all my services, and on port 853 I have AdGuard Home. I use a Let's Encrypt certificate with this proxy.

Nginx, AdGuard Home and almost all of my services run in containers. I use rootless Podman, the network driver is pasta, and no container has "--net host". The containers can still reach host services because they have the "--map-guest-addr" option set, so I don't know whether this is actually any safer than "--net host".
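As a sanity check on the "--net host" point, here is a minimal Python sketch that lists the running containers and reports which network mode each one uses, flagging anything on host networking. It assumes Podman's Docker-compatible JSON output (`podman ps --format json` plus the `HostConfig.NetworkMode` field from `podman inspect`); exact field names can vary between Podman versions.

```python
#!/usr/bin/env python3
"""Quick audit: flag any running container that uses host networking.

A minimal sketch assuming Podman's Docker-compatible inspect output
(HostConfig.NetworkMode); a rootless pasta setup would normally report
something like "pasta" here rather than "host".
"""
import json
import subprocess


def podman_json(*args: str):
    """Run a podman subcommand and parse its JSON output."""
    out = subprocess.run(
        ["podman", *args], check=True, capture_output=True, text=True
    ).stdout
    return json.loads(out)


def main() -> None:
    containers = podman_json("ps", "--format", "json") or []
    for c in containers:
        cid = c["Id"]
        name = (c.get("Names") or ["<unnamed>"])[0]
        inspect = podman_json("inspect", cid)[0]
        mode = inspect.get("HostConfig", {}).get("NetworkMode", "unknown")
        flag = "  <-- host networking!" if mode == "host" else ""
        print(f"{name}: network mode = {mode}{flag}")


if __name__ == "__main__":
    main()
```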

I have two ways of accessing the server via SSH, either password+2FA or an SSH key, but the SSH port is LAN-only, so I believe this is fine.

My main concern is that I have a lot of personal data on this server: some things I access only locally, such as family photos and docs (these are literally not accessible over WAN and I wouldn't want them to be), and some less critical things which are indeed accessible externally, such as my calendars and tasks (using CalDAV and Baikal), for example.

I run daily encrypted backups to OneDrive using restic+Backrest, so if the server were to die I believe this would be fine. But I wouldn't want anyone to actually get access to that data, although I believe an intruder would more likely be interested in running cryptominers or something like that.

I am not concerned about DoS attacks, because I don't think I am a worthy target, and even if one were to happen I can wait a few hours to bring the server back up.

I have heard a lot about WireGuard, but I don't really understand how it adds security; I would basically just be changing which ports I open. Or am I missing something?

So I was hoping we could talk about ways to improve my server's security.

[–] ShortN0te@lemmy.ml 1 points 2 weeks ago (1 children)

During that time, your data is encrypted but you don't know because when you open a file, your computer decrypts it and shows you what you expect to see.

First time I've heard of that. Are you sure? It would be really risky, since you'd basically need to hijack the entire filesystem communication to do that. Also, for that to work, the encryption's private and public keys would need to be present on the system at runtime. Really risky and unlikely to be the case, imho.

[–] miau@lemmy.sdf.org 1 points 2 weeks ago (1 children)

I don't know much about ransomware, but that's what got me concerned. I always assumed that if I were infected, restic would just create a new snapshot of the files and I'd be able to restore after nuking the server.

[–] ShortN0te@lemmy.ml 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I doubt that this is the case. The complexity and risk involved in decrypting on the fly make it really unrealistic, and I've never heard of it being done (I haven't heard of everything, but still).

The ransomware would also need to differentiate between the user and the backup program. And when you do incremental backups (like restic) with some monitoring, you would notice the huge amount of new data being pushed to your repo.

Edit: The important thing about your backup is to protect it against overwrites and deletes, and to use separate admin credentials that are not managed by the AD or LDAP of the server being backed up.
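The "you'd notice the spike" point is easy to automate. Below is a minimal Python sketch, assuming restic's `backup --json` output (a final message with `message_type == "summary"` carrying a `data_added` byte count) and that `RESTIC_REPOSITORY`/`RESTIC_PASSWORD` are already set in the environment; the backup paths and alert threshold are placeholders.

```python
#!/usr/bin/env python3
"""Run a restic backup and warn when a snapshot adds far more data than usual.

A minimal sketch of the monitoring idea above; an unusual spike in new data
is roughly what a ransomware-style mass rewrite would look like.
"""
import json
import subprocess

BACKUP_PATHS = ["/srv/data"]      # placeholder: whatever you back up
ALERT_THRESHOLD = 2 * 1024**3     # placeholder: 2 GiB of new data


def run_backup() -> dict:
    """Run `restic backup --json` and return the parsed summary message."""
    proc = subprocess.run(
        ["restic", "backup", "--json", *BACKUP_PATHS],
        check=True, capture_output=True, text=True,
    )
    summary = {}
    for line in proc.stdout.splitlines():
        try:
            msg = json.loads(line)
        except json.JSONDecodeError:
            continue
        if msg.get("message_type") == "summary":
            summary = msg
    return summary


def main() -> None:
    summary = run_backup()
    added = summary.get("data_added", 0)
    print(f"snapshot {summary.get('snapshot_id', '?')}: "
          f"{added / 1024**2:.1f} MiB of new data")
    if added > ALERT_THRESHOLD:
        # Hook up mail/ntfy/whatever notification you already use here.
        print("WARNING: unusually large amount of new data in this snapshot")


if __name__ == "__main__":
    main()
```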

[–] miau@lemmy.sdf.org 1 points 2 weeks ago (2 children)

I see, I appreciate you sharing your knowledge on the matter.

Yeah, I thought about the spike in size, which I would definitely notice because the amount of data is pretty stable and I have limited cloud storage.

Regarding your last point, I currently have everything under one user account: the data I am backing up, the applications and restic itself all run under the same user. Would it be a good idea to run restic as root? Or as a different service account?

[–] ShortN0te@lemmy.ml 2 points 2 weeks ago (1 children)

You want your backup to stay functional even if the system is compromised, so yes, another system is required for that, or push it to the cloud. It's important that you do not allow deleting or editing of the backup even if the credentials used for backing up are compromised; basically, append-only storage.

Most cloud storage services like Amazon S3 (or other S3-compatible providers like Backblaze) offer such a setting.
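For illustration, a minimal boto3 sketch of that append-only setting on the S3 API: create the bucket with Object Lock enabled and set a default compliance retention, so backup objects cannot be deleted or overwritten for a fixed window even with the backup credentials. The bucket name and retention period are placeholders, and Backblaze B2 exposes the equivalent option through its S3-compatible API.

```python
#!/usr/bin/env python3
"""Create an S3 bucket whose objects cannot be deleted or rewritten for N days.

A minimal sketch of the append-only idea using S3 Object Lock in compliance
mode; adjust credentials/region/endpoint for your own provider.
"""
import boto3

BUCKET = "my-restic-backups-example"   # placeholder
RETENTION_DAYS = 30                    # placeholder


def main() -> None:
    s3 = boto3.client("s3")

    # Object Lock can only be enabled at bucket creation time
    # (this also turns on versioning for the bucket).
    s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

    # Default retention: nothing (not even the backup credentials) can delete
    # or overwrite an object for RETENTION_DAYS after it is written.
    s3.put_object_lock_configuration(
        Bucket=BUCKET,
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {
                "DefaultRetention": {"Mode": "COMPLIANCE", "Days": RETENTION_DAYS}
            },
        },
    )
    print(f"{BUCKET}: object lock enabled, {RETENTION_DAYS}-day compliance retention")


if __name__ == "__main__":
    main()
```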

[–] miau@lemmy.sdf.org 1 points 2 weeks ago

Oh, now I get what you mean, thanks for the explanation.

Yeah, it makes sense. I had originally gone with OneDrive because of the much cheaper price, but I will take a look at S3-compatible storage and consider migrating in the future.

good idea to run restic as root

As a general rule, run absolutely nothing as root unless there's absolutely no other way to do what you're trying to do. And, frankly, there's maybe a dozen things that must be root, at most.

One of the biggest hardening things you can do for yourself is to always, always run everything as the lowest privilege level you can to accomplish what you need.

If all your data is owned by a user, run the backup tool as that user.

If it's owned by several non-privileged users, then you want to make sure that the group permissions let you access it.
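One way to put that into practice is a tiny wrapper that drops to the data-owning user before launching the backup tool, picking up that user's supplementary groups so group-readable data still works. A minimal Python sketch; the username and restic command are placeholders.

```python
#!/usr/bin/env python3
"""Run the backup tool as the (non-root) user that owns the data.

If started as root (e.g. from a system timer), drop to the data owner
before exec'ing restic; otherwise just run as the current user.
"""
import os
import pwd

BACKUP_USER = "backupdata"                       # placeholder: owner of the files
BACKUP_CMD = ["restic", "backup", "/srv/data"]   # placeholder


def drop_privileges(username: str) -> None:
    """Switch to the target user's uid/gid and supplementary groups."""
    pw = pwd.getpwnam(username)
    os.initgroups(username, pw.pw_gid)   # pick up group permissions too
    os.setgid(pw.pw_gid)
    os.setuid(pw.pw_uid)                 # irreversible: we are no longer root
    os.environ["HOME"] = pw.pw_dir


def main() -> None:
    if os.geteuid() == 0:
        drop_privileges(BACKUP_USER)
    os.execvp(BACKUP_CMD[0], BACKUP_CMD)   # replace this process with restic


if __name__ == "__main__":
    main()
```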

As a related note, this also applies to the containers and software you're running: you shouldn't run Docker containers as root unless they specifically MUST have a permission that only root has, and I personally don't run internet-facing ones as the same user as all the others. If something gets popped, the attacker not only lacks root permissions but is also siloed away from everyone else's data in the event of a container escape.

My expectation is that, at some point, I'll miss a CVE and get pwnt. The goal is to reduce how much damage someone can do when that happens, rather than assume I can keep it from happening at all, so everything is focused on 'once this is compromised, how can I make the compromise useless to the attacker?'