Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub post here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
Yes, you’re correct here.
You begin by forwarding ports 80 and 443 on your router to your Nginx proxy server. These are the standard ports for HTTP and HTTPS requests, respectively, so Nginx can immediately tell whether a request is HTTP or HTTPS based on which port it arrives on.
Next, you would set an A record with your DNS provider. An A record points a subdomain to a specific IPv4 address. So for instance, maybe the name is “abs” and the IP is your home WAN IP. Whenever an HTTP or HTTPS request goes to “abs.{your domain}”, the name resolves to your WAN IP and the client connects there. If you wanted to use IPv6, that would be an AAAA record instead… But if this is your first foray into self-hosting, you probably don’t want to use IPv6.
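For illustration, here's what those records can look like in zone-file notation. Everything here is a placeholder: “example.com” stands in for your domain, and the IPs are reserved documentation addresses. Most registrars expose the same fields (name, type, value, TTL) through a web form instead:

```
abs.example.com.  300  IN  A     203.0.113.42   ; IPv4 record -> your WAN IP
abs.example.com.  300  IN  AAAA  2001:db8::42   ; IPv6 equivalent (optional)
```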
On Nginx’s side, it receives all of those incoming HTTP and HTTPS requests because the ports are forwarded to it. You configure it to take requests for those subdomains and route them to your various devices accordingly. You’ll also need to do some config for SSL certificates, which allow HTTPS requests to resolve successfully. You can either use a single wildcard certificate covering every subdomain, or an individual certificate for each subdomain. Neither is “more” correct for your needs (though I’m sure people will argue about that in responses to this).
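To make the routing concrete, here's a minimal sketch of one such server block. All of the names are assumptions for the example: abs.example.com as the subdomain, 192.168.1.50:13378 as the LAN address of the machine running ABS, and certificate paths in the Let's Encrypt layout.

```nginx
# Sketch only -- hostname, upstream address, and cert paths are placeholders.
server {
    listen 443 ssl;
    server_name abs.example.com;

    ssl_certificate     /etc/letsencrypt/live/abs.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/abs.example.com/privkey.pem;

    location / {
        proxy_pass http://192.168.1.50:13378;            # LAN device running ABS
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name abs.example.com;
    return 301 https://$host$request_uri;   # bounce plain HTTP to HTTPS
}
```

You'd add one such pair of blocks per subdomain you want to route.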
So for instance, you send a request to https://abs.{your domain}. DNS resolves the name to your WAN IP, your router forwards port 443 to Nginx, and Nginx resolves the SSL certificate and forwards the request to the device running ABS. So your ABS instance isn’t directly accessible from the net, and needs to bounce off of Nginx with a valid HTTPS request in order to be reached.

You’ll want to run something like Fail2Ban or CrowdSec to try and prevent intrusion. Fail2Ban watches your various services’ log files and IP-bans repeated login failures. This helps fend off bots that find common services (like ABS) and try to brute-force them by spamming common passwords. You can configure it to issue bans with increasing periods: maybe the first ban is only 5 minutes, then 10, then 20, etc…
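As a sketch of what that escalating-ban setup looks like, here's a hypothetical Fail2Ban jail.local fragment. The jail name, filter, and log path are all placeholders you'd adapt to the service you're protecting, and the jail expects a matching regex filter to exist in filter.d:

```ini
# Hypothetical jail -- adjust filter and logpath for your actual service.
[abs]
enabled  = true
filter   = abs                      ; expects filter.d/abs.conf to exist
logpath  = /path/to/abs/logs/*.txt
maxretry = 5                        ; failures allowed within findtime
findtime = 10m
bantime  = 5m                       ; first ban length
bantime.increment = true            ; lengthen bans for repeat offenders
bantime.factor    = 2               ; roughly 5m, 10m, 20m, ...
```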
Lastly, you would probably want to run something like Cloudflare-DDNS to keep that WAN IP record updated. I’m assuming you don’t have a static IP, and you don’t want your connections to break every time your IP address changes. DDNS is a system that routinely checks your WAN IP every few minutes and pushes an update to your DNS provider if it has changed. So if your IP address changes, you’ll only be down for (at most) a few minutes. This will require some extra config on your provider’s side: getting an API key, and pointing the DDNS service at your various A records.
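If you go the container route for DDNS, the service definition is small. This is only a sketch: the image shown (favonia/cloudflare-ddns) is one of several similarly named projects, and the exact environment variable names differ between them, so check the docs of whichever image you pick.

```yaml
# Sketch -- image choice, token, and domain are placeholders.
services:
  ddns:
    image: favonia/cloudflare-ddns:latest
    restart: unless-stopped
    environment:
      CLOUDFLARE_API_TOKEN: "your-scoped-api-token"   # token with DNS edit rights
      DOMAINS: "abs.example.com"                      # records to keep updated
      PROXIED: "false"
```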
If you need any help setting the individual services up, let me know. I personally suggest docker-compose for setting up the entire thing (Nginx, DDNS, and Fail2Ban) as a single stack, but that’s purely because it’s what I know and it makes updates easy. But this comment is already long enough, and each individual module could be just as long.
Thank you so much this is very helpful, I'll definitely be taking a run at it with all of this advice in mind this week. When you mention running the whole thing as a single stack does that mean getting all of it running inside a single docker container such that it only takes the 1 docker run command? Is it a requirement to get them able to talk or just a more elegant way to have the entirety of the server running in a singular container instead of spread across several?
No, it’s still multiple containers: a stack is a group of containers that were all started together via a docker-compose.yml file. You can name the stack and have all of its containers grouped beneath it. Compose is simply a straightforward way to ensure your containers all boot with the same parameters each time.
Instead of needing to remember all of the various arguments for each container, you simply note them in the compose file and run that. Docker Compose reads the file and runs the containers with the various arguments.
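To show what “one stack” means in practice, here's an illustrative docker-compose.yml grouping the pieces from earlier. Every image tag, path, and variable here is a placeholder to adapt, not a recommendation:

```yaml
# Illustrative stack -- all images, volumes, and variables are placeholders.
services:
  nginx:
    image: nginx:stable
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro   # your server blocks
      - ./certs:/etc/letsencrypt:ro           # SSL certificates

  ddns:
    image: favonia/cloudflare-ddns:latest
    restart: unless-stopped
    environment:
      CLOUDFLARE_API_TOKEN: "your-scoped-api-token"

  fail2ban:
    image: crazymax/fail2ban:latest
    restart: unless-stopped
    network_mode: host              # needs host networking to apply bans
    cap_add: [NET_ADMIN, NET_RAW]
    volumes:
      - ./fail2ban:/data
      - ./logs:/var/log/services:ro   # logs for Fail2Ban to watch
```

A single docker-compose up -d in that directory then starts all three together as one stack.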
Moving from docker to docker-compose is the single largest ease-of-use change I made in my setup. If you want some help in how to use it, I can post a quick example and some instructions on setting it up. You would use

cd [directory with your docker-compose.yml]

to select the proper directory, then

docker-compose up -d

to run the compose file. Updating would be

docker-compose down

to stop the stack,

docker-compose pull

to pull updated images,

docker-compose up -d

to start your stack again, then

docker image prune -f

to delete the old (now outdated and unused) images.