Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report them using the report flag.
Questions? DM the mods!
Thank you. I did think of this, but I'm afraid it might lead to a chicken-and-egg situation, since I plan to store my Kubernetes manifests in my git repo: if the Kubernetes cluster goes down for whatever reason, I won't be able to access my git server anymore.
I edited the post, which will hopefully clarify what I'm thinking about.
I would run a standalone Forgejo server to act as your infrastructure server, kept separate from your production k8s/k3s environment.
If something knocks out your infrastructure Forgejo instance, your prod instance will continue to work. If something knocks out your prod, your infrastructure instance is still there to pull from.
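For what it's worth, the two instances can be kept in sync with plain git, without relying on any Forgejo-specific mirroring feature. A minimal sketch, where the function name, URLs, and paths are all placeholders:

```shell
# mirror_repo SRC DST CACHE: keep DST (prod) as an exact mirror of
# SRC (infrastructure), using CACHE as a local bare mirror clone.
# All three arguments are placeholders for your own URLs/paths.
mirror_repo() {
    src=$1; dst=$2; cache=$3

    # One-time: take a bare mirror of the infrastructure repo.
    [ -d "$cache" ] || git clone -q --mirror "$src" "$cache"

    # Periodic (cron/systemd timer): refresh the local mirror, then
    # push every branch and tag to the prod instance.
    git -C "$cache" remote update --prune >/dev/null 2>&1
    git -C "$cache" push -q --mirror "$dst"
}

# e.g. mirror_repo https://git-infra.example.com/me/manifests.git \
#                  https://git-prod.example.com/me/manifests.git \
#                  /srv/mirror/manifests.git
```

`push --mirror` also deletes refs on the destination that no longer exist on the source, so prod stays an exact copy rather than accumulating stale branches.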
One of the reasons I suggest k8s/k3s is that if something happens, k8s/k3s will try to automatically bring the broken node back online.
You mean have two git servers, one "PROD" and one for infrastructure, and mirror repos in both? I suppose I could do that, but if I were to go that route I could simply create 5 remotes for every repo and push to each individually.
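(For what it's worth, that doesn't even need five separate remotes: one remote can carry several push URLs, so a single `git push` fans out to every server. A sketch in a throwaway repo, with placeholder URLs:)

```shell
# Throwaway repo just to demonstrate; in practice you'd run the
# set-url commands in your existing checkout. Both URLs below are
# placeholders for your own servers.
demo=$(mktemp -d)
git -C "$demo" init -q

git -C "$demo" remote add origin https://git-infra.example.com/me/manifests.git
# Re-add the original URL as an explicit push URL, then add more.
git -C "$demo" remote set-url --add --push origin https://git-infra.example.com/me/manifests.git
git -C "$demo" remote set-url --add --push origin https://git-prod.example.com/me/manifests.git

# A plain "git push origin main" would now push to both servers;
# fetches still come from the first URL only.
git -C "$demo" remote get-url --push --all origin
```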
For the k8s suggestion - what happens when my k8s cluster goes down, taking my git server along with it?
Not really.
K8s would consume a checked-out working copy of the manifests, not the repo database itself.
Sorry, I don't understand. What happens when my k8s cluster goes down taking my git server with it?
You do not let your k8s control plane look "live" at your git server during the start (or reformation) of the whole cluster. It needs the repo (and its files) checked out somewhere locally, and this local "somewhere" must exist at start time.
Later, once your git server is alive, you do a regular git pull to keep it up to date.
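That boot-safe pull loop might look something like this; a sketch with placeholder arguments, not tied to any particular k8s tooling:

```shell
# sync_manifests REMOTE CHECKOUT: keep a local checkout fresh without
# ever depending on the git server being up. If the pull fails, the
# last successfully pulled copy stays on disk for the cluster to
# bootstrap from. Both arguments are placeholders.
sync_manifests() {
    remote=$1    # e.g. https://git.example.com/me/manifests.git
    checkout=$2  # e.g. /srv/manifests

    if [ ! -d "$checkout/.git" ]; then
        # First run: clone while the git server is known to be up.
        git clone -q "$remote" "$checkout"
    else
        # Routine run (e.g. from cron): best-effort fast-forward pull.
        git -C "$checkout" pull -q --ff-only 2>/dev/null \
            || echo "git server unreachable; using cached copy in $checkout"
    fi
}
```

The key property is that a failed pull is non-fatal: the cluster always has the last good checkout to start from.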
Oh, I get it. Auto-pull the repos to the master nodes' local storage in case something bad happens, and when it does, use the automatically pulled (and hopefully current) code to fix what broke.
Good idea