this post was submitted on 17 Apr 2025
22 points (92.3% liked)

Selfhosted


Edit: it seems like my explanation turned out to be too confusing. In simple terms, my topology would look something like this:

I would have a reverse proxy hosted in front of multiple instances of git servers (let's take 5 for now). When a client performs an action, like pulling a repo/pushing to a repo, it would go through the reverse proxy and to one of the 5 instances. The changes would then be synced from that instance to the rest, achieving a highly available architecture.
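The proxy layer of that topology is the easy part. A minimal sketch with HAProxy in TCP mode for SSH-based git access (the backend addresses are placeholders, and note the proxy only balances connections — replicating pushes between the instances is the separate, hard problem):

```
# haproxy.cfg sketch — addresses are placeholders for the git instances
frontend git_ssh
    bind *:2222
    mode tcp
    default_backend git_servers

backend git_servers
    mode tcp
    balance leastconn
    server git1 10.0.0.11:22 check
    server git2 10.0.0.12:22 check
    server git3 10.0.0.13:22 check
```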

Basically, I want a highly available git server. Is this possible?


I have been reading GitHub's blog post on Spokes, their distributed system for Git. It's a great idea, except I can't find anywhere to pull it from and self-host it.

Any ideas on how I can run a distributed cluster of Git servers? I'd like to run it across 3+ VMs plus a VPS in the cloud, so that if something dies I still have a git server running somewhere to pull from.

Thanks

[–] marauding_gibberish142@lemmy.dbzer0.com 4 points 4 days ago (2 children)

Thank you. I did think of this but I'm afraid this might lead me into a chicken and egg situation, since I plan to store my Kubernetes manifests in my git repo. But if the Kubernetes instances go down for whatever reason, I won't be able to access my git server anymore.

I edited the post, which will hopefully clarify what I'm thinking about.

[–] slazer2au@lemmy.world 5 points 4 days ago (1 children)

I would have a standalone Forgejo server to act as your infrastructure server. Make it separate from your production k8s/k3s environment.
If something knocks out your infrastructure Forgejo instance, then your prod instance will continue to work. If something knocks out your prod, then your infrastructure instance is still there to pull from.

One of the reasons I suggest k8s/k3s is that if something happens, k8s/k3s will try to automatically bring the broken node back online.

You mean have two git servers, one "PROD" and one for infrastructure, and mirror repos in both? I suppose I could do that, but if I were to go that route I could simply create 5 remotes for every repo and push to each individually.
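For what it's worth, the "push to each individually" part doesn't even need 5 separate remotes: git lets a single remote carry multiple push URLs. A runnable demo using two local bare repos as stand-ins for separate servers (swap in real SSH URLs like `git@git1.example.com:me/repo.git`):

```shell
# Demo: one remote that fans out to several mirrors on push.
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare mirror1.git
git init -q --bare mirror2.git
git init -q work && cd work
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"
git remote add all ../mirror1.git
# Setting any push URL stops git from using the fetch URL for pushes,
# so the first push URL repeats the original before adding extras.
git remote set-url --add --push all ../mirror1.git
git remote set-url --add --push all ../mirror2.git
git push -q all HEAD    # one push lands on every push URL
```

After this, `git push all` updates every mirror in one command, though pushes are sequential and a down mirror still fails that leg of the push.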

For the k8s suggestion - what happens when my k8s cluster goes down, taking my git server along with it?

[–] Zwuzelmaus@feddit.org 3 points 4 days ago (1 children)

chicken and egg situation, since I plan to store my Kubernetes manifests in my git repo

Not really.

K8s would use a "checked-out" visible representation, not the repo database itself.

[–] marauding_gibberish142@lemmy.dbzer0.com 2 points 4 days ago (1 children)

Sorry, I don't understand. What happens when my k8s cluster goes down, taking my git server with it?

[–] Zwuzelmaus@feddit.org 3 points 4 days ago (1 children)

You do not let your k8s control instance look "live" at your git server during the start (or reformation) of the whole cluster. It needs the (repo and) files checked out somewhere locally, and this local "somewhere" must exist at start time.

Later, when your git server is alive again, you do a regular git pull to keep it up to date.

Oh, I get it. Auto-pull the repos to the master nodes' local storage so that if something bad happens, I can use the automatically pulled (and hopefully current) code to fix what broke.

Good idea
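For the record, that periodic pull can be a single cron entry (path, user, and schedule below are just examples):

```
# /etc/cron.d/manifest-mirror — example only; adjust path, user, schedule.
# --ff-only refuses to create surprise merge commits in an unattended checkout.
*/5 * * * * root git -C /srv/manifests pull --ff-only --quiet
```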