

Oh I get it. Auto-pull the repos to the master nodes’ local storage in case something bad happens, and when it does, use the automatically pulled (and hopefully current) code to fix what broke.
Good idea
Downvoted. You didn’t read the rules of the sub. Yes, morally speaking you should be allowed to do it. But if you read their notice you’ll realise exactly why you aren’t allowed to. Anyway, welcome to Lemmy
Well, it’s a tougher question to answer for an active-active config than for a master-slave config, because the former needs the minimum possible latency as requests are bounced all over the place. For the latter, I’ll probably set it up to pull every 5 minutes, so up to 5 minutes of latency (assuming someone doesn’t try to push right as the master node is going down).
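For the pull-every-5-minutes idea, a minimal sketch of what each slave could run (the mirror path and the schedule are my assumptions, adjust to taste):

```shell
#!/bin/sh
# Sketch: fetch updates into every bare mirror under a directory.
# /srv/mirrors and the 5-minute schedule are assumptions, not a tested setup.
pull_mirrors() {
    root="${1:-/srv/mirrors}"
    for repo in "$root"/*.git; do
        [ -d "$repo" ] || continue
        # --prune drops refs deleted upstream so the mirror stays current
        git -C "$repo" remote update --prune >/dev/null
    done
}

# crontab entry to run this every 5 minutes:
# */5 * * * * /usr/local/bin/pull-mirrors.sh
```

Mirrors cloned with `git clone --mirror` keep all their refs in sync on each fetch, so the slaves stay byte-for-byte usable as a fallback.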
I don’t think the likes of Github work on a master-slave configuration. They’re probably on the active-active side of things for performance. I’m surprised I couldn’t find anything on this from Codeberg though, you’d think they have already solved this problem and might have published something. Maybe I missed it.
I didn’t find anything in the official git book either, which one do you recommend?
Thanks for the comment. There’s no special use-case: it’ll just be me and a couple of friends using it anyway. But I would like to make it highly available. It doesn’t need to be 5 - 2 or 3 would be fine too but I don’t think the number would change the concept.
Ideally I’d want all servers to be updated in real-time, but it’s not necessary. I simply want to run it like so because I want to experience what the big cloud providers run for their distributed git services.
Thanks for the idea about update hooks, I’ll read more about it.
Well the other choice was Reddit so I decided to post here (Reddit flags my IP and doesn’t let me create an account easily). I might ask on a couple of other forums too.
Thanks
This is a fantastic comment. Thank you so much for taking the time.
I wasn’t planning to run a GUI for my git servers unless really required, so I’ll probably use SSH. Thanks, yes that makes the part of the reverse proxy a lot easier.
I think your idea of having a designated “master” (server 1) with rolling updates to the rest of the servers is brilliant. Replication becomes a lot easier this way, and it removes the need for the reverse proxy too! - I can just use Keepalived, set up weights to make one of them the master and the others slaves for failover. It also won’t do round-robin, so no special handling for sticky sessions! This is great news from the networking perspective of this project.
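For my own notes, a rough keepalived.conf sketch of what I mean (interface, priorities and the virtual IP are placeholder assumptions):

```
# keepalived.conf on server 1 - it holds the virtual IP, and a standby
# takes over only when it fails (no round-robin, no sticky sessions needed)
vrrp_instance GIT_VIP {
    state MASTER           # BACKUP on the other servers
    interface eth0
    virtual_router_id 51
    priority 150           # lower values (100, 90, ...) on the standbys
    advert_int 1
    virtual_ipaddress {
        192.168.1.50/24
    }
}
```

Clients always talk to 192.168.1.50, and VRRP moves that address to the next-highest-priority server if the master dies.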
Hmm, you said to enable pushing repos to the remote git repo instead of having it pull? I was going to create a WireGuard tunnel and have it accessible from my network for some stuff, but I guess that makes sense.
Thanks again for the wonderful comment.
I thought Hetzner was the size of OVH. I guess not
Sorry, I don’t understand. What happens when my k8s cluster goes down taking my git server with it?
I think I messed up my explanation again.
The load-balancer in front of my git servers doesn’t really matter. I can use whatever, really. What matters is: how do I make sure that when the client writes to a repo on one of the 5 servers, the changes are synced in real-time to the other 4 as well? Running rsync every 0.5 seconds doesn’t seem like a viable solution
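To be clear about the kind of thing I’m hoping exists: something like a post-receive hook on whichever server takes the write, so replication happens per-push instead of on a timer. A rough sketch (the peer URLs are placeholders, not a tested setup):

```shell
#!/bin/sh
# post-receive hook sketch for the server that accepted the push: replicate
# the new refs to the other mirrors. Peer URLs below are placeholders --
# in practice they'd be ssh://git2/srv/git/myrepo.git and so on.
replicate() {
    for peer in "$@"; do
        # --mirror forces the peer's refs to match this repo exactly
        git push --mirror "$peer" || echo "replication to $peer failed" >&2
    done
}

# In a real hook this would be the last line of hooks/post-receive:
# replicate ssh://git2/srv/git/myrepo.git ssh://git3/srv/git/myrepo.git
```

That still leaves the hard part (what happens when a peer is down and comes back), which is exactly why I was asking how the big providers do it.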
You mean have two git servers, one “PROD” and one for infrastructure, and mirror repos in both? I suppose I could do that, but if I were to go that route I could simply create 5 remotes for every repo and push to each individually.
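The 5-remotes trick can even live under a single remote name, so one push command hits every server. A sketch against a throwaway repo (git1/git2/git3 are placeholders for the real instances):

```shell
#!/bin/sh
# Sketch: one remote ("all") with several push URLs, so a single
# `git push all` fans out to every server. URLs are placeholders.
set -e
demo=$(mktemp -d)
git init -q "$demo/repo"
cd "$demo/repo"
git remote add all ssh://git1/srv/git/myrepo.git
# once any push URL is added, the fetch URL is no longer used for pushing,
# so list every server explicitly, including the first one
git remote set-url --add --push all ssh://git1/srv/git/myrepo.git
git remote set-url --add --push all ssh://git2/srv/git/myrepo.git
git remote set-url --add --push all ssh://git3/srv/git/myrepo.git
git remote get-url --push --all all
```

It’s client-side fan-out though, so a friend who only pushes to one server still wouldn’t be replicated - which is why I’d rather solve it server-side.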
For the k8s suggestion - what happens when my k8s cluster goes down, taking my git server along with it?
GitHub didn’t publish the source code for their project, previously known as DGit (Distributed Git) and now known as Spokes. The only mention of it is in a blog post on their website, but I don’t have the link handy right now
Thank you. I did think of this but I’m afraid this might lead me into a chicken and egg situation, since I plan to store my Kubernetes manifests in my git repo. But if the Kubernetes instances go down for whatever reason, I won’t be able to access my git server anymore.
I edited the post which will hopefully clarify what I’m thinking about
Apologies for not explaining better. I want to run a loadbalancer in front of multiple instances of a git server. When my client performs an action like a pull or a push, it will go to one of the 5 instances, and the changes will then be synced to the rest.
I have edited the post to hopefully make my thoughts a bit more clear
Apologies for not explaining it properly. Essentially, I want to have multiple git servers (let’s take 5 for now), have them automatically sync with each other and run a loadbalancer in front. So when a client performs an action with a repository, it goes to one of the 5 instances and the changes are written to the rest.
I have edited the post, hopefully the explanation makes more sense now
Hetzner?
You can never be private with any device that can connect to the internet of its own volition. Ubiquiti, Alta Labs and Mikrotik should never be trusted unless you’re OK with your data potentially ending up on their servers.
With that said, you can manually upgrade Mikrotik software and self-host the Mikrotik CHR, the Ubiquiti controller and the Alta Labs controller (for a fee, for the latter), which should in theory invalidate this argument. Even then, I do not trust non-FOSS software for such critical infrastructure, so it’s still too much for me, but depending on your risk tolerance this might be a good compromise. I would suggest you look seriously at Mikrotik - their UI might suck, but their hardware and software capabilities are FAR beyond what Ubiquiti offers for the same price.
If you want to be private you should get an old computer, buy a quad-port NIC card from eBay and run a Linux/BSD router on your own hardware. But that’s not the most friendly way to do it, so I don’t blame anyone for looking away
I’d donate to them if someone is willing
How about if I only use the web version?
Upvoted. Awesome project