Etesync maybe?
Providing bare links like this on a forum looks like a trap. It's a shame you got so many downvotes just for the lack of explanation (which you did give later in the comments).
A few more questions remain... Why did you build this? That is, how is it different from or better than the alternatives?
There are so many! IMHO that's a problem; as a user I don't know how to decide!
No need for a VPN, but it wouldn't hurt if you want more privacy.
I'm using github.com/mag37/dockcheck for this, with its "-d N" argument. There's a tradeoff between stability and security; you need to decide for yourself. It also depends on which services you're hosting: unattended updates of Nextcloud or Immich, for example, could be disastrous.
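For what it's worth, here's a minimal wrapper sketch of how you could schedule that from cron, assuming the script is installed on your PATH as `dockcheck.sh`; the 7-day delay is just an example value, and the only flag used is the `-d N` one mentioned above:

```python
#!/usr/bin/env python3
"""Tiny wrapper around dockcheck: only consider images at least N days old."""
import subprocess
import sys

MIN_IMAGE_AGE_DAYS = 7  # higher = more stable, lower = faster security fixes

# -d N is dockcheck's "only update to images that are N+ days old" argument
cmd = ["dockcheck.sh", "-d", str(MIN_IMAGE_AGE_DAYS)]
result = subprocess.run(cmd)
sys.exit(result.returncode)
```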
We should definitely have a wiki (though people should use "search" too, so I wonder whether a wiki would really help). This topic comes up every month. I've posted this before; here it is again: https://github.com/anderspitman/awesome-tunneling
Perhaps a chronological view is a bonus if the idea lives on for long enough. Links between stories, or tags, could be useful at some point too... https://www.usememos.com/
I downloaded huge sets from the Internet Archive.
Did you check the MD5? If the hashes differ, it could be anything (it doesn't necessarily imply tampering on massgrave's side either).
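If you want to verify it yourself, here's a quick sketch; the file name and expected hash are obviously placeholders you'd swap for your ISO and the published checksum:

```python
#!/usr/bin/env python3
"""Compute a file's MD5 and compare it against an expected value."""
import hashlib

ISO_PATH = "example.iso"                            # placeholder: the ISO you downloaded
EXPECTED_MD5 = "d41d8cd98f00b204e9800998ecf8427e"   # placeholder: published checksum

md5 = hashlib.md5()
with open(ISO_PATH, "rb") as f:
    # read in chunks so multi-GB ISOs don't need to fit in RAM
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        md5.update(chunk)

actual = md5.hexdigest()
print("match" if actual == EXPECTED_MD5 else f"MISMATCH: got {actual}")
```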
As for the logic: bad actors will try anything, including hijacking a server and replacing someone else's ISOs, or infiltrating the group to do the same.
Ah, docker-mailserver and delta.chat could also be great for your case!!
E2E is complicated. If you self-host for a group, TLS in transit plus encrypting data at rest (storage) may be enough; work out a threat model first. That said, I'd recommend snikket.org, which bundles a curated set of extensions on top of XMPP, the open-source IM protocol that was the base of almost every app out there. Matrix and Rocket.Chat are both alright too. It also depends on your resources; Synapse requires a lot of RAM (or so I heard).
Syncthing, because it's p2p/local-first, meaning it's robust to interruptions.
Have you tried Ollama? Some (if not all) models would do inference just fine on your current specs. Of course, it all depends on how many queries per unit of time you need, and whether you want to load a huge codebase and pass it as input. Anyway, give it a try.
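To give you an idea of what a quick test looks like, here's a minimal sketch against a locally running Ollama server on its default port; the model name is just an example and has to be pulled first (e.g. `ollama pull llama3`):

```python
#!/usr/bin/env python3
"""Send one prompt to a local Ollama instance and print the reply."""
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port
MODEL = "llama3"  # example model; pull it first with `ollama pull llama3`

payload = json.dumps({
    "model": MODEL,
    "prompt": "Summarize what a reverse proxy does in two sentences.",
    "stream": False,  # one complete JSON response instead of a stream
}).encode()

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

If that responds in a reasonable time on your hardware, you're good; if not, try a smaller model before writing off the machine.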