this post was submitted on 28 Jan 2024
44 points (95.8% liked)

Selfhosted


I've gotten to the point where I have more than a few servers in my homelab and am looking for a way to increase reliability when one goes down for an update. Two problems: two of the servers will be on WiFi, and one is a Synology NAS. I can't do any wiring, but I can put together a WiFi 6E network for the servers only; that means buying four WiFi 6E devices in a mix of types. As for the Synology, its Container Manager is a little odd, so I expect to run a Linux VM and use that as my cluster node. That may mean buying more RAM, as I haven't upgraded it.

Hardware ranges from a 6-core CPU on the NAS (with a few important Docker containers), an 8-core on my main SFF server (which also runs my OPNsense VM inside Proxmox), a 16-core Ryzen on my old big server, and a 10-year-old NUC for fun.

So the question is what I should use to orchestrate all the services I have. My Vaultwarden runs reliably, but only on one system. I want better reliability for Pi-hole, with settings synced automatically; the NAS's Docker implementation doesn't support Gravity Sync. Since everything I run besides storage is in Docker, a Proxmox cluster doesn't seem like the best option. That puts me between K8s and Docker Swarm. I'd like something that is simple to administer but resilient when hardware fails.

all 21 comments
[–] testfactor@lemmy.world 16 points 9 months ago

I'd rule out k8s if you're looking for simple administration.

[–] jgkawell@lemmy.world 8 points 9 months ago

The solutions you've mentioned aren't exactly equivalent. Proxmox is a hypervisor while Docker Swarm and Kubernetes are container orchestration engines. For example, I use Proxmox in a highly available cluster running on three physical nodes. Then I have various VMs and LXC containers running on those nodes. Some of those VMs are Kubernetes nodes running many Docker containers.

I highly recommend Proxmox as it makes it trivial to spin up new containers and VMs when you want to test something out. You can create and destroy VMs in an instant without messing with any of your actual hardware. That's the power of a good hypervisor.

For orchestration, I would actually recommend you just stick with Docker Compose if you want something very simple to manage. Resiliency or high availability usually brings a lot of overhead with it (both in system resources and in maintenance) which may not be worth it to you. If you want something simple, Proxmox can run VMs in a highly available mode, so you could have three Proxmox nodes and mark any VMs you deem essential as highly available within the cluster.
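
To make the "just Docker Compose" suggestion concrete, here is a minimal sketch for something like Vaultwarden; the image name is real, but the host port and data path are just examples:

```yaml
# docker-compose.yml -- host port and data path are placeholders
services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    ports:
      - "8080:80"        # web vault on host port 8080
    volumes:
      - ./vw-data:/data  # persistent data lives here
```

On the Proxmox side, marking an existing VM as highly available is a one-liner once the cluster exists, along the lines of `ha-manager add vm:100 --state started` (100 being whatever VMID the VM has).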

For my setup, I have certain services duplicated between multiple Proxmox nodes and then use failover mechanisms like floating IP addresses to automatically switch things over when a node goes down. I also run most things in Kubernetes, which is deployed in a highly available manner across multiple Proxmox nodes, so I can lose a physical node and still keep (most of) my services running. This is overkill for most things, though, and I really only do it because I use my homelab to learn and practice different techniques.
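
The comment doesn't say which tool handles the floating IP; keepalived is one common way to do it. A minimal sketch for the primary node, where the interface name, router ID, password and virtual IP are all placeholders:

```
# /etc/keepalived/keepalived.conf on the primary node
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.50/24   # the address clients actually talk to
    }
}
```

The standby node runs the same block with `state BACKUP` and a lower priority, and the virtual IP moves over when the primary stops advertising.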

[–] domi@lemmy.secnd.me 7 points 9 months ago* (last edited 9 months ago) (1 children)

Definitely go with K3s instead of K8s if you want to go the Kubernetes route. K8s is a massive pain in the ass to set up. Unless you want to learn it for work, I would avoid it for homelab usage.

I currently run Docker Swarm nodes on top of LXCs in Proxmox. Pretty happy with the setup, except that I can't get IPv6 to work in Docker overlay networks and the overlay network performance leaves something to be desired.

I previously used Rancher to run Kubernetes but I didn't like the complexity it adds for pretty much no benefit. I'm currently looking into switching to K3s to finally get my IPv6 stack working. I'm so used to docker-compose files that it's hard to get used to the way Kubernetes does things though.
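
For what it's worth, a basic K3s install really is close to a one-liner (this is the project's documented quick-start; the server IP and token below are placeholders):

```sh
# single-node K3s server
curl -sfL https://get.k3s.io | sh -

# join another machine as an agent; the token is written to
# /var/lib/rancher/k3s/server/node-token on the server
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<token> sh -
```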

[–] c10l@lemmy.world 0 points 9 months ago* (last edited 9 months ago)

K3s is k8s

lol at the downvote. K3s is k8s. The very first two words on its website are "Lightweight Kubernetes". https://k3s-io.github.io/

[–] 1984@lemmy.today 6 points 9 months ago

You really should look into Nomad: https://developer.hashicorp.com/nomad/docs/nomad-vs-kubernetes

I set up a Nomad cluster in my homelab just a few days ago, on top of instances in Proxmox. It works really well and is simple to maintain and understand.
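
To give a sense of scale, a whole Nomad service is one HCL file. A rough sketch for something like Pi-hole, where the datacenter name, image and ports are placeholders:

```hcl
# pihole.nomad.hcl -- minimal example, not a production config
job "pihole" {
  datacenters = ["dc1"]
  type        = "service"

  group "pihole" {
    count = 1

    network {
      port "dns"  { static = 53 }  # DNS must keep its well-known port
      port "http" { to = 80 }      # web UI, host port picked by Nomad
    }

    task "pihole" {
      driver = "docker"

      config {
        image = "pihole/pihole:latest"
        ports = ["dns", "http"]
      }
    }
  }
}
```

Running it is `nomad job run pihole.nomad.hcl`.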

[–] possiblylinux127@lemmy.zip 4 points 9 months ago (3 children)

First off, replace WiFi with Ethernet. Seriously, it will be way more reliable. There are plenty of janky adapters that will work fine.

Once you have that done, you can set up a Proxmox cluster. Proxmox won't be a good experience with WiFi.
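
Clustering itself is only a couple of commands once the nodes can reach each other over a stable link (the cluster name and IP below are placeholders):

```sh
# on the first node
pvecm create homelab

# on each additional node, pointing at the first node's IP
pvecm add 192.168.1.10

# verify quorum and membership
pvecm status
```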

[–] rambos@lemm.ee 4 points 9 months ago

OP can try powerline adapters if nothing else works

[–] johnnixon@lemmy.world 1 points 9 months ago

Late reply, but yeah, WiFi was a nightmare on Proxmox. It was a tiny e-waste SFF PC, so I was able to wedge it near the other servers. The cluster is happy.

[–] knusprig@feddit.de 1 points 9 months ago* (last edited 9 months ago)

Not sure if this is what you mean, but I have my Proxmox server connected via a WiFi bridge and there have been zero issues in the year I've been using it. I set it up to see if it would work, and since I haven't had to change anything, I left it as is. It's mainly Home Assistant on the server, no media streaming or other realtime stuff, so that may be why.

[–] synae@lemmy.sdf.org 3 points 9 months ago

I run k8s, mostly because I use it for work and really enjoy the gitops approach to management. Previously I used docker compose.

[–] iluminae@lemmy.world 3 points 9 months ago (2 children)

I use k8s at work a lot, but I chose to use Nomad at home; you may want to add that to your shortlist.

[–] diminou@lemmy.zip 1 points 9 months ago

Thank you for Nomad, will give this one a try at home!

[–] jelloeater85@lemmy.world 1 points 9 months ago (2 children)

How are you liking it? It seemed a little hard to learn to me, and I do TF and Ansible.

[–] 1984@lemmy.today 2 points 9 months ago

I thought it was quite simple, but it depends on your experience, of course. It's a single binary and a single config file, so I felt it was so much simpler.

You can buy a good Udemy course for 10 dollars too, which really helps in the beginning.

[–] iluminae@lemmy.world 1 points 9 months ago

Yeah, it's very easy to learn enough to run it. It has built-in service discovery and secrets now, and writing parameterized jobs feels so much nicer than a Helm chart in k8s.

10/10, would orchestrate again
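
For anyone curious what "parameterized jobs" look like, here is a rough sketch; the job name, image and the `domain` meta key are made up for illustration:

```hcl
job "cert-renew" {
  datacenters = ["dc1"]
  type        = "batch"

  # registered once, then dispatched with per-run parameters
  parameterized {
    payload       = "forbidden"
    meta_required = ["domain"]
  }

  group "renew" {
    task "renew" {
      driver = "docker"

      config {
        image = "certbot/certbot:latest"
        # the required meta key is exposed for interpolation
        args  = ["certonly", "--standalone", "-d", "${NOMAD_META_domain}"]
      }
    }
  }
}
```

Each run is then kicked off with something like `nomad job dispatch -meta domain=example.com cert-renew`.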

[–] johntash@eviltoast.org 2 points 9 months ago

+1 for Nomad. I've used k8s a lot and still use it, but I prefer Nomad for home purposes. You don't even need a Consul cluster to run it anymore, so it's pretty simple to start.

[–] lempa@group.lt 2 points 9 months ago* (last edited 9 months ago)
[–] Decronym@lemmy.decronym.xyz 2 points 9 months ago* (last edited 9 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

HA: Home Assistant automation software / High Availability
IP: Internet Protocol
LXC: Linux Containers
k8s: Kubernetes container management package

4 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.

[Thread #456 for this sub, first seen 28th Jan 2024, 07:15] [FAQ] [Full list] [Contact] [Source code]

[–] khorak@lemmy.dbzer0.com 1 points 9 months ago

WiFi pretty much excludes k*s, and I assume Swarm and Nomad would also be impacted by blips in wireless connectivity. You could see how things work out with a load balancer / reverse proxy on a wired connection, which checks the downstream services and routes requests to the available instances.

Please look into WiFi-specific issues with the various orchestration platforms before deciding to try one. A hypervisor is usually a win-win, until you try to do failover.
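
Sketching that suggestion with HAProxy (any reverse proxy with health checks would do; the backend name, addresses, port and health-check path are placeholders):

```
# haproxy.cfg fragment on the wired box
frontend web
    bind *:80
    default_backend pihole_ui

backend pihole_ui
    option httpchk GET /admin/
    # requests only go to servers whose health check passes
    server nas 192.168.1.20:8081 check
    server sff 192.168.1.21:8081 check backup
```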

[–] brygphilomena@lemmy.world 1 points 9 months ago

Consider power line adapters instead of wifi.