this post was submitted on 27 May 2024
homelab

Hi, so I have a very individual homelab. It's a collection of stuff accumulated over nearly 30 years of doing weird stuff.

For the past 9 years it's been running as a bunch of LXC containers (privileged, because unprivileged containers didn't exist back then), but several of those containers are p2v conversions of physical hosts dating back to Debian woody and earlier. They're all upgraded to at least buster, and most are on bookworm. Stuff like Asterisk, email, Home Assistant, Nextcloud, and Matrix Synapse runs there these days.

The server is a 15-year-old HP gen6 thing, and it's getting quite long in the tooth. There's also a dedicated cheap microserver with an i4 running OPNsense on bare metal as a firewall.

Trying to run things like local voice processing for Home Assistant is showing the HP's age quite badly. Also, our area is getting fibre, and the OPNsense box is maxed out at gigabit. More speed would be nice.

So, I'm in two minds. The homelab has been a lot of fun over the years, but I'm over 50 now and I want lower maintenance. This latest wave of upgrades is making me rethink the next 20 years of homelab. I don't want to leave behind something stupidly "only me" if I were to die tomorrow (diabetes is a fickle bastard). My wife might want to try and carry this thing on - it runs some useful stuff around the house (though it should be noted that nothing in this house strictly requires a server or cloud) - and that's not going to happen with the current setup.

I think I might have a path, using Proxmox, from where I am now to something that can be deployed on e.g. a bunch of ms01 class devices. I'm thinking of converting the existing HP server to Proxmox, which would let me redeploy all my existing LXC containers into the Proxmox world. As I acquire hardware over the next year, I can look at a k8s migration of the services onto a small, MUCH lower-power cluster. One of the keys is that I don't want big outages of services for days or weeks while I migrate everything, so it's gotta be a rolling upgrade, as it were.

I'm here soliciting feedback. Has anyone ever migrated from a deeply legacy homebrew homelab into something like this? Does it reduce the workload long term? What's the practicality of this for someone rather less tech savvy?

Thanks!

[–] orb360@lemmy.ca 5 points 6 months ago (10 children)

I migrated from a mix of Proxmox, Hyper-V, bare metal, and Synology-hosted Docker onto a full k8s cluster.

It is much easier to manage now, including adding or replacing nodes. That includes a full rebuild of the cluster from 7 Raspberry Pis onto 7 EliteDesk mini PCs (from ARM to x86, and from Debian to Talos).

But it wasn't a small process either.

You'll have to deploy your k8s cluster, learn how to host the services you want (load balancer, DNS setup, cluster IPs, etc.), and set up a storage provider (I use NFS to my Synology share - not the fastest or most secure, but the easiest).
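For anyone unfamiliar with the storage side, an NFS-backed PersistentVolume is just two small manifests. A minimal sketch - the server IP, export path, volume name, and size here are all made up for illustration:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany          # NFS allows multiple pods to mount it
  nfs:
    server: 192.168.1.10     # hypothetical NAS address
    path: /volume1/k8s/app   # hypothetical export on the NAS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # bind to the static PV above, not a dynamic class
  volumeName: app-data
  resources:
    requests:
      storage: 20Gi
```

Workloads then reference the claim by name, which is what makes moving them between clusters painless as long as the NAS stays put.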

And then you'll need to migrate your services off the old hardware onto the cluster one by one, which means learning Docker and k8s and how they work together.

There are some things I can't host on the cluster, like zwave2mqtt, which needs to sit in a central physical location in my house with access to a USB Z-Wave adapter. So not quite 100% ended up on the cluster; that one runs in Docker on a Raspberry Pi instead. (Technically you could do it by pinning the container to a single node and passing through the USB device, but I didn't see a reason to.)
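If you did want to pin such a workload, the usual pattern is a nodeSelector plus a hostPath mount for the device. A hedged sketch - the node hostname, device path, and image tag are assumptions, and `privileged` is a blunt way to get raw device access:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zwave
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zwave
  template:
    metadata:
      labels:
        app: zwave
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-with-stick   # hypothetical node name
      containers:
        - name: zwave
          image: zwavejs/zwave-js-ui:latest       # check the tag you actually want
          securityContext:
            privileged: true                      # simplest route to the raw USB device
          volumeMounts:
            - name: usb
              mountPath: /dev/ttyUSB0
      volumes:
        - name: usb
          hostPath:
            path: /dev/ttyUSB0                    # device path on that one node
```

The downside is exactly what the comment implies: the pod can only ever run on that one node, so you lose the rescheduling benefits that make k8s attractive in the first place.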

But service upgrades and adding new services, now that I'm used to it, are very easy. Expanding compute is also pretty easy. So maintenance has gone down a bunch - but it was a decent amount of work and learning to get there.

K8s is relatively specialized knowledge compared to general computer literacy. So in terms of someone being able to take over your work: if they already know k8s, it would be reasonably easy. If they don't, but are savvy enough to learn, it would take a while but wouldn't be too bad. If they don't already know their way around Linux and a terminal, though, they probably couldn't pick it up in a reasonable amount of time.

[–] shankrabbit@lemmy.world 2 points 6 months ago (1 children)

Any tips for someone who is running k8s on rpi4s and wants to switch architectures? Sounds like you did something similar, and while my rpis are holding strong, I want something with a little more power, like a few N100-based micro PCs.

[–] orb360@lemmy.ca 1 points 5 months ago

All the images I used already had x86 variants available. In fact, I had been building and pushing my own ARM variants of a few images to my own Nexus repository, which I've stopped doing since they aren't necessary anymore.

If you are using ARM-only images, you'll need to build your own x86 variants and host them.
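One way to do that is a multi-arch build with Docker buildx, which produces a manifest covering both architectures so the same tag works on the old and new nodes during the migration. A sketch, assuming a hypothetical private registry:

```shell
# One-time: create and select an emulation-capable builder
docker buildx create --use

# Build for both architectures and push under a single tag
# (registry.example.com is a placeholder for your own registry)
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myimage:latest \
  --push .
```

Once everything is on x86 you can drop the `linux/arm64` platform and the emulation overhead that comes with it.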

I created a brand new cluster from scratch and then set up the same storage PVs/PVCs and namespaces.

Then I'd delete the workloads from the old cluster, apply the same YAML to the new cluster, and update my DNS.

I used kubectx to swap between them.

Once I verified the new service was working, I'd move on to the next. Since the network storage was the same, it was pretty seamless. If you're using something like Rook to turn your nodes' disks into network storage, that would be much more difficult.

After everything was moved, I powered down the old cluster and waited a few weeks before wiping the nodes, in case I needed to power it back up and temporarily reapply a service.
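The per-service cutover described above boils down to a short loop of commands. A sketch - the context names and the `nextcloud/` manifest directory are placeholders, not anything from my actual setup:

```shell
# Contexts as switched with kubectx (or `kubectl config use-context`)
kubectx old-cluster
kubectl delete -f nextcloud/             # stop the workload on the old cluster

kubectx new-cluster
kubectl apply -f nextcloud/              # same manifests, new cluster
kubectl rollout status deploy/nextcloud  # wait until it's actually serving

# ...then repoint DNS at the new cluster's ingress/load-balancer IP
```

Deleting before applying matters when both clusters mount the same NFS storage, since you don't want two copies of a service writing to the same data at once.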

My old cluster was k8s on Raspbian, but the new one is all Talos. I also moved from a single control-plane node to a 3-node control plane (completely unnecessary, I just wanted to try it). That had no effect on any services.
