this post was submitted on 09 Jan 2024
69 points (98.6% liked)

Selfhosted


Currently, I run Unraid and have all of my services set up there as docker containers. While this is nice and easy to set up initially, it has some major downsides:

  • It's fragile. Unraid is prone to bugs/crashes with docker that take down my containers. It's also not resilient, so when things break I have to log in and fiddle.

  • It's mutable. I can't use any infrastructure-as-code tools like terraform, and configuration sort of just exists in the UI. I can't really roll back or recover easily.
  • It's single-node. Everything is tied to my one big server that runs the NAS, but I'd rather have the NAS as a separate fairly low-power appliance and then have a separate machine to handle things like VMs and containers.

So I'm looking ahead and thinking about what the next iteration of my homelab will look like. While I like Unraid for the storage stuff, I'm a little tired of wrangling it into a container orchestrator and hypervisor, and I think this year I'll split that job out to a dedicated machine. I'm comfortable with, and in fact prefer, IaC over fancy UIs, so I'd love to be able to use terraform or Pulumi or something like that. I would prefer something multi-node, as I want to be able to tie multiple machines together. And I want something that is fault-tolerant, as I host services for friends and family that currently require a lot of manual intervention to fix when they go down.

So the question is: how do you all do this? Kubernetes, docker-compose, HashiCorp Nomad? Do you run k3s, Harvester, or what? I'd love to hear what people are doing and why, so I can get some ideas for what I might do.

[–] johntash@eviltoast.org 1 points 10 months ago (5 children)

Thanks! I'll do some testing over the weekend and see how it goes.

While I'd love to be able to use it for postgres, I figured that wouldn't work out well, so I probably won't try it any time soon. I do have several apps that use sqlite databases though; do you think those would have any issues? e.g. trilium, ntfy, ghost

The main downside to most of the distributed/clustered storage I've tried is that it always seems to corrupt sqlite db files due to not supporting locking or some other POSIX feature. Reading through some older github issues, it looks like that's something the dev of seaweedfs has hopefully fixed.

[–] nico@r.dcotta.eu 2 points 10 months ago (4 children)

The problem with using seaweedfs to back your DBs is less about the implementation of POSIX features and more about the filesystem itself. When you are writing to a file and the connection to seaweedfs breaks (container restart, wifi, you name it), you might end up with a half-written file. If you're uploading pictures this is unlikely, but DBs usually do several writes per second, so it's much more likely that one of those gets interrupted. In my case, my grafana sqlite DB would get corrupted every other week.

What I recommend instead is keeping the DBs on the node's native filesystem and backing them up to seaweedfs periodically. That way your DBs just work, you can still get them running again from a backup if the node dies, and that backup is replicated in the distributed filesystem.

[–] johntash@eviltoast.org 1 points 10 months ago (1 children)

What I do right now is run an rclone sidecar container that uploads the files in a directory every few seconds, plus an init sidecar that runs before the main application and downloads those files (incl. sqlite dbs) to the normal disk. This works okay but feels pretty clunky, and it can still corrupt things because I'm just copying the db files rather than using any sqlite commands to back the db up to another file that isn't in use first.

How do you handle a job moving from one Nomad node to another? Or do you pin jobs like grafana to specific hosts?

[–] nico@r.dcotta.eu 1 points 10 months ago

Nomad has host volumes - you can tell it to mount a folder from the host machine into the container, and it will only schedule that container on machines that have that folder. So yes, effectively you pin the workload, thus introducing a SPOF. I don't love it, but Grafana only supports sqlite and postgres, and making those HA would require failover setups, which is a bit much for a homelab :')
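
Roughly, it looks something like this (paths and names here are just placeholders, not copied from my real config). You declare the host volume in the client config on the node that has the folder:

```hcl
# client config on the node that actually has the data
client {
  host_volume "grafana" {
    path      = "/opt/volumes/grafana"
    read_only = false
  }
}
```

and then the job requests it, which is what pins it to that node:

```hcl
# in the job spec: this group only gets scheduled on nodes exposing "grafana"
group "grafana" {
  volume "grafana" {
    type      = "host"
    source    = "grafana"
    read_only = false
  }

  task "grafana" {
    driver = "docker"

    config {
      image = "grafana/grafana:latest"
    }

    volume_mount {
      volume      = "grafana"
      destination = "/var/lib/grafana"
    }
  }
}
```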

For backing up, you can run the sqlite backup command periodically (via a cron job or a Nomad periodic job) and then upload the backup to some external, safe storage (could be seaweedfs or S3!). For postgres you can use something like this.
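
As a rough sketch of the periodic-job approach (untested; the image, paths, schedule and rclone remote are all placeholders - you'd want an image that actually has sqlite3 and rclone in it):

```hcl
job "grafana-backup" {
  type = "batch"

  periodic {
    cron             = "0 3 * * *"  # nightly at 03:00
    prohibit_overlap = true
  }

  group "backup" {
    # reuse the same host volume so the job lands on the node that has the data
    volume "grafana" {
      type   = "host"
      source = "grafana"
    }

    task "backup" {
      driver = "docker"

      config {
        image   = "alpine:3.19"  # placeholder: use an image with sqlite3 + rclone installed
        command = "/bin/sh"
        args = [
          "-c",
          "sqlite3 /data/grafana.db '.backup /tmp/grafana.bak' && rclone copy /tmp/grafana.bak myremote:backups/grafana/"
        ]
      }

      volume_mount {
        volume      = "grafana"
        destination = "/data"
      }
    }
  }
}
```

sqlite3's `.backup` uses the online backup API, which is what gets you a consistent copy instead of a half-written file.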
