I'm personally thinking of a small DIY rack stuffed with commodity HDDs off eBay, with a single LVM volume group spanned across a bunch of RAID1 arrays. I don't want any complex architectural solutions, since my homelab's scale always equals 1. As far as I can tell, this has little in the way of obvious drawbacks. What do you think?
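For concreteness, a minimal sketch of that layout with mdadm and LVM (device names and the volume group name are placeholders):

```
# Pair the disks into two RAID1 mirrors
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Span one volume group across both mirrors
pvcreate /dev/md0 /dev/md1
vgcreate vg_storage /dev/md0 /dev/md1

# One big logical volume, then a filesystem on top
lvcreate -l 100%FREE -n lv_data vg_storage
mkfs.ext4 /dev/vg_storage/lv_data
```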

top 29 comments
[–] fruitycoder@sh.itjust.works 1 points 1 hour ago

Why always scale to 1?

[–] Jason2357@lemmy.ca 8 points 7 hours ago (1 children)

Hot take: For personal use, I see no value at all in "availability," only data preservation. If a drive fails catastrophically and I lose a day waiting for a restore from backups, no one is going to fire me. No one is going to be held up in their job. It's not enterprise.

However, redundancy doesn't save you when a file is deleted, corrupted, ransomwared, or whatever. Your RAID mirror will just replicate the problem instantly. Snapshots and 3-2-1 backups are what matter to me, because when personal data is lost, it's lost forever.

I really do think a lot of hobbyists need to focus less on highly available redundancy and more on real backups. Both time and money are better spent on that.
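To make the 3-2-1 side concrete, a nightly job with a tool like restic covers it (restic is my example here, and the repository location and paths are invented):

```
# Push a snapshot to an offsite repository (created beforehand with `restic init`)
restic -r sftp:backup@offsite.example.com:/srv/restic backup /srv/data

# Keep a rolling window of snapshots and reclaim the rest
restic -r sftp:backup@offsite.example.com:/srv/restic forget \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```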

[–] SomethingBurger@jlai.lu 2 points 6 hours ago (1 children)

Agreed. RAID is useless. Your drives will never fail before you'd want to replace them with larger ones anyway.

[–] SparroHawc@lemmy.zip 1 points 2 hours ago* (last edited 2 hours ago)

That's true until it isn't.

Unrecoverable hard drive failures definitely occur, even early on in the life cycle of a drive. I like having a RAID-5 array ... but then again, I don't really have any other backups (which I really should fix).

What I really need is an ISP that doesn't have a 1.2TB data cap.

[–] arcayne@lemmy.today 5 points 15 hours ago

I'd recommend ZFS for most home server/NAS scenarios. Gives you everything you need, and nothing you don't.

Stuff like Ceph is just as hungry as it is powerful. Ceph's performance sweet spot barely begins at five dedicated nodes (ideally with at least a dozen drives each). I could never recommend it for home use, unless you want to run it in a lab for the sake of learning.

Source: I've designed/built/deployed several 1PB+ Ceph clusters over the last ~5yrs.
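In that spirit, a basic two-disk ZFS mirror for a home NAS is only a few commands (the pool name and disk paths are placeholders):

```
# Create a two-disk mirror; /dev/disk/by-id paths survive device reordering
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# One dataset per share keeps snapshots and properties scoped
zfs create tank/media
zfs set compression=lz4 tank/media
```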

[–] Devjavu@lemmy.dbzer0.com 8 points 1 day ago* (last edited 1 day ago) (1 children)

A single SSD with whatever formatting it came with, plus a WebDAV frontend I made myself. Very high security (confidentiality), actually, since I check for a client-side cert, user auth, biometrics (that's plural), behavior recognition through a custom typing website, and a hardware token - but the integrity could use some help. And I'm painfully aware that someone could just steal my session.

I love security.
You'll never get my duck nudes.

^In^ ^reality^ ^I^ ^just^ ^had^ ^a^ ^fun^ ^night^

[–] Devjavu@lemmy.dbzer0.com 8 points 1 day ago (1 children)

Shit I forgot to install a firewall.

[–] InnerScientist@lemmy.world 8 points 10 hours ago (1 children)

No worries, I installed it for you.

[–] Devjavu@lemmy.dbzer0.com 4 points 8 hours ago

Pfew, close one.

Wait a minute.

[–] azureskypirate@lemmy.zip 1 points 21 hours ago

I've got Proxmox running on an NVMe mirror. Two HDDs are passed through to a Turnkey Linux mediaserver VM; they're mirrored with BTRFS and act as storage. I'm satisfied with all three (Proxmox, Turnkey, BTRFS) and would recommend them.

I had one BTRFS drive fail, and replacing it with no experience took about an hour.
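For anyone facing the same, the swap boils down to a few commands (the devid and device path here are examples):

```
# Find the failed device's id in the filesystem listing
btrfs filesystem show /mnt/storage

# Replace device 2 with the new disk, then watch progress
btrfs replace start 2 /dev/sdX /mnt/storage
btrfs replace status /mnt/storage
```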

I do wish there were better user documentation for WebDAVcgi, the WebDAV frontend in Turnkey Linux mediaserver.

mediaserver comes with Samba, so I use that to connect devices like my phone or laptop to the server.

Turnkey's mediaserver was my replacement for Openmediavault with the Filebrowser plugin. Filebrowser creates an internal user to write anything uploaded via the web interface, so if you mount the folder later via NFS, the permissions don't match. Openmediavault would stall or crash a lot as a container, and especially as a VM, but maybe it runs better on bare metal.

[–] spacemanspiffy@lemmy.world 7 points 1 day ago

I have a few ext4 drives connected, I mount them via /etc/fstab, and that's it.

I've yet to find a reason to change it.
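For reference, a typical entry of that sort looks like this (the UUID and mount point are placeholders; `blkid` prints the real UUIDs):

```
# /etc/fstab - one line per drive
UUID=replace-with-real-uuid  /mnt/data1  ext4  defaults,noatime  0  2
```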

[–] squinky@sh.itjust.works 8 points 1 day ago (2 children)
[–] melfie@lemy.lol 2 points 1 day ago

Ha, I went down the whole Ceph and Longhorn path as well, then ended up with hostPath and btrfs. Glad I’m not the only one who considers the former options too much of a headache after fully evaluating them.

[–] MrModest@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (3 children)

Why btrfs and not ZFS? In my info bubble, btrfs has a reputation as an unstable FS, and people have ended up with unrecoverable data.

[–] unit327@lemmy.zip 4 points 1 day ago

Btrfs used to be easier to install because it's part of the kernel, while ZFS required shenanigans, though I think that has changed now.

Btrfs also just works with whatever mismatched drive sizes you throw at it, and adding more later is easy. This used to be impossible with ZFS pools, but I think it's a feature now?
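Growing a btrfs array that way really is a two-liner (the mount point and device are placeholders):

```
# Add the new disk, then rebalance so data and metadata stay raid1
btrfs device add /dev/sdX /mnt/pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```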

[–] ikidd@lemmy.world 3 points 1 day ago

Just the RAID 5/6 modes are shit. And its weird willingness to let you boot a degraded array without telling you a drive is borked.

[–] non_burglar@lemmy.world 3 points 1 day ago

That is apparently not the case anymore, but ZFS is certainly richer in features and more battle-tested.

[–] pokexpert30@jlai.lu 13 points 2 days ago

Longhorn is goated for managing volume availability across geographically distant nodes.

If you're running a one-node show, hostPath will do fine (or just don't kubernetes at all, tbh).

[–] panda_abyss@lemmy.ca 20 points 2 days ago* (last edited 2 days ago) (2 children)

I set up Garage, which works fine.

The advantage of an S3-style layer is its simplicity and its integration with apps.

I also use it so I can run AI agents that have zero access to any disk-based filesystem.
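As an illustration, once Garage is running, any standard S3 client can point at it (the endpoint below assumes Garage's default S3 port of 3900, and the bucket name is made up):

```
# Use the stock AWS CLI against the local Garage endpoint
aws --endpoint-url http://localhost:3900 s3 mb s3://backups
aws --endpoint-url http://localhost:3900 s3 cp ./archive.tar.gz s3://backups/
```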

[–] melfie@lemy.lol 2 points 1 day ago

I was considering MinIO, then evaluated Garage, then decided it wasn't worth the trouble, since a lot of the things I host don't even natively support object storage. I do use LFS with Forgejo, and it would've made sense there; maybe Jellyfin supporting object storage would be a tipping point.

[–] possiblylinux127@lemmy.zip 4 points 1 day ago

Just out of curiosity, what are you using S3 for?

[–] Cyber@feddit.uk 3 points 1 day ago

Backups... with LVM, if you're trying to do a full-system backup (i.e. with Clonezilla, etc.), you have to back up the whole volume group - you can't back up just one drive.

I have a media server with 2x 2TB HDDs and 1x SSD in one LVM volume group, split into Music, Video, TV... and the OS... and I can back up the individual files, of course, but I can't image just the OS drive.

btrfs didn't exist when I created it, but I use it on my NAS and it's great.

I'll be rebuilding my media server one day and will switch from LVM to btrfs.

[–] Dalraz@lemmy.ca 13 points 2 days ago

This has been my journey.

I started with pure Docker and host paths on an Ubuntu server. This worked well for me for many years, and it's good for most people.

Later I really wanted to learn k8s, so I built a 3-node cluster with NFS-managed PVCs for storage; this was fantastic for learning, and I enjoyed it for 3+ years. It all ran on top of Proxmox and ZFS.

About 8 months ago I decided I was done with my k8s learning and wanted more simplicity in my life. I created a Docker LXC and slowly migrated all my workloads back to Docker and host paths, this time backed by my mirrored ZFS filesystem.

I guess my point is: figure out what you're hoping to get out of your journey, then tailor your solution to that.

Also I do recommend using proxmox and zfs.

[–] entropicdrift@lemmy.sdf.org 11 points 2 days ago (1 children)

I just use mergerfs and SnapRAID so I can scale dynamically when I can afford new drives. Granted, it's all fully replaceable media files on my end, so I'm not obsessed with data integrity.
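For anyone curious, the core of such a setup is two small config snippets (paths and drive names here are invented):

```
# /etc/fstab - pool the data drives into one mergerfs mount
/mnt/disk*  /mnt/storage  fuse.mergerfs  cache.files=off,category.create=mfs,moveonenospc=true  0  0

# /etc/snapraid.conf - parity on a dedicated drive, content lists on the data drives
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
```

A scheduled `snapraid sync` then keeps the parity up to date.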

[–] rcmd@lemmy.world 5 points 2 days ago (2 children)

Well, this path seems to be the most appropriate for what I'm after.

Better yet, both mergerfs and SnapRAID are available out of the box in the latest stable Debian release.

Thanks for pointing me at it!

[–] signalsayge@infosec.pub 5 points 1 day ago

This is what I'm doing as well. The nice thing is that it supports different-sized drives in the same mergerfs mount, and with SnapRAID you just need to make sure one of your biggest drives is the parity drive. I've got 10 drives right now, with 78TB usable in the mergerfs mount and two 14TB drives acting as parity, and I've been able to build it up slowly over the years.

[–] entropicdrift@lemmy.sdf.org 2 points 2 days ago

Happy to help!

[–] skilltheamps@feddit.org 6 points 2 days ago

You need to ask yourself what properties you want in your storage, then you can judge which solution fits. For me it is:

  • effortless rollback (e.g. when a service updates, runs a DB migration, and fails)
  • effortless backups that preserve database integrity, without slow/cumbersome/downtime-inducing crutches like SQL dumps
  • a scheme that works the same way for every service I host - no tailored solutions for individual services/containers
  • low maintenance

The amount of data I'm handling fits on larger hard drives (so I don't need pools), but I don't want to waste storage space. And my homeserver is no longer my learn-and-break-stuff environment; it just needs to work.

I went with btrfs RAID 1, with every service in its own subvolume. Containers are pinned by their digest hashes, which get snapshotted together with all persistent data, so every snapshot holds exactly the data required for a seamless rollback. Snapper maintains a timeline of snapshots for every service. Updating is semi-automated: snapshot -> update digest hashes from container tags -> pull new images -> restart service. Nightly offsite backups happen with btrbk, which mirrors snapshots incrementally to another offsite server running btrfs.
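Condensed into shell, one update cycle would look roughly like this (the service name, subvolume layout, and compose invocation are my guesses at the setup described, not the actual script):

```
SERVICE=nextcloud            # hypothetical service name
SUBVOL="/srv/$SERVICE"       # assumed one-subvolume-per-service layout

# 1. Read-only snapshot of the subvolume (persistent data + pinned digests) for rollback
btrfs subvolume snapshot -r "$SUBVOL" "/srv/.snapshots/$SERVICE-$(date +%F)"

# 2. Refresh the digest pins, pull new images, restart
docker compose -f "$SUBVOL/compose.yaml" pull
docker compose -f "$SUBVOL/compose.yaml" up -d
```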

You can use OpenEBS to provision and manage LVM volumes; hostPath requires you to manage the host paths manually.
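A sketch of what that can look like with OpenEBS's lvm-localpv CSI driver; the StorageClass parameters below reflect my reading of the OpenEBS docs, and the volume group name is a placeholder:

```
# Define a StorageClass backed by an existing LVM volume group on the node
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "vg_data"   # placeholder: must match a VG that exists on the node
EOF
```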