Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Yeah, RAID 5 in 2025 for a NAS? A big no-no.
I'm new to this topic and only recently learned about RAID levels. Why is it a big no no?
I'm in the same boat. Based on the things I've learned in the last hour or two, ZFS RAIDz1 is just newer and better. Someone told me that ZFS will help prevent bit rot, which is a concern for me, so I'm assuming ZFS RAIDz1 also does this, though I haven't confirmed it yet. I'm designing my enclosure now and haven't looked into that yet.
Yup, it does that. You can run a scrub whenever you want and it'll verify every block against its checksum. Or you can just open the files and it'll check them at read time.
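For reference, a manual scrub is just a couple of commands (the pool name `tank` is a placeholder):

```shell
# Start a scrub: reads every block in the pool, verifies checksums,
# and repairs bad blocks from redundancy where possible
zpool scrub tank

# Check scrub progress and any errors found
zpool status -v tank
```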
My understanding is that the only issues were the write hole on power loss for RAID 5/6, and rebuild failures due to unseen damage on the surviving drives.
Issues with single-drive rebuild failures should be largely mitigated by regular drive surface checks, and by scrubbing if the filesystem supports it. That should ensure any single-drive errors that might have been masked by RAID are corrected and all drives contain the correct data.
The write hole itself could be entirely mitigated since the OP is building their own system. What I mean by that is that they could include a "mini UPS" to keep 12v/5v up long enough to shut down gracefully in a power loss scenario (use a GPIO for the "power good" signal). Now, back in the day we had RAID controllers with battery backup to hold the cache memory contents and flush them to disk on regaining power. But those became super rare quite some time ago. Also, hardware RAID always had the problem of finding a compatible replacement if the controller itself died.
Is there another issue with RAID 5/6 that I'm not aware of?
That's a fuckin great idea.
I was looking at doing something similar with my Asustor NAS. That is, supply the voltage, battery, charging circuit myself, and add one of those CH347 USB boards to provide I2C/GPIO etc and just have the charging circuit also provide a voltage good signal that software on the NAS could poll and use to shut down.
Nice. For the Pi5 running Pi OS, do you think using a GPIO pin to trigger a sudo shutdown command would be graceful enough to prevent issues?
I think so. I would consider allowing a short time without power before doing that, to ride out brief cuts and brownouts.
So perhaps poll once per minute, and if there's no power for more than 5 polls, trigger a shutdown. Make sure you can provide power for at least twice as long as the grace period. You could be a bit more flash and measure the battery voltage, and if it drops below a certain threshold, send a more urgent shutdown on another GPIO. But really, if the batteries are good for 20+ minutes, it should be quite safe to do it on a timer.
The logic could be a bit more nuanced, e.g. shortening the grace period when multiple short power cuts happen in succession (since the batteries could already be somewhat drained). But this is all icing on the cake, I'd say.
"sudo shutdown" waits 60 seconds before shutting down; "sudo shutdown now" (which is what I usually use) doesn't. I'm thinking I could launch a script on startup that checks a pin every x seconds and runs a shutdown command once it gets pulled low.
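A minimal sketch of that startup script's grace-period logic (the poll interval, grace count, and the `read_power_good` callback are illustrative assumptions; on a real Pi, `read_power_good` would read the GPIO pin, e.g. via gpiozero):

```python
import subprocess
import time


def monitor_power(read_power_good, poll_interval=60, grace_polls=5,
                  shutdown_cmd=("sudo", "shutdown", "now")):
    """Poll a power-good signal and shut down after a sustained outage.

    read_power_good: callable returning True while mains power is present
                     (on a Pi this would read the GPIO pin; it's injected
                     here so the logic can be tested without hardware).
    """
    misses = 0
    while True:
        if read_power_good():
            misses = 0  # power is back: reset the grace counter
        else:
            misses += 1
            if misses >= grace_polls:
                # Outage outlasted the grace period: shut down cleanly.
                subprocess.run(shutdown_cmd, check=False)
                return
        time.sleep(poll_interval)
```

Injecting the pin-reading function keeps the grace-period logic testable without hardware, and brief brownouts just reset the counter instead of triggering a shutdown.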
For me, RAID 5 has always been great, but ZFS is just... better. Snapshots, scrubs, datasets... I also like how easily you can export/import a pool, and stuff. It's just better overall.
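For anyone curious, the snapshot and export/import bits mentioned above look like this (`tank` and the dataset/snapshot names are placeholders):

```shell
# Take a read-only, point-in-time snapshot of a dataset
zfs snapshot tank/media@before-upgrade

# Cleanly detach the pool from this machine...
zpool export tank

# ...and attach it on another (or the same) machine
zpool import tank
```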
TIL. Looking into ZFS RAIDz1.