this post was submitted on 29 Nov 2024
21 points (88.9% liked)

Selfhosted

Hi!

I used to have three RAID1 arrays:

2 x 4 TB SSD dedicated to storing personal data.

2 x 6 TB HDD dedicated to storing "ISOs", the eye-patched ones.

2 x 4 TB SSD for backup.

Ext4 everywhere.

I ran this setup for years, maybe even 20 (with many different disks and sizes over time).

I decided it was time to be more efficient.

I removed the two HDDs, saving quite a lot of power, and switched the four SSDs to RAID5, then put btrfs on top of that. Please note, I am not using the btrfs RAID feature, but Linux mdadm software RAID (which has been rock solid for me for years) with btrfs on top, as if it were a single drive.
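In case it helps anybody, this is roughly the shape of it; the device names and mount point below are placeholders, not my actual ones, so treat it as a sketch rather than a copy-paste recipe:

```python
# Rough sketch of the layering described above (run as root).
# /dev/sd[a-d], /dev/md0 and /mnt/data are placeholder names.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Build a 4-disk RAID5 array with plain mdadm software RAID.
run(["mdadm", "--create", "/dev/md0", "--level=5", "--raid-devices=4",
     "/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"])

# 2. Put a regular btrfs filesystem on top of the md device,
#    treating it as a single drive (no btrfs RAID profiles).
run(["mkfs.btrfs", "-L", "data", "/dev/md0"])

# 3. Mount it.
run(["mount", "/dev/md0", "/mnt/data"])
```

Redundancy is entirely md's job here; btrfs just sees one big device.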

I chose md not only for my very positive past experience, but especially because I love how easy it is to recover and restore from many kinds of issues.

I chose not to try ZFS because I don't feel comfortable using out-of-kernel drivers, and I dislike how RAM-hungry ZFS seems to be.

What do you guys think?

top 14 comments
[–] solrize@lemmy.world 9 points 5 days ago (2 children)

SSDs for backup? Being rich must be nice. More seriously, if you have the upstream pipe for it, remote backups are preferable in case something happens at home.

[–] Shimitar@feddit.it 2 points 4 days ago (1 children)

Yes, I follow the 3-2-1 rule: one local backup on the HDD, one on another disk at home (connected to an OpenWrt router) and one offsite on my VPS.

I was using SSDs for backup because I was dumb... I guess...

I needed extra space and mindlessly added the HDDs without realizing I should have moved to a more efficient approach.

[–] Deckweiss@lemmy.world 1 points 4 days ago* (last edited 4 days ago) (2 children)

borgbackup has great versioning, deduplication and compression.
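Something like the sketch below is the whole workflow; the repository path, compression choice and backed-up directory are made up just to show the shape of it:

```python
# Minimal borg workflow sketch -- repository path, compression choice and
# the backed-up directory are placeholders, not a recommendation.
import subprocess

REPO = "/mnt/backup/borg-repo"  # could also be user@host:path for a remote repo

def borg(*args):
    subprocess.run(["borg", *args], check=True)

# One-time: create an encrypted, deduplicating repository.
borg("init", "--encryption=repokey", REPO)

# Each run: a compressed, deduplicated snapshot (this is the versioning part).
borg("create", "--compression", "zstd",
     REPO + "::data-{now:%Y-%m-%d}", "/srv/data")

# Keep a rolling window of snapshots.
borg("prune", "--keep-daily", "7", "--keep-weekly", "4",
     "--keep-monthly", "6", REPO)
```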

[–] abeorch@lemmy.ml 4 points 4 days ago

I saw something about Restic as an alternative to Borgbackup when I was looking around at what to do about backups: https://github.com/restic/restic/

[–] Shimitar@feddit.it 1 points 4 days ago

I use restic and backrest

[–] jlh@lemmy.jlh.name 2 points 5 days ago (1 children)

Right, something like a Hetzner storage box is a good complement to RAID5 in order to follow the 3-2-1 backup rule. You can use rclone to sync your backup to Hetzner, even encrypt it, and they can do automatic snapshots on their end to protect against ransomware.
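Roughly, once the sftp remote and a crypt remote on top of it exist in `rclone config`, the sync itself is a one-liner; "storagebox-crypt" and the paths below are made-up names, just to illustrate:

```python
# Sketch of pushing a local backup directory to a Hetzner storage box.
# "storagebox-crypt" is a hypothetical crypt remote wrapping an sftp remote,
# both set up beforehand with `rclone config`; paths are placeholders.
import subprocess

def rclone(*args):
    subprocess.run(["rclone", *args], check=True)

# Mirror the local backups to the remote; crypt handles client-side encryption.
rclone("sync", "--transfers", "4", "/srv/backups", "storagebox-crypt:backups")

# Sanity check: list what ended up on the remote (decrypted view).
rclone("ls", "storagebox-crypt:backups")
```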

[–] Shimitar@feddit.it 2 points 4 days ago

I do my offsite backup on a VPS I rent, all managed by restic/backrest.
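For reference, the offsite part boils down to something like this; the hostname, repository path and password file are placeholders, and backrest essentially just schedules the same kind of restic runs:

```python
# Sketch of an offsite restic backup over SFTP to a rented VPS.
# Hostname, repository path and password file are placeholders.
import os
import subprocess

env = {**os.environ,
       "RESTIC_REPOSITORY": "sftp:backup@vps.example.com:/srv/restic-repo",
       "RESTIC_PASSWORD_FILE": "/root/.restic-password"}

def restic(*args):
    subprocess.run(["restic", *args], check=True, env=env)

restic("init")                 # one-time repository setup (encrypted)
restic("backup", "/srv/data")  # deduplicated snapshot of the data
restic("forget", "--keep-daily", "7", "--keep-weekly", "4", "--prune")
```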

[–] jlh@lemmy.jlh.name 2 points 5 days ago (1 children)

Looks like a good setup to me. HDDs have a lot of downsides, so if you can afford the extra $20/TB, an all-flash array is super useful. mdadm is rock solid.

The only issue I think is that expanding this array isn't as straightforward as it is on LVM or ZFS, so just watch out for that.

[–] Shimitar@feddit.it 1 points 4 days ago

Good point on the expansion. But I am not too bothered about it, as I have always expanded by moving data around. It takes a while, but it leaves you with a set of disks with the old data still there, which has saved my ass a few times in the past. Now I should be fine with good backups, but you never know.

[–] poVoq@slrpnk.net 1 points 4 days ago* (last edited 4 days ago) (1 children)

Btrfs on a single device prevents it from doing auto-correction via checksums. I would get rid of the RAID5 and do a btrfs raid1 out of these devices. It also makes it easier to swap out devices or expand the array, since btrfs supports arbitrary device sizes.
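Something along these lines; the device names and mount point are placeholders:

```python
# Sketch of a native btrfs raid1 across the four SSDs.
# Device names and mount point are placeholders.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# Mirror both data and metadata across the devices; btrfs copes with
# mixed sizes and lets you add/remove devices later.
run(["mkfs.btrfs", "-d", "raid1", "-m", "raid1",
     "/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"])

# Mounting any member device mounts the whole filesystem.
run(["mount", "/dev/sda", "/mnt/data"])
```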

[–] Shimitar@feddit.it 1 points 4 days ago (1 children)

The point was to get 12 TB instead of 8 TB... raid1 would negate that...

What do you mean by auto-correction via checksums?

[–] poVoq@slrpnk.net 3 points 4 days ago (1 children)

One of the main features of filesystems like btrfs or ZFS is that they store a checksum for each file's data and can compare it to the data currently on disk to notice files becoming corrupted. With a single drive, all btrfs can do is inform you that the checksum no longer matches the file and that it is thus likely corrupted, but on a btrfs raid it can look at one of the still-correct copies and heal the file from that.

IMHO the little extra space from mdadm RAID5 is not worth the much reduced flexibility in future drive composition compared to a native btrfs raid1.
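In practice that check-and-repair pass is a scrub; a periodic run like the sketch below (the mount point is a placeholder) is usually all it takes:

```python
# Sketch of a periodic scrub: btrfs re-reads all data, compares it against
# the stored checksums and, on a btrfs raid1, repairs bad blocks from the
# good copy. /mnt/data is a placeholder mount point.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

run(["btrfs", "scrub", "start", "-B", "/mnt/data"])  # -B: wait for completion
run(["btrfs", "scrub", "status", "/mnt/data"])       # errors found/corrected
run(["btrfs", "device", "stats", "/mnt/data"])       # per-device error counters
```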

[–] Shimitar@feddit.it 1 points 4 days ago (1 children)

Interesting feature, but how often is that needed? I have never had such an issue in my life.

Is it something that can happen on such filesystems? Then I guess ext4 is far superior (/s).

Jokes aside, the extra space with RAID5 is far more tangible and statistically sound than the bit-flipping fearmongering.

Which has never happened to me so far.

Frankly, if a pixel of a photo or a piece of a scene in a movie is corrupted, I will never know and never even notice. Nor would a corrupted bit in a text note be critical.

I cannot think of any irreplaceable file of mine that would actually suffer from a bit rot issue.

Can you? I would be interested in what kind.

[–] poVoq@slrpnk.net 2 points 4 days ago* (last edited 4 days ago)

It has almost certainly happened to you, but you are simply not aware of it, as filesystems like ext4 are completely oblivious to it happening, and larger video formats, for example, are relatively robust to small corruptions.

And no, this doesn't only happen due to random bit flips. There are many reasons for files becoming corrupted, and it often happens on older drives that are nearing the end of their lifespan; good handling of such errors can extend the safe use of older drives significantly. It can also help mitigate the risks of non-ECC memory to some extent.

Edit: And my comment regarding mdadm RAID5 was about it requiring equal-sized drives and not being able to shrink or expand the size and number of drives on the fly, as is possible with btrfs raids.
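For example, reshaping a btrfs raid1 while it stays mounted is roughly this; device and mount names are placeholders:

```python
# Sketch of reshaping a mounted btrfs raid1 on the fly -- add a drive of any
# size, rebalance, or remove one. Device and mount names are placeholders.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# Grow: add another drive and spread existing data over it.
run(["btrfs", "device", "add", "/dev/sde", "/mnt/data"])
run(["btrfs", "balance", "start", "--full-balance", "/mnt/data"])

# Shrink: remove a drive; btrfs migrates its data to the remaining ones.
run(["btrfs", "device", "remove", "/dev/sda", "/mnt/data"])
```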