this post was submitted on 26 Feb 2026
160 points (98.8% liked)

Selfhosted


I have a 56 TB local Unraid NAS that is parity-protected against a single drive failure. I figure a single drive failing and being recovered from parity covers data loss 95% of the time, but I'm always concerned about two drives failing at once, or a site- or system-wide disaster that takes out the whole NAS.

For other larger local hosters who are smarter and more prepared, what do you do? Do you sync it off site? How do you deal with cost and bandwidth needs if so? What other backup strategies do you use?

(Sorry if this standard scenario has been discussed - searching didn't turn up anything.)

(page 2) 50 comments
[–] Brkdncr@lemmy.world 15 points 1 week ago (2 children)

Backup to 2nd nas.

Important stuff gets backed up to cloud storage. Whatever is cheapest.

In my case Synology c2 cloud was cheapest.

[–] irmadlad@lemmy.world 11 points 1 week ago (2 children)

I'm not sure if I qualify as a 'larger local hoster', but I would go through your 20 TB and decide what is really important enough to back up in case the wheels fall off. Linux ISOs can be re-downloaded, although it would take a bit of time. The things that can't be readily re-downloaded, such as the music collection I have been accumulating for decades, converting to FLAC, and meticulously tagging, are my backup priorities. Pictures, business documents, and personal documents can't be re-downloaded either, so they go on the 'must back up' list... and so on. Just cull out what is and isn't replaceable. I would bet that once you do that, your 20 TB will be a bit slimmer, and you're not trying to push 20 TB up the pipe to a cloud backup.

I use Backblaze's Personal unlimited tier at $99 USD per year, which is a pretty sweet deal. One thing to remember about Backblaze is that the drives being backed up must be physically connected to the PC doing the backup/uploading. I get around that because I have a hot-swap bay on my main PC, but there are other methods and software that can masquerade your NAS or other storage as a physically connected drive.

[–] cmnybo@discuss.tchncs.de 3 points 1 week ago (2 children)

Backblaze personal doesn't support Linux or BSD, so it would be useless for a NAS.

[–] countstex@feddit.dk 2 points 1 week ago (2 children)

I use Backblaze too. I started with the Personal backup but swapped to the B2 solution, as it was supported by my NAS. The cost of the actual storage isn't much; most of the cost is in access. So for data that doesn't change much it worked out just as cheap, and it was easier to do things that way.

[–] irmadlad@lemmy.world 2 points 1 week ago

and easier to do things that way.

I'm cheap and my labor is free. LOL But you do have a point.

[–] worhui@lemmy.world 10 points 1 week ago* (last edited 1 week ago)

LTO tape. But I only have 15 TB.

It quickly becomes cost-effective when you actually need the data to be safe, and it's far easier to have off-site backups. I have never had a problem, but I like having an offline backup. Most of the time my data is static, so I am mostly just backing up project files and changes.

If you have 40+ TB of dynamic data, I can't help there.

Edit: I buy used drives that are usually two generations old, so I got LTO-5 drives when LTO-7 was new. The used drives may be less reliable, but they can be 1/10th the price of the newest ones.

[–] MentalEdge@sopuli.xyz 9 points 1 week ago* (last edited 1 week ago) (1 children)

Recently helped someone get set up with backblaze B2 using Kopia, which turned out fairly affordable. It compresses and de-duplicates leading to very little storage use, and it encrypts so that Backblaze can't read the data.

Kopia connects to it directly. To restore, you just install Kopia again and enter the same connection credentials to access the backup repository.

My personal solution is a second NAS off-site, which periodically wakes up and connects to mine via VPN; during that window, Kopia is set to update my backups.

Kopia figures out which parts of the filesystem have changed very quickly, and only those changes are transferred during each update.
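For reference, a rough sketch of that Kopia-to-B2 workflow on the CLI; the bucket name, paths, and environment variables below are placeholders, not details from the comment:

```shell
# One-time: create an encrypted, deduplicated repository in a B2 bucket
kopia repository create b2 --bucket=my-backups \
  --key-id="$B2_KEY_ID" --key="$B2_APP_KEY"

# Each backup run only uploads chunks that changed
kopia snapshot create /mnt/user/data

# Disaster recovery on a fresh machine: reconnect with the same
# credentials and repository password, then restore
kopia repository connect b2 --bucket=my-backups \
  --key-id="$B2_KEY_ID" --key="$B2_APP_KEY"
kopia snapshot list                          # find the snapshot ID
kopia snapshot restore <snapshot-id> /mnt/restore
```

The repository password set at creation time is what makes the data unreadable to Backblaze, so it needs to be stored somewhere outside the NAS being backed up.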

[–] NekoKoneko@lemmy.world 4 points 1 week ago (2 children)

The Backblaze option is something I've seriously considered.

Any reason this person didn't go with the $99/year Personal backup plan? It says 'unlimited', and it would be for my household only, but maybe I'm missing something about how difficult it is to set up on Unraid or other NAS software. B2's $6/TB/month rate would put me at around $150/month, which is not great.

[–] Scrollone@feddit.it 4 points 1 week ago

You can't use the $99/year plan for that: the official client only works as a desktop application on Windows and macOS.

[–] MentalEdge@sopuli.xyz 3 points 1 week ago (20 children)

They only needed about 500GB.

And Personal is for desktop systems. You have to use Backblaze's macOS/Windows desktop application, and the setup is not zero-knowledge on Backblaze's part; they literally advertise being able to ship you your files on a physical device if need be.

Which some people are ok with, but not what most of us would want.


Entire NAS (~24 TB used) is replicated to another NAS in another building (two, actually). I like having 3 copies.

[–] randombullet@programming.dev 6 points 1 week ago (3 children)

I have 3 main NASes:

  • 78 TB (52 TB usable) hot storage, ZFS RAID-Z1
  • 160 TB (120 TB usable) warm storage, ZFS RAID-Z2
  • 48 TB (24 TB usable) off-site, ZFS mirror

I rsync every day from hot to off site.

And once a month I turn on my warm storage and sync it.

Warm and hot storage is at the same location.

Off-site storage is with a family friend whom I trust. The data isn't encrypted apart from in transit; that's something I'd like to work on later.
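A daily hot-to-off-site push like the one described above is typically just rsync over SSH driven by cron; a minimal sketch, with placeholder host and paths:

```shell
# Run from cron on the hot-storage box, e.g. daily at 03:00:
#   0 3 * * * /usr/local/bin/offsite-sync.sh
# -a preserves permissions/timestamps, -H preserves hardlinks,
# --partial lets interrupted transfers of large files resume
rsync -aH --partial --delete /mnt/hot/ backup@offsite.example:/mnt/mirror/
```

Note that --delete mirrors deletions too, so on its own this protects against hardware loss but not against accidental deletion; snapshots on the receiving side cover that gap.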

Core vital data, about 10 TB of it, is sprinkled around different continents. I have 2 nodes in 2 countries for vital data; these are with family.

I think I have 5 total servers.

Cost is a lot obviously, but pieced together over several years.

The world will end before my data gets destroyed.

[–] Decronym@lemmy.decronym.xyz 5 points 1 week ago* (last edited 3 days ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Git: popular version control system, primarily for code
HTTP: Hypertext Transfer Protocol, the Web
HTTPS: HTTP over SSL
NAS: Network-Attached Storage
RAID: Redundant Array of Independent Disks, for mass storage
SSD: Solid State Drive, mass storage
SSL: Secure Sockets Layer, for transparent encryption
VNC: Virtual Network Computing, for remote desktop access
VPN: Virtual Private Network
ZFS: Solaris/Linux filesystem focusing on data integrity

[Thread #119 for this comm, first seen 26th Feb 2026, 15:51]

[–] danielquinn@lemmy.ca 5 points 1 week ago (1 children)

Honestly, I'd buy six external 20 TB drives and make two copies of your data (three drives each), then leave them somewhere safe but not at home. If you have friends or family able to store them, that'd do, but a safe-deposit box is also good.

If you want to make frequent updates to your backups, you could attach the drives to a Raspberry Pi and put it on Tailscale, then just rsync changes regularly. Of course, that means wherever you're storing the backup needs room for such a setup.
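A sketch of that Pi arrangement, assuming Tailscale's MagicDNS; the hostname and paths are placeholders:

```shell
# One-time on the Pi: join it to your tailnet
sudo tailscale up

# From the NAS: push only changed files to the Pi over the tailnet
# ("backup-pi" is a placeholder MagicDNS hostname)
rsync -a --delete /mnt/user/important/ pi@backup-pi:/mnt/usb/backup/
```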

I often wonder why there isn't some sort of collective backup sharing going on among self-hosters, a sort of "I'll host your backups if you host mine" thing. Better than paying a cloud provider, at any rate.

[–] Joelk111@lemmy.world 4 points 1 week ago* (last edited 1 week ago) (3 children)

The NAS software company that Linus (of Linus Tech Tips) funded has a feature like this planned, I think.

An open-source standalone implementation would be dope as hell. Sure, it'd mean you'd need to double your NAS capacity (you'd have to offer as much storage as you use), but that's way easier than building a second NAS and storing/maintaining it somewhere else, or constantly paying for and managing a cloud backup.

[–] Cyber@feddit.uk 5 points 1 week ago (3 children)

What's your recovery needs?

It's OK to take 6 months to back up to a cloud provider, but do you need all your data recovered in a short period of time? If so, cloud isn't the solution; you'd need a duplicate set of drives nearby (but not close enough to be hit by the same flood, fire, etc.).

But if you're OK waiting for the data to download again (check the storage provider's costs for that specific scenario), then your main factor is how much data changes after that initial upload.

[–] kaotic@lemmy.world 5 points 1 week ago (2 children)

Backblaze offers unlimited data on a single computer for $99/year.

There might be some fine print that excludes your setup, but it might be worth investigating.

https://www.backblaze.com/cloud-backup/pricing

[–] Joelk111@lemmy.world 3 points 1 week ago (1 children)

Yeah, people have done workarounds to get their entire NAS backed up, but those seemed sketchy when I looked into it.

[–] irmadlad@lemmy.world 3 points 1 week ago

Wine, or there is a Docker container that runs the Backblaze client.

[–] Treczoks@lemmy.world 5 points 1 week ago

As someone who has experienced double failure twice in my lifetime, I seriously recommend doing backups.

The problem is that at this size the only serious backup solution is another set of HDDs. A robotic library for tapes or WORM drives is probably out of budget.

[–] Bishma@discuss.tchncs.de 5 points 1 week ago* (last edited 1 week ago)

Like others, I have a 2 tier system.

About 2 TB of my (Synology) NAS is critical files. Those get sent via Hyper Backup to cloud storage on at least a weekly basis, some daily. I have them broken up into multiple tasks with staggered schedules so it never has much to do on any given day.

The other 16 TB gets synced (again with Hyper Backup, but not as a scheduled backup task) to a 20 TB external drive roughly once per quarter. Then that drive lives in the closet of a family member.

[–] unit327@lemmy.zip 4 points 1 week ago* (last edited 1 week ago) (5 children)

I use the AWS S3 Deep Archive storage class at $0.001 per GB per month. But your upload bandwidth really matters in this case: I only back up a subset of the most important things this way, otherwise it would take months just to upload a single backup. Using rclone sync instead of uploading the whole thing each time helps, but you still have to get that first upload done somehow...

I have a complicated system where:

  • borgmatic backups happen daily, locally
  • those backups are stored on a btrfs subvolume
  • a Python script makes a read-only snapshot of that volume once a week
  • the snapshot is synced to S3 using rclone with --checksum --no-update-modtime
  • once the upload is complete, the btrfs snapshot is deleted

I've also set up encryption in rclone so that all the data is encrypted and unreadable by AWS.
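A sketch of what that weekly snapshot-and-upload script might look like. The subvolume path, bucket, and remote name are placeholder assumptions ("s3crypt" would be an rclone crypt remote wrapping an S3 remote configured with storage_class = DEEP_ARCHIVE):

```shell
#!/bin/sh
# Weekly upload of local borgmatic backups to S3 Deep Archive.
set -eu

SNAP=/mnt/backups/.snapshot-$(date +%F)

# 1. Read-only btrfs snapshot so the upload sees a frozen view
btrfs subvolume snapshot -r /mnt/backups "$SNAP"

# 2. Sync the snapshot; --checksum avoids re-uploading unchanged files,
#    --no-update-modtime skips pointless metadata writes on the remote
rclone sync "$SNAP" s3crypt:my-borg-bucket \
    --checksum --no-update-modtime

# 3. Drop the snapshot once the upload finishes
btrfs subvolume delete "$SNAP"
```

Because the crypt remote encrypts file contents (and optionally names) client-side, nothing readable ever reaches AWS.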

[–] Yorick@piefed.social 4 points 1 week ago

I have two 500 GB SSDs in RAID 1 for important data, TrueNAS apps, etc., then 32 TB total in RAID-Z1 for large datasets that don't need speed (movies, TV shows, music, pictures, archives, ...).

If I have a complete NAS failure, a remote backup of the SSDs and boot drive (via weekly rsync to a friend's NAS) can be used in a new system, and my torrent app has the list and magnet links of all torrents stored on the SSD, so it can re-download them.

[–] Konraddo@lemmy.world 3 points 1 week ago

Similar to most responses: I back up whatever I created myself, not what was shared by someone else or downloaded from somewhere. I care about pictures that I took, documents, financial records, etc., which don't take up much space at all.

[–] quick_snail@feddit.nl 2 points 1 week ago

Tape or Backblaze.

[–] iamthetot@piefed.ca 2 points 1 week ago

The stuff that I actually care about is automatically backed up twice: once to a simple external drive on-site and once to the cloud. The cloud copy rotates between the most recent backups so it never takes up more than 1 TB compressed, while the local external drive keeps backups for much longer (something like 6 TB at a time).

[–] OR3X@lemmy.world 2 points 1 week ago (1 children)

So you have 56 TB of total storage, but how much of that is actually used? Take the amount of storage used and add 10-12% to that figure. Now build a second NAS (preferably off-site) with that amount of storage; that becomes your backup target. Take an initial backup (locally if possible, to speed up the process) and then use something like rsync to create incremental backups going forward. This is the method I've used, and so far it has worked out well. I target 10-12% more than the amount of used storage because my storage use grows reasonably slowly; if yours grows faster, you might want a larger buffer so you're not constantly adding drives to your backup target.

[–] NekoKoneko@lemmy.world 2 points 1 week ago (1 children)

Yeah, this is certainly a viable, "brute-force"-ish option. While I have 56 TB, I'm only using 26 or so, but I'd actually be hesitant to do anything less than a full-capacity mirror, because I do expect to eventually use it all (and more, by adding drives to Unraid).

I've balked because of cost and upkeep (maintaining the same capacity, additional chances for drive failure, and two separate sites I need physical access to, each with a high-bandwidth connection), so I admit I was hoping I was missing an easier option.

[–] OR3X@lemmy.world 5 points 1 week ago

I mean, if you want a full mirror, rolling your own backup target is going to be the cheapest option, even with the current high price of hardware. Other options are cloud storage or another medium like tape. Cloud storage is of course an ongoing cost, which rules it out for me, not to mention the privacy concerns. There are certain "cold storage" tiers from cloud hosts which are considerably cheaper, but they have limitations on how the data can be accessed and how often. The tape route is possible, but it's not really viable for home use due to the high upfront cost of the drives. Outside of that, backing up a subset of your storage, as others have suggested, is the only other option. Creating viable backups without breaking the bank is a challenge as old as computers, unfortunately.
