this post was submitted on 15 May 2024
63 points (97.0% liked)

Selfhosted


Hello, I'm relatively new to self-hosting and recently started using Unraid, which I find fantastic! I'm now considering upgrading my storage capacity by purchasing either an 8TB or 10TB hard drive. I'm exploring both new and used options to find the best deal. However, I've noticed that prices vary based on the specific category of hard drive (e.g., Seagate's IronWolf for NAS or FireCuda for gaming). I'm unsure about the significance of these different categories. Would using a gaming or surveillance hard drive impact the performance of my NAS setup?

Thanks for any tips and clarifications! 🌻

top 39 comments
[–] tburkhol@lemmy.world 17 points 6 months ago

I'm a big fan of Backblaze's failure statistics. https://www.backblaze.com/cloud-storage/resources/hard-drive-test-data

Annualized failure rates go from 0.3%/year to 3+%/year, even just looking at the drives they have million+ hours for, and I'd rather be at the lower end of that 10x range.
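
For anyone curious how those numbers are derived: each row in Backblaze's daily snapshot CSVs is one drive on one day, so the annualized failure rate is just failures divided by drive-years. A minimal sketch in Python (assuming the "model" and "failure" columns from their published data, and a hypothetical local drive_stats.csv):

```python
# Sketch: compute annualized failure rate (AFR) per model from
# Backblaze-style daily snapshot data. Column names ("model", "failure")
# match Backblaze's published CSVs; adjust if your copy differs.
import pandas as pd

df = pd.read_csv("drive_stats.csv")  # hypothetical local copy

# Each row is one drive on one day, so rows per model = drive-days.
stats = df.groupby("model").agg(
    drive_days=("failure", "size"),
    failures=("failure", "sum"),
)

# AFR = failures / (drive-days / 365), expressed as a percentage.
stats["afr_pct"] = stats["failures"] / (stats["drive_days"] / 365) * 100
print(stats.sort_values("afr_pct"))
```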

[–] BombOmOm@lemmy.world 14 points 6 months ago* (last edited 6 months ago) (2 children)

As you are looking for bulk data storage, the drive's speed isn't of too much concern. A 5400RPM drive is plenty.

If you are looking to put this drive into an array with other drives, make sure you get a CMR drive, as SMR drives can drop out of arrays due to controllers finding them unresponsive. If a drive does not list that it is CMR, it's best to assume it isn't. Seagate has a handy CMR chart, for example.

Additionally, if there are multiple spinning drives in the same enclosure, getting drives with vibration resistance is a good bonus. Most drives listed for NAS use will have this extra vibration resistance.
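
If you want to probe for SMR from Linux itself, here's a rough sketch - with the big caveat that it only catches host-aware/host-managed SMR; drive-managed SMR (the kind WD shipped in Reds) usually reports "none", so the vendor charts remain the reliable check:

```python
# Sketch: ask the kernel for each disk's zoned classification.
# Host-aware/host-managed SMR shows up here, but drive-managed SMR
# (the kind WD shipped in Red drives) usually reports "none", so a
# clean result is NOT proof of CMR - the vendor's charts are still
# the reliable check.
from pathlib import Path

for dev in sorted(Path("/sys/block").glob("sd*")):
    zoned = (dev / "queue" / "zoned").read_text().strip()
    print(f"{dev.name}: {zoned}")
```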

[–] meteokr@community.adiquaints.moe 4 points 6 months ago (1 children)

Is this for hardware RAID controllers, or have you experienced software RAID like LVM or ZFS exhibiting the same dropout behavior? I personally haven't, but it'd be nice to know what to look out for with future drives.

[–] BombOmOm@lemmy.world 6 points 6 months ago* (last edited 6 months ago) (2 children)

I have not personally experienced a dropout with an SMR drive. That is from the reporting I saw when WD was shipping SMR drives in their Red (NAS) lineup and people were having all kinds of issues with them. According to the article below, it sounds like ZFS has the worst time with them. WD also lost a class-action suit over marketing these as NAS drives while failing to disclose they were SMR drives (which don't work well in a NAS).

We want to be very clear: we agree with Seagate's Greg Belloni, who stated on the company's behalf that they "do not recommend SMR for NAS applications." At absolute best, SMR disks underperform significantly in comparison to CMR disks; at their worst, they can fall flat on their face so badly that they may be mistakenly detected as failed hardware. Source

[–] wreckedcarzz@lemmy.world 2 points 6 months ago* (last edited 6 months ago) (1 children)

I remember this - I had just bought my second drive for my nas (raid1, original drive cmr), and it was performing like shit. The next day, news broke about this bullshit and a couple days later, the suit was started. I was fucking pissed, the drives were still having trouble, with terabytes of irreplaceable data at risk while the two drives struggled to mirror. I got in contact with wd and after some back and forth bullshit, I straight-up threatened to join the class and blacklist wd for all my personal, family/friends, and clients' builds, if they didn't rma the drive immediately and send me a cmr replacement. I've been 100% wd for over 20 years, and I have decent reach as to what I recommend and buy for people.

They sent me a cmr drive via express shipping. I continue to buy wd drives (two more disks in that machine, an external backup, an internal desktop pcie raid0 nvme+card, an internal backup drive for my desktop, a backup ssd for one of my laptops...), but with much more scrutiny. I did not join the class, but it's still a black mark in my book. I've been thinking about giving Toshiba a whirl, their drive reviews look good. Maybe next upgrade...

[–] acockworkorange@mander.xyz 2 points 6 months ago (1 children)

Purely for my edification, why didn’t you join the class action? It’s not like you weren’t affected or even that they had any redeeming behavior.

[–] wreckedcarzz@lemmy.world 1 points 6 months ago

I got what I wanted (a proper cmr drive of the same capacity and speed) and I wasn't terribly interested in like $8 that would show up a year later. I just wanted to have my data safe on the correct hardware, and for cs to recognize and remedy the issue.

Now if the array had failed and I'd lost data (which from what I've read, I was very lucky to not have that happen), absolutely. But I was just angry from being bait-and-switched, and I'm 'old school' where loyalty still means something. That's the only time I've had issues with wd; I've had drives fail, and there has been no argument, no question, and it's pretty rare/special circumstances (1kW psu went kaboom, for example). I value cs that just helps the customer, not grilling them for every detail to weasel out of a claim. So yeah they burned some goodwill, but I still have dropped ~2k on drives since then.

Right, I did hear about that lawsuit way back when, I just didn't know of these types of consequences. Very appreciated, especially the sources.

[–] Sunny@slrpnk.net 3 points 6 months ago

Thanks for this, will read up and check out the links!

[–] emptiestplace@lemmy.ml 11 points 6 months ago (2 children)

Yeah, you don't want a surveillance drive. They are optimized for continuous writes, not random IO.

It's probably worth familiarizing yourself with the difference between CMR and SMR drives.

If you expect this to keep growing, it might make sense to switch to SAS now - then you can find some really cheap enterprise class drives on ebay that will perform a bit better in this type of configuration. You'd just need a cheap HBA (like a 9211-8i) and a couple breakout cables. You can use SATA drives with a SAS HBA, but not the other way around.

[–] Sunny@slrpnk.net 1 points 6 months ago (1 children)

Thanks for the tips! I doubt I'll be going higher than 50TB at max. Would SAS still be necessary for that, you reckon?

[–] emptiestplace@lemmy.ml 1 points 6 months ago (1 children)

Definitely isn't necessary, but if you search for '3.5" SAS lot' on ebay you might find all the drives you'll need to get to 50TB for the price of a couple new SATA drives.

[–] Sunny@slrpnk.net 2 points 6 months ago

I live in Scandinavia so eBay isn't much of an option for us really. I also prefer to buy from smaller, less corporate vendors; while more expensive, it's usually worth it due to better warranty and customer service. But thanks for the suggestion nonetheless!

[–] vegetaaaaaaa@lemmy.world 1 points 6 months ago

10000RPM SAS drives are noisy (and expensive), something to keep in mind. If I needed this kind of performance I would probably go full SSD.

[–] mbirth@lemmy.mbirth.uk 11 points 6 months ago

Apart from the SMR vs. CMR, if your NAS will run 24/7 you need to make sure to use 24/7 capable drives or find a way to flash a 24/7-specific firmware/setting to a consumer drive. Normal consumer drives (e.g. WD Green) tend to have a lot of energy saving features, e.g. they park the drive heads after a few seconds of inactivity. This isn’t a problem with normal use as an external drive that only gets connected once in a while. But in a 24/7 NAS the drive will wake up lots of times and park again, wake up, park again … and these cycles kill the drive pretty fast.

https://www.truenas.com/community/threads/hacking-wd-greens-and-reds-with-wdidle3-exe.18171/
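
If you want to check whether a drive is racking up park cycles, SMART attribute 193 (Load_Cycle_Count) is the one to watch. A rough sketch using smartctl's JSON output (assumes smartmontools 7.x is installed and root privileges; the exact JSON layout can vary by version):

```python
# Sketch: read Load_Cycle_Count (SMART attribute 193) via smartctl's
# JSON output. Requires smartmontools 7.x and root; the JSON layout
# here matches smartctl -j but may vary by version.
import json
import subprocess

def load_cycle_count(device: str) -> int | None:
    out = subprocess.run(
        ["smartctl", "-j", "-A", device],
        capture_output=True, text=True, check=False,
    )
    data = json.loads(out.stdout)
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr["id"] == 193:  # Load_Cycle_Count
            return attr["raw"]["value"]
    return None

print(load_cycle_count("/dev/sda"))  # watch this climb on a parking-happy drive
```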

[–] jo3shmoo@sh.itjust.works 8 points 6 months ago (2 children)

Lots of good advice here. I've got a bunch of older WD Reds still in service (from before the SMR BS). I've also had good luck shucking drives from external enclosures, as well as with decommissioned enterprise drives. If you go that route, depending on your enclosure or power supply you may run into issues with a live 3.3V SATA power pin causing drives to reboot. I've never had this issue on mine, but it can be fixed with a little Kapton tape or a modified SATA adapter. It's definitely cheaper to shuck or get used enterprise for capacity! I'm running at least a dozen shucked drives right now and they've been great for my needs.

Also, if you start reaching the point of going beyond the ports available on your motherboard, do yourself a favor and get a quality HBA card flashed in IT mode to connect your drives. The cheapo 4 port cards I originally tried would have random dropouts in Unraid from time to time. Once I got a good HBA it's been smooth sailing. It needs to be in IT mode to prevent hardware raid from kicking in so that Unraid can see the individual identifiers of the disks. You can flash it yourself or use an eBay seller like ThArtOfServer who will preflash them to IT mode.

Finally, be aware that expanding your array is a slippery slope. You start with 3 or 4 drives and next thing you know you have a rack and 15+ drive array.

[–] dmention7@lemm.ee 4 points 6 months ago (1 children)

On the power disable feature topic, I've only bought a few used enterprise drives from Goharddrive.com and Serverpartsdeals.com, but they both included a handy little SATA power adapter with each drive for exactly that reason.

The first desktop I installed them in worked just fine with the factory PSU cables, but when I upgraded I was left scratching my head for a few minutes until I remembered those adapters!

[–] ShepherdPie@midwest.social 2 points 6 months ago

I bought a small roll of kapton tape years ago and just use a sliver of it to cover the 3v3 pin.

[–] Sunny@slrpnk.net 1 points 6 months ago* (last edited 6 months ago)

Thanks for all the input and feedback - really appreciate it :) I still have quite a way to go to learn some of these terms. I have one PCIe card for expanding the number of SATA ports; whether it's a cheapo card or not I'm not entirely sure (got it secondhand via a package deal), but I've been using it for half a year now without any issues :)

[–] Max_P@lemmy.max-p.me 7 points 6 months ago

The concern for the specific disk technology is usually around the use case. For example, surveillance drives you expect to be able to write to continuously 24/7, but not at crazy high speeds, and maybe you can expect slow seek times or whatever. Gaming drives I would assume are disposable and just good value for storage size, as you can just redownload your Steam games. A NAS drive will be a little bit more expensive because it's assumed to be for backups and data storage.

That said in all cases if you use them with proper redundancy like RAIDZ or RAID1 (bleh) it's kind of whatever, you just replace them as they die. They'll all do the same, just not with quite the same performance profile.

Things you can check are seek times / latency, throughput both on sequential and random access, and estimated lifespan.
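
To make "sequential vs. random" concrete, here's a crude sketch that times both. It's illustration only - the page cache will flatter the numbers, and a real benchmark should use a dedicated tool like fio. The /dev/sdb path is a placeholder:

```python
# Crude sketch: compare sequential vs random read throughput on a raw
# device or large file (needs root for raw devices). Real benchmarking
# should use a tool like fio; this just illustrates the distinction.
import os
import random
import time

PATH = "/dev/sdb"        # placeholder; a multi-GB file works too
BLOCK = 1024 * 1024      # 1 MiB reads
COUNT = 256

def bench(seeks: bool) -> float:
    fd = os.open(PATH, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)
    start = time.monotonic()
    for i in range(COUNT):
        offset = random.randrange(0, size - BLOCK) if seeks else i * BLOCK
        os.pread(fd, BLOCK, offset)
    elapsed = time.monotonic() - start
    os.close(fd)
    return COUNT * BLOCK / elapsed / 1e6  # MB/s

print(f"sequential: {bench(False):.0f} MB/s, random: {bench(True):.0f} MB/s")
```

On a spinning disk the random number will be dramatically lower because every read pays a seek; on an SSD the two converge.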

I keep hearing good things about decommissioned HGST enterprise drives on eBay; they're really cheap.

[–] monkeyman512@lemmy.world 7 points 6 months ago

Other people have shared good info for gaining nuanced knowledge. I recommend starting with a simple fact: with enough time and/or the right conditions, all storage will fail. Design your setup with redundancy. I personally had to replace 2x 12TB drives this year. I have raidz3 (3 parity drives) and a hot spare, so I just bought cheap replacements from a reputable seller on eBay and consider it part of the cost of self-hosting.

[–] jet@hackertalks.com 7 points 6 months ago

It depends what your parameters are. For spinning hard disks, you want to look at total power cycles and mean time between failures (MTBF). Most enterprise drives have a very long MTBF.

In fact, for spinning hard disks, powering on is itself a likely failure mode; there are machines in enterprise data centers that, if powered off, have a good chance of not coming back on.

For solid-state disks, you want to look at MTBF, but also total write volume. Enterprise disks tend to have much, much greater write capacity.

All of these trade-offs cost money. If you're looking at archival, where you write the data only once, then you can go with a disk that has a low total write volume.
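
As a back-of-envelope illustration of both metrics (all figures below are made-up examples, not real drive specs):

```python
# Back-of-envelope sketch for the two wear metrics mentioned above.
# All figures are hypothetical examples, not real drive specs.

# Spinning disk: expected failures per year across a small fleet,
# assuming the (optimistic) exponential model behind MTBF ratings.
mtbf_hours = 1_200_000          # hypothetical enterprise rating
drives = 8
hours_per_year = 24 * 365
expected_failures = drives * hours_per_year / mtbf_hours
print(f"expected failures/year: {expected_failures:.2f}")

# SSD: years of life left at a given write rate, from the TBW rating.
tbw = 600                        # hypothetical rating, terabytes written
writes_tb_per_day = 0.1          # ~100 GB/day
print(f"SSD endurance: {tbw / writes_tb_per_day / 365:.1f} years")
```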

[–] NeoNachtwaechter@lemmy.world 6 points 6 months ago* (last edited 6 months ago) (1 children)

Read about the specific features of the "WD Red" drives. There are some pretty good articles out there, and you are going to learn a whole lot regarding your question.

I've got a bunch of them in my private server. I didn't know all these details when I bought them, LOL, but they do a good job - reliable and silent for 6 years and counting.

[–] Sunny@slrpnk.net 2 points 6 months ago

Thanks for the tip 🌻

[–] Exulion@lemmy.world 5 points 6 months ago (1 children)

Also remember that your parity drive has to be as large as or larger than the biggest drive in your array. If you buy a 10TB but don't have another 10TB, you must use the 10TB as parity.
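
A trivial sketch of that rule, with hypothetical sizes:

```python
# Sketch of the Unraid parity rule: the parity drive must be at least
# as large as the largest data drive. Sizes in TB, hypothetical array.
drives_tb = [4, 8, 10]

parity = max(drives_tb)         # the 10TB has to be parity
data = sorted(drives_tb)[:-1]   # the rest hold data
print(f"parity: {parity}TB, usable: {sum(data)}TB")  # parity: 10TB, usable: 12TB
```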

[–] Sunny@slrpnk.net 3 points 6 months ago

That is indeed my current issue haha, I was not aware of that when I got into this; so currently I have 10TB in parity and only use 3TB for storage... So I wanna get the most out of that parity by buying another disk.

[–] blackstrat@lemmy.fwgx.uk 5 points 6 months ago (1 children)

I highly recommend watching this guy's videos on his analysis of the Backblaze data: https://www.youtube.com/watch?v=IgJ6YolLxYE&t=1

And a comparison of the different WD drive colours, which might not be what you expect: https://www.youtube.com/watch?v=QDyqNry_mDo&t=2

[–] PipedLinkBot@feddit.rocks 2 points 6 months ago

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=IgJ6YolLxYE&t=1

https://www.piped.video/watch?v=QDyqNry_mDo&t=2

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] Decronym@lemmy.decronym.xyz 4 points 6 months ago* (last edited 6 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

| Fewer Letters | More Letters |
|---|---|
| NAS | Network-Attached Storage |
| PSU | Power Supply Unit |
| SATA | Serial AT Attachment interface for mass storage |
| SSD | Solid State Drive mass storage |
| ZFS | Solaris/Linux filesystem focusing on data integrity |

[Thread #750 for this sub, first seen 15th May 2024, 23:25] [FAQ] [Full list] [Contact] [Source code]

[–] avidamoeba@lemmy.ca 3 points 6 months ago (2 children)

Yes, there are differences, but you're running a redundant array of independent disks precisely so you don't have to care about those differences.

[–] ShepherdPie@midwest.social 3 points 6 months ago (1 children)

I think this really depends on what you're storing. I have a large media collection and doing full redundancy would be extremely wasteful, but it's fairly easy to repopulate things if something goes awry. If it's irreplaceable or smaller files, redundancy definitely makes sense.

[–] avidamoeba@lemmy.ca 2 points 6 months ago* (last edited 6 months ago)

Sure, but technically non-redundant schemes also fall under the category, e.g. RAID0, multiple non-redundant ZFS vdevs, etc. Those would reduce the performance impact of any single disk.

[–] Sunny@slrpnk.net 1 points 6 months ago

Wasn't sure if that mattered or not in the case of Unraid. Had a feeling it only counted for the size of the disk. Just trying to make sure I'm not buying an expensive 10TB that I won't be able to use :P

[–] Perrin42@fedia.io 3 points 6 months ago
[–] pe1uca@lemmy.pe1uca.dev 2 points 6 months ago (1 children)

I've read advice against buying used storage unless you don't mind a higher risk of losing the data on it.

[–] Sunny@slrpnk.net 1 points 6 months ago

I'll buy used if it has been properly checked by another vendor before being sold again. Otherwise I won't.

[–] TheHolm@aussie.zone 2 points 6 months ago

Yes, it will. Whether it makes any difference for you depends on what you're doing. I would not use a surveillance drive in a server; they are way too specific. Outside of that, prices are pretty much the same per TB/(warranty year) across the board.

I did some extensive research on the topic a couple of years back; you can find it here: https://blog.holms.place/2022/05/01/hdd-storage-cost-comparation-may-2022.html. I don't think the situation has changed much since then. Price per TB/year is nearly constant past the 8TB size.

Also consider looking into re-certified drives, or even refurbished drives; you may save heaps on them. But it depends on how much you value your data, how much redundancy is in your storage pool, and how good your backup strategy is.
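
For anyone who wants to run that comparison themselves, a tiny sketch of the price-per-TB-per-warranty-year metric (prices and warranty terms below are made-up examples, not current market data):

```python
# Sketch of the comparison metric from the comment above:
# price per terabyte per warranty year. All figures are
# made-up examples, not current market data.
drives = [
    ("8TB NAS, 3yr warranty",  190, 8,  3),
    ("10TB NAS, 5yr warranty", 280, 10, 5),
    ("10TB recertified, 1yr",  150, 10, 1),
]

for name, price, tb, years in drives:
    print(f"{name}: {price / (tb * years):.2f} per TB-year")
```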

[–] BuoyantCitrus@lemmy.ca 2 points 6 months ago (1 children)

One thing that would be useful to understand is the distinction between CMR and SMR.

[–] Sunny@slrpnk.net 2 points 6 months ago

Thanks, have not heard these terms before so will be reading up :)