30/32 = 0.938
That’s less than a single terabyte. I have a microSD card bigger than that!
;)
I can't wait for datacenters to decommission these so I can actually afford an array of them on the second-hand market.
Exactly, my NAS is currently made up of decommissioned 18TB Exos. Great deal, and I can usually still get them RMA'd the handful of times they fail.
Nice, where do you get yours?
radarr goes brrrrrr
sonarr goes brrrrrr…
barrrr?
...dum tss!
The two models, [...] each offer a minimum of 3TB per disk
Huh? The hell is this supposed to mean? Are they talking about the internal platters?
My first HDD had a capacity of 42MB. Still a little way to go to a factor of 10⁶.
My first HD was a 20MB MFM drive :). Be right back, need some “Just For Men” for my beard (kidding, I’m proud of it).
So was mine, but the controller thought it was 10MB, so I had to load a device driver to access the full size.
Was fine until a friend defragged it and the driver moved out of the first 10MB. Thereafter I had to keep a 360KB 5¼" drive to boot from.
That was in an XT.
It honestly could have been a 10MB, I don't even remember. The only thing I really do remember is thinking it was interesting how it used a floppy-style cable plus a second cable, and how the sound it made was used in every '90s and early 2000s TV show and movie as generic computer noise :)
You have me beat on the XT; mine was a 286, although it did replace an Apple IIe (granted, both were acquired several years after they were already considered junk in the 386 era).
Was fine until a friend defragged it and the driver moved out of the first 10MB
Oh noooo 😭
This is for cold and archival storage, right?
I couldn't imagine seek times on any disk that large. Or rebuild times... yikes.
Random access times are probably similar to smaller drives, but writing the whole drive is going to be slow.
Definitely not for either of those. Can get way better density from magnetic tape.
They say they got the increased capacity by increasing storage density, so the head shouldn't have to move much further to read data.
You'll get further putting a cache drive in front of your HDD regardless, so it's vaguely moot.
For a full 32TB at the max sustained speed (275MB/s), it's 32ish hours to transfer the full amount, or about 36 if you assume 250MB/s for the whole run. Probably optimistic; CPU overhead could slow that down in a rebuild. That said, in a RAID5 of 5 disks that's an aggregate transfer speed of about 1GB/s even if you assume not getting close to the max rate. For a small business or home NAS that would be plenty unless you are running faster than 10Gbit Ethernet.
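A quick back-of-the-envelope sketch of that math, if you want to play with the numbers yourself (the 275MB/s and 250MB/s figures are just the assumed sustained rates from above, not measured rebuild speeds):

```python
# Rough transfer-time math for filling or reading a full 32TB drive.
# Sustained rates are the assumptions from the comment above.

CAPACITY_MB = 32 * 1_000_000  # 32TB in MB (decimal units, as drive makers count)

for speed_mb_s in (275, 250):
    hours = CAPACITY_MB / speed_mb_s / 3600
    print(f"{speed_mb_s} MB/s -> {hours:.1f} hours")
# 275 MB/s -> 32.3 hours
# 250 MB/s -> 35.6 hours

# A 5-disk RAID5 stripes data across 4 data disks, so sequential
# throughput is roughly 4x a single drive: 4 * 250MB/s = 1GB/s.
print(f"RAID5 aggregate: {4 * 250 / 1000:.0f} GB/s")
```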
I thought I read somewhere that larger drives had a higher chance of failure. Quick look around and that seems to be untrue relative to newer drives.
One problem is that larger drives take longer to rebuild the RAID array when one drive needs replacing. You're sitting there for days hoping that no other drive fails while the process runs. Current SATA and SAS standards already outpace what spinning platters can deliver, so making the interface even faster won't help anything.
There was some debate among storage engineers over whether they even want drives bigger than 20TB; the extra density may not be worth the added risk of data loss during a rebuild. That will probably be true until SSDs get closer to the price per TB of spinning platters (not necessarily the same; possibly more like double the price).
If you're writing 100 MB/s, it'll still take 300,000 seconds to write 30TB. 300,000 seconds is 5,000 minutes, or 83.3 hours, or about 3.5 days. In some contexts, that can be considered a long time to be exposed to risk of some other hardware failure.
Yep. It's a little nerve-wracking when I replace a RAID drive in our NAS, but I do it before there's a problem with a drive. I can mount the old one back in, or try another new drive. I've only ever had one new drive DOA; here's hoping those stay few and far between.
What happened to using different kinds of drives in every mirrored pair? Not best practice any more? I've had Seagates fail one after another and the RAID was intact because I paired them with WD.
You can, but you might still be sweating bullets while waiting for the rebuild to finish.
Just one would be a great backup, but I’m not ready to run a server with 30TB drives.
I'm here for it. The 8-disk server is normally a great form factor for size, data density, and redundancy with RAID6/RAIDZ2.
This would net around 180TB in that form factor. That would go a long way for a long while.
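For reference, a minimal sketch of that capacity math (assuming RAIDZ2/RAID6 reserves two disks' worth of parity, and ignoring filesystem overhead and TB-vs-TiB differences):

```python
# Usable capacity of an 8-bay RAID6/RAIDZ2 pool of 30TB drives:
# two disks' worth of space goes to parity.
disks, parity, size_tb = 8, 2, 30
print(f"{(disks - parity) * size_tb} TB usable")  # -> 180 TB
```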