Found the Canadian.
It might also save it from shit controllers and cables which ECC can't help with. (It has for me)
Unless you need RAID 5/6, which doesn’t work well on btrfs
Yes. Because they're already using some sort of parity RAID, I assume they'd want parity RAID in ZFS/Btrfs too, and as you said, that's not an option for Btrfs. So LVMRAID + Btrfs is the alternative. LVMRAID because it's simpler to use than mdraid + LVM, and the implementation is still mdraid under the covers.
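For what it's worth, here's a rough dry-run sketch of what that layout could look like. All device, VG and LV names are placeholders I made up, not anything from the thread; adjust for your hardware and review before running anything.

```python
#!/usr/bin/env python3
"""Dry-run sketch of an LVMRAID + Btrfs layout.

Prints the commands instead of running them; review and run by hand.
Device, VG and LV names are placeholders.
"""

disks = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # hypothetical members

commands = [
    # Put each disk under LVM.
    f"pvcreate {' '.join(disks)}",
    # One volume group holding all the members.
    f"vgcreate tank {' '.join(disks)}",
    # Parity RAID at the LVM layer; this drives the in-kernel mdraid code.
    # With 4 disks, raid5 means 3 data stripes + 1 parity.
    "lvcreate --type raid5 --stripes 3 -l 100%FREE -n data tank",
    # Btrfs on top for checksumming/snapshots, without using btrfs raid5/6.
    "mkfs.btrfs /dev/tank/data",
    "mount /dev/tank/data /mnt/data",
]

for cmd in commands:
    print(cmd)
```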
It is marketing and it does have a meaningful connection to the litho features, but the connection is not absolute. For example, Samsung's 5nm is noticeably more power hungry than TSMC's 5nm.
And you probably know that sync writes will shred NAND while async writes are not that bad.
This doesn't make sense. SSD controllers have been able to handle any write amplification under any load since SandForce 2.
Also, most of the argument around speed doesn't make sense, other than DC-grade SSDs being expected to be faster under sustained random loads. But we know how fast consumer SSDs are. We know their sequential and random performance, including sustained performance under constant load. There are plenty of benchmarks out there for most popular models. They'll be as fast as those benchmarks on average. If that's enough for the person's use case, it's enough. And they'll handle as many TB of writes as advertised, and the amount of writes can be monitored through SMART.
And why would ZFS be any different from any other similar FS/storage system when it comes to random writes? I'm not aware of ZFS generating more IO than needed. If that were the case, it would manifest as lower performance compared to other similar systems, when in fact ZFS is often faster. I think SSD performance characteristics are independent of ZFS.
Also OP is talking about HDDs, so I'm not even sure where the ZFS-on-SSDs discussion is coming from.
Doesn't uBlock Origin already have a Manifest V3 version of the extension?
To add a concrete example to this, I worked at a bank during a migration from a VMware-operated private cloud (own data center) to OpenStack. Over several years, the OpenStack cloud got designed, operationalised, tested and made ready for production. In the following years some workloads moved to OpenStack. Most didn't. 6 years after the beginning of the whole hullabaloo the bank cancelled the migration program and decided they'll keep the VMware infrastructure intact and upgrade it. They began phasing out OpenStack. If you're in North America, you know this bank. Broadcom can probably extract a 1000% price increase and still be running that DC in a decade.
Why would MS not use this opportunity to also hike the prices of their equivalent offerings? A 1000% increase leaves them a lot of room to raise prices while still being cheaper.
Not sure where you're getting that. Been running ZFS for 5 years now on bottom-of-the-barrel consumer drives - shucked drives and old drives. I have used 7 shucked drives total. One died during a physical move. The remaining 6 are still in use in my primary server. Oh and the speed is superb. The current RAIDz2, made up of the 6 shucked drives plus 2 IronWolfs, does 1.3 GB/s sequential reads and thousands of 4K write IOPS. Oh and this is all happening over USB, in 2x 4-bay USB DAS enclosures.
That doesn't sound right. Also, random writes don't kill SSDs. Total writes do, and you can see how much has been written to an SSD in its SMART values. I've used SSDs for swap memory for years without any breaking. Heavily used swap for running VMs and software builds. Their total-bytes-written counters increased steadily but never reached the rated limit, and the drives didn't die despite the sustained random write load. One was an Intel MacBook onboard SSD. Another was a random Toshiba OEM NVMe. Another was a Samsung OEM NVMe.
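To make the SMART point concrete, here's a minimal sketch of pulling the written-bytes counter with smartctl. It assumes smartmontools is installed and you run it as root; the 512-byte-sector assumption for SATA's Total_LBAs_Written is vendor-specific, so treat the SATA number as approximate.

```python
#!/usr/bin/env python3
"""Rough check of how much has been written to a drive via SMART.

Assumes smartctl (smartmontools) is installed and run as root.
NVMe "Data Units Written" is defined as units of 512,000 bytes; the
SATA Total_LBAs_Written raw value is vendor-specific (often 512-byte
sectors, but check your model).
"""

import re
import subprocess
import sys


def smart_output(device: str) -> str:
    # `smartctl -A` prints the SMART attributes / NVMe health log.
    return subprocess.run(
        ["smartctl", "-A", device], capture_output=True, text=True, check=False
    ).stdout


def written_terabytes(device: str) -> float | None:
    out = smart_output(device)

    # NVMe: "Data Units Written:    12,345,678 [6.32 TB]"
    m = re.search(r"Data Units Written:\s+([\d,]+)", out)
    if m:
        units = int(m.group(1).replace(",", ""))
        return units * 512_000 / 1e12

    # SATA: attribute 241 Total_LBAs_Written, raw value in the last column.
    m = re.search(r"Total_LBAs_Written.*\s(\d+)\s*$", out, re.MULTILINE)
    if m:
        lbas = int(m.group(1))
        return lbas * 512 / 1e12  # assuming 512-byte sectors

    return None


if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/nvme0"
    tb = written_terabytes(dev)
    print(f"{dev}: ~{tb:.2f} TB written" if tb is not None else f"{dev}: counter not found")
```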
I was pretty surprised to learn that Interac e-transfer or equivalent isn't commonplace everywhere.