avidamoeba

joined 1 year ago
[–] avidamoeba@lemmy.ca 3 points 9 months ago

Can't be Gamble since it's trying to reduce losses, not incur them. 🤭

[–] avidamoeba@lemmy.ca 1 points 9 months ago* (last edited 9 months ago)

That's probably not going over mmWave but over the sub-mmWave spectrum we've previously used for 4G. I've hit >400 Mbps on good ol' LTE back in 2018 on AWS.

> The most widely used form of 5G, sub-6 GHz 5G (mid-band), is capable of delivering data rates ranging from 10 to 1,000 megabits per second (Mbps) [1]

According to this, 5G should be able to do 1 Gbps on sub-mmWave.

[1] https://en.wikipedia.org/wiki/5G?wprov=sfla1

[–] avidamoeba@lemmy.ca 0 points 9 months ago (3 children)
[–] avidamoeba@lemmy.ca 4 points 9 months ago

I've only bootstrapped it once for testing. I used the Docker setup and it was trivial.

[–] avidamoeba@lemmy.ca 2 points 9 months ago

BTW, you can somewhat mitigate the spyware by using Shelter.

[–] avidamoeba@lemmy.ca 7 points 9 months ago* (last edited 9 months ago) (2 children)

mmWave 5G can have much worse signal strength than 5G running in the 700-2400 MHz spectrum. Its range is much shorter and it's attenuated by things in the air like rain. With that said, I don't know whether one could get only mmWave reception or whether mmWave can only be used in addition to 700-2400 MHz for speed augmentation. 🤔

[–] avidamoeba@lemmy.ca 1 points 9 months ago* (last edited 9 months ago)

Hahaha. Good one!

Well, not quite. More like "USB-connected drives in RAID can be less reliable than internal ones and software can deal with that. ZFS makes it easier than LVM+mdraid."

The downside of LVM+mdraid in my experience is that it needs more commands typed in to repair an array if something's gone wrong. It probably doesn't break much more than ZFS would under the same hardware conditions and it probably can recover from the same conditions ZFS could. USB drives can present more failure modes than internal ones, but one of the points of RAID is to mitigate hardware failures. So I'm treating USB drives as just shittier drives whose shittiness the software should be able to hide. So far that has been borne out in practice in my anecdata. I've used both LVMRAID (LVM + built-in mdraid) and ZFS with questionable USB drives and both have handled them without data loss and with rare downtime, less than once a year. ZFS requires less attention.

With all of that said, ZFS does of course provide data integrity checking and correction, which is a significant plus over LVM+mdraid. It's already saved me from data corruption caused by RAM I had no idea had a problem. RAM that passed Memtest86+'s first pass. Little did I know that it fails on subsequent passes... Yes, the first and subsequent passes are different. So I'd use ZFS with USB or internal disks whenever I have the choice. 😂
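
To give a rough idea of the difference, replacing a dead disk looks something like this in each case (pool, VG and device names here are made up, not from my actual setup):

```
# ZFS: one command, the resilver starts on its own
zpool replace tank /dev/sdd /dev/sde

# LVMRAID: a few more steps to swap the failed PV and rebuild
pvcreate /dev/sde
vgextend vg0 /dev/sde
lvconvert --repair vg0/raidlv      # rebuilds the RAID LV onto the new PV
vgreduce --removemissing vg0       # drop the dead PV from the volume group
```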

[–] avidamoeba@lemmy.ca 2 points 9 months ago

Apparently I've heatsinked the MyBook as well. This is what that looks like without the cover. The heatsink is from a Raspberry Pi kit. If I were you, knowing what I know, I'd just slap heatsinks on the 12T disks preventatively instead of testing them.

[–] avidamoeba@lemmy.ca 2 points 9 months ago* (last edited 9 months ago) (2 children)

If there's an offline backup, they could create a degraded RAIDz1 with the 2 12T disks, copy the data from the 6Ts over, create the 12T linear volume out of the 6Ts, add it to the degraded RAIDz1 and wait for it to resilver. If no hardware fails and they don't punch in a wrong keystroke, it should work.
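
One way to get the degraded RAIDz1 from only two real disks is a sparse file standing in for the third member. Roughly, with made-up device names (sdb/sdc = 12T, sdd/sde = 6T) and the exact sizes double checked, since the combined 6T volume has to be at least as large as the member it replaces:

```
truncate -s 12T /root/placeholder.img                  # sparse stand-in for the missing 12T member
zpool create -f tank raidz1 /dev/sdb /dev/sdc /root/placeholder.img
zpool offline tank /root/placeholder.img               # pool is now degraded but usable
# ...copy everything off the 6T disks into the pool and verify it...
pvcreate /dev/sdd /dev/sde
vgcreate vg6 /dev/sdd /dev/sde
lvcreate -l 100%FREE -n twelve vg6                     # linear ~12T volume from the two 6T disks
zpool replace tank /root/placeholder.img /dev/vg6/twelve   # resilvers onto the combined volume
```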

> Most USB enclosures and adapters aren't designed for 24/7 connectivity

This is true. I'm using 8 external USB drives in two RAIDz1s and I had to ensure their controllers don't overheat. For example, I've had 4 WD Elements standing vertically, stacked next to each other. The inner two's controllers would overheat during the initial data transfer and disconnect. Spacing them apart resolved this for my ambient environment. In the other pool, I had a new WD Elements overheat on its own, without any extra ambient heat. I resolved that by adhering a small heatsink to the SATA-USB controller in the enclosure. I also drilled a hole in the enclosure immediately above the heatsink for better ventilation. I later applied this mod to another of the drives of the same model.

> Mixing USB drives with any ZFS pool is a recipe for headache IMO.

Crucially, however, between the issues above and accidental cable unplugging, ZFS hasn't lost any data or caused any undue headache. If anything, getting back to a working state has been easier on some occasions, as it would automatically detect that a missing drive is back, resilver if needed and go on its merry way.

The headache I've observed most of the time has been of the sort: a message that the zpool is not healthy, a drive has shown errors and/or gone missing, resolve the drive issues if any, reconnect the drive, no affected applications, no downtime. The much less often observed issue, probably twice over the last 5 years, has been of the sort: applications are down, the zpool isn't reading/writing or is missing, more than one drive is disconnected due to a cable snafu, shut down, reconnect the drives, boot, ZFS detects the drives and proceeds as if nothing happened. All in, the number of occasions on which I had to manipulate ZFS over the last 5 years is around 5, most of them during the initial data transfer.

The previous LVM + mdraid setup I had required more work to get back in shape after a drive was kicked out for one reason or another. So yes, USB can definitely present issues that you wouldn't see in an internal application, especially if some of your USB enclosure controllers are shit, but in my anecdotal experience, ZFS is very capable of handling those gracefully and with less manual intervention than the standard Linux solutions. If anything, ZFS has been less sensitive to hardware problems.
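
For what it's worth, the manual part of that routine usually boils down to a couple of commands like these (pool and device names made up):

```
zpool status -x          # quick health summary, or which pool is degraded
zpool status tank        # which device is FAULTED/UNAVAIL and its error counts
# ...reseat the cable / fix power / let the drive reappear...
zpool online tank sdX    # if the device doesn't come back on its own
zpool clear tank         # reset the error counters once everything looks sane
```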

[–] avidamoeba@lemmy.ca 4 points 9 months ago* (last edited 9 months ago) (1 children)

ZFS is fine with external disks and indirect access to the hardware, in my limited experience. Performance won't be as good as it could be, but data integrity shouldn't be a problem. If I didn't need the space and this was my primary storage array, I'd probably opt for the increased reliability of 2 mirror vdevs. I've done something similar to what OP is suggesting with LVM combining multiple disks on my off-site backup though. I combined 1T+3T+4T disks into a single 8TB volume. Deliciously bastardized, not a single integrity issue, and no hardware failures either over the several years it ran.
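
That Franken-volume was just plain LVM, roughly like this (device names made up):

```
pvcreate /dev/sdb /dev/sdc /dev/sdd           # the 1T, 3T and 4T disks
vgcreate backupvg /dev/sdb /dev/sdc /dev/sdd
lvcreate -l 100%FREE -n backuplv backupvg     # one ~8T linear volume spanning all three
mkfs.ext4 /dev/backupvg/backuplv              # or hand the LV to ZFS as a single-disk pool
```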

[–] avidamoeba@lemmy.ca 2 points 9 months ago* (last edited 9 months ago) (2 children)

The simplest way to do exactly what you want is to use LVM to create a linear volume (equivalent to JBOD) from the two 6TB disks, then create a zpool with a single RAIDz1 vdev made of that volume along with the other two 12TB disks. You could use mdraid to do a RAID0 as you suggested too. The result would be similar. In fact that could be easier for you if you already know how to use mdraid and don't know LVM.
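
With LVM it would look roughly like this (made-up device names, sdb/sdc = 6TB, sdd/sde = 12TB):

```
pvcreate /dev/sdb /dev/sdc
vgcreate vg6 /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n twelve vg6          # linear ~12TB volume from the two 6TB disks
zpool create tank raidz1 /dev/sdd /dev/sde /dev/vg6/twelve   # may need -f if the sizes differ slightly
```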

You could also do it all with ZFS, albeit with more lost space. You could create a zpool with 2 vdevs: one a 6TB mirror comprised of the 2 6TB drives, the other a 12TB mirror comprised of the 2 12TB drives. The redundancy in ZFS is at the vdev level. A zpool contains one or more vdevs and combines their space like a JBOD. You can mix and match the size and type of the vdevs. You can have mirrors with RAIDz, just mirrors, just RAIDz, etc. My suggestion of two mirrors, 6TB and 12TB, results in 18TB of usable space. This is straightforward, easy to manage and easy to expand. You just add another vdev to the pool with whatever topology you like. If you want to maximize the space with what you've got, you can do your idea instead. It's got a bit more setup and a bit less redundancy but it'll work fine.
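
That version is about as simple as it gets, and growing it later is one more command (again, made-up device names):

```
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde   # 6TB mirror + 12TB mirror = ~18TB usable
zpool add tank mirror /dev/sdf /dev/sdg                               # later: grow the pool by adding another vdev
```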

EDIT: Ensure your external drives work reliably under load for extended periods! Either load test them with something and/or watch for errors while transferring data to the new zpool. If you see an error, check dmesg for anything related to a USB drive. If you encounter such a problem it might be a controller bug, controller overheating or a bad cable. Controller here refers to the enclosure's SATA-to-USB bridge. I have 5x WD Elements, one WD MyBook and 2x NexStar 3.1. They're all using ASMedia controllers. The WD Elements are prone to overheating if there's no appropriate ventilation. I've had to adhere tiny heatsinks to two of them in order to resolve overheating, as they operate at a higher ambient temperature. Crucially, all of this overheating has occurred under load. Without loading them, it all looks fine and dandy. ZFS did not lose any data when any of this happened or as I addressed it.
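
A crude way to do that load test is sustained sequential reads while watching the kernel log, something like this (device name made up; the read is non-destructive but will keep the drive and its USB bridge busy for hours):

```
# terminal 1: hammer the drive with sustained reads
dd if=/dev/sdX of=/dev/null bs=1M status=progress

# terminal 2: watch for USB resets, disconnects or I/O errors while it runs
dmesg --follow | grep -iE 'usb|reset|offline|i/o error'
```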

[–] avidamoeba@lemmy.ca 7 points 9 months ago

Subscribe for a monthly donation and take this shitpost all the way to the top! 🔝
