Virtualization generally isn't required for Docker on Linux, unless a container tries to use KVM or something like that. Also, Docker already exists in Ubuntu's repos as the docker.io package, so that's the easiest place to get it from (apt install docker.io).
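If it helps, assuming a stock Ubuntu install, it's a couple of commands end to end (hello-world is Docker's standard smoke-test image):

```sh
# docker.io comes straight from the Ubuntu repos; containers share the
# host kernel, so no VM or virtualization extensions are involved
sudo apt install docker.io
# quick smoke test (sudo needed unless your user is in the docker group)
sudo docker run --rm hello-world
```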
Due to risk of failure, or risk of data corruption because the mirror can't tell which drive is right when there's a difference?
Already done. I'm just trying to exhaust all the hypotheses I have in case I stumble upon a durable workaround that is applicable for others and cheaper.
I've been trying. Nothing has worked so far. I've got a few more cables/permutations to try.
Get more drives, run higher redundancy 💪
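In ZFS terms that's a one-liner sketch; "tank" and the device paths below are placeholders for illustration:

```sh
# Grow an existing 2-way mirror into a 3-way mirror by attaching a third
# drive to a current member ("tank" and the paths are placeholders)
sudo zpool attach tank /dev/disk/by-id/ata-EXISTING_MEMBER /dev/disk/by-id/ata-NEW_DRIVE
sudo zpool status tank   # wait for the resilver to complete
```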
You're right, the correct term is Gb/s or Gbps. Edited.
The thing is that I'm already at the last couple of leaves in the investigation tree, and I'm not willing to change anything above the USB driver level. That's why there isn't much point in getting people to spin their wheels on solutions I can't or won't apply. If I were completely unable to get the data corruption and disconnects under control, I'd trash the system and replace it with Intel. Fortunately, a PCIe add-in USB controller seems to work well, so I avoided the most costly solution.

At this point I don't actually need to get the motherboard ports to work well, but I'm curious to go down the signalling rabbit hole because I'm not the only one having this problem, and it doesn't affect just this one use case. If I find a solution like an in-line 5 Gbps USB hub (reduces the data rate), using USB-C ports instead of USB-A (reduces noise), or using this kind of cable instead of that kind, I could throw it out as a cheaper workaround in this ZFS thread and elsewhere. The PCIe cards work but aren't cheap.
Unfortunately it won't, because the transfers are happening between ZFS and the hardware storing the data, so I can't control the data rate at the application level (there are many different applications) or even at the ZFS level. This is why in this particular case I'm stuck with a potential hardware-related workaround. I mean, I could do something stupid like configuring a suboptimal recordsize in ZFS, but I'd prefer to get the hardware to stop losing bits rather than hope ZFS catches everything. Decreasing data rates is a generally acceptable strategy for dealing with signalling issues, if the decreased rate is usable for the application at hand. In my case it is.
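For reference, the "stupid" version is literally one setting; tank/data is a placeholder dataset, and the change only affects newly written blocks:

```sh
# Cap the record size so each I/O burst on the wire is smaller
# ("tank/data" is a placeholder; existing blocks keep their old size)
sudo zfs set recordsize=16K tank/data
sudo zfs get recordsize tank/data
```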
I am trying to transfer data via USB at high speed without data corruption, silent resets, and occasional device disconnects. Those things happen because the USB controllers on my motherboard, made by AMD with some help from ASMedia, don't function correctly at the speed they advertise. So given the problem, the right solution is a firmware or hardware fix for these USB controllers; however, that's unlikely to happen. So I'm trying to find a workaround. I already have one (a PCIe add-in card), but now I'm also testing running the bad controllers at half speed, which seems successful so far, though I was wondering if there's a way to do it in software. I'm currently bottlenecking the links by using 5 Gbps hubs between the controllers and the devices.
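You can confirm the hub is actually capping the negotiated link from the host side with lsusb (part of usbutils):

```sh
# lsusb -t prints the bus topology with negotiated speeds:
# 10000M = 10 Gbps (Gen 2), 5000M = 5 Gbps (Gen 1, i.e. behind the hub)
lsusb -t
```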
Yup. A USB host controller. Specifically AMD Bixby.
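It's visible on the PCI side; the exact vendor:device IDs will vary by board, but something like:

```sh
# List the XHCI host controllers with their [vendor:device] IDs
lspci -nn | grep -i usb
```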
Great question. In short, garbagey AMD USB controllers. I recently switched to a newer AMD board and have been hit with the same issues faced by these poor sods. I've been testing over the last week: different combinations of ports, cables, loads, and add-in PCIe USB controllers. The add-in cards seem to behave well, which is one way the folks from that thread solved their problems; the other was switching to Intel-based systems. Yesterday, however, I was watching an intro to USB redrivers by TI, where they discussed various signalling issues that can occur and how redrivers help. That led me to form the hypothesis that what I'm experiencing might be signalling-related, e.g. that the combination of controllers/ports/cables simply can't handle 10 Gbps. There might be noise from some of those devices, or from surrounding ones, that causes signal loss when operating at 10 Gbps, a speed this setup does actually negotiate. To test that, I tried placing the DAS boxes behind a 5 Gbps hub plugged into a port that has previously shown a failure. So far it's stable. This is why I was wondering whether there's some magic in the kernel that could allow configuring 10 Gbps ports to operate at 5 Gbps.
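As far as I can tell there's no supported knob for that: the per-device speed attribute in sysfs is read-only, so software can only observe the negotiated rate, e.g.:

```sh
# Print the negotiated speed (in Mbps) of every connected USB device;
# these sysfs attributes report state only and can't force a lower rate
for d in /sys/bus/usb/devices/*/speed; do
  printf '%s: %s Mbps\n' "${d%/speed}" "$(cat "$d")"
done
```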
I'd advise against using Docker from docker.com's repo on Ubuntu unless you need to. Ubuntu LTS includes a fairly recent Docker package starting with 22.04. By using that, you eliminate the chance of breakage due to a defective or incompatible Docker update. You also get the security support for it that comes with Ubuntu. The package is docker.io.
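You can check which version the Ubuntu archive would give you before installing:

```sh
# Show the candidate version and repo source for Ubuntu's Docker package
apt policy docker.io
```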