Almost certainly, the bottleneck is one or both of:
1. The platters can't simply spin at full speed reading a sequential stream of bytes from one drive and writing it to the other; they periodically have to seek around to different places, stitching the file's byte stream together from discontiguous chunks or reading and writing metadata. The platter's seek latency will overshadow any tiny delays caused by memory or the CPU.
2. The copying algorithm is doing something in a fashion that causes delays (e.g. reading each file individually and waiting until it can work out whether it needs to send anything for that file before starting I/O for the next one).
Idk if you can do anything about #1, but in similar situations I've had good mileage preventing #2 with "tar cj /somewhere | ssh me@host 'cat | tar xj'" (roughly speaking; you may have to adjust things to make it actually work, and on very fast networks it may be better to skip the -j, but that's the rough idea).
Edit: Oh, I misread, is this local? I saw rsync and just thought it was a network transfer. What kind of speeds are you getting? Does doing "tar c /original | tar x" or something like that (rough sketch below) work any faster?
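For what it's worth, a minimal sketch of both variants; the paths and host below are placeholders and would need adjusting for a real run:

```bash
# Network copy: tar the tree to stdout, compress with bzip2, unpack on the far end.
# On a fast local network the -j flags mostly just burn CPU and can be dropped.
tar cjf - -C /somewhere . | ssh me@host 'tar xjf - -C /destination'

# Local copy: same idea without ssh, one tar reading and one tar writing.
tar cf - -C /original . | tar xf - -C /copy
```

The point of the pipe is that both ends stream continuously instead of pausing per file.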
I don't want to interrupt it, but I could try that next.
The target drive is an IronWolf 7200rpm HDD and the source drive is a WD Blue HDD. I can't see the speeds clearly because I'm doing it through the OMV web UI (the sketch below shows one way to check from a shell), but IN THEORY both drives are capable of greater than 5Gb/s file transfer... The Seagate drive is connected via a SATA-to-USB dock running through a USB SuperSpeed port on the machine, and the WD Blue is connected directly to a SATA port on the controller. At the listed speeds the transfer could have finished in as little as an hour and a half, but we're coming up on 3 hours now.
I figured it was likely what you mentioned: fragmented files on the volumes, plus the algorithm playing it safe by first checking for missing data on the target drive, then sending the bytes, then marking the file complete, etc. Honestly, though, it would surprise me if those extra steps amounted to that big of a performance hit. I thought maybe the external SATA-to-USB dock could be the bottleneck, but that dock is also marketed at 5Gb/s...
shrugs
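To actually see the rate, one option is to run the copy (or a small test copy) from a shell instead of the web UI. This is only a sketch; the two /srv paths are placeholders for wherever OMV has mounted the drives:

```bash
# -a preserves permissions/timestamps, -h prints human-readable sizes,
# --info=progress2 shows the overall transfer rate rather than per-file progress
rsync -ah --info=progress2 /srv/source-disk/ /srv/target-disk/
```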
As I mentioned in my previous post below, even in theory a spinning platter is not going to reach anywhere near 5Gb/s, not even a third of that. You can google the specs as easily as I can, but a 4TB WD Blue is only a 5400rpm drive, which seriously hampers its speed, limiting it to about 175MB/s (bytes, not bits).
The 4TB Seagate Ironwolf is another slow drive at only 5900rpm, but does manage to creep up to about 190MB/s transfer speeds.
You didn't mention which one is your 4TB drive, but the speed of the slowest drive is going to dictate your top transfer speeds. No matter how you slice it, you can expect a long wait to transfer 4TB of data. If you want more speed, you can get better performing 7200rpm drives, but you won't see any substantial increases until you move into a multi-drive RAID. I would recommend a minimum of 5 drives, but for comparison I have eight 18TB drives set up through ZFS as a raidZ2 configuration (similar to RAID6) which gives me a sustained transfer rate of around 450MB/s. If you need faster, you really have no choice but to upgrade to SSD.
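As a back-of-the-envelope check on that wait, assuming roughly 175MB/s sustained and a full 4TB to move:

```bash
# 4 TB ≈ 4,000,000 MB; at ~175 MB/s that's about 6.3 hours of pure sequential transfer
echo '4 * 10^6 / 175 / 3600' | bc -l
# ≈ 6.35
```

Any seeking, small files, or USB overhead only pushes that number up.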
Thank you for this, I have clearly misunderstood the rated speeds of the drives.
Now I feel silly for having taken the 6Gb/s stated in the product title on Micro Center as an indication of the drive's speed (and for not thinking twice about it). The product details do say "[...] data transfer speeds of up to 210 MB/s", so I guess the 6Gb/s is just what the SATA III interface is capable of (see the quick conversion below)? It's right there in the product title on Amazon and Micro Center, and I was obviously duped by it.
I feel a little silly having believed it without really questioning it.
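For what it's worth, the 6Gb/s is indeed the SATA III link rate, and even that isn't all usable: the interface uses 8b/10b encoding, so only 80% of the line rate carries data. A rough conversion (general SATA facts, not numbers from this thread):

```bash
# 6 Gb/s line rate * 0.8 (8b/10b encoding) / 8 bits per byte ≈ 600 MB/s interface ceiling
echo '6 * 10^9 * 0.8 / 8 / 10^6' | bc -l
# 600
```

The platters themselves top out around 175-210 MB/s, so the drive, not the interface, is the limit.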
Haha and now you know exactly WHY they do that! The manufacturers were more than happy to let people keep believing SATA3 drives would be faster than SATA2 drives until they started facing public backlash and the costs of returns, but they still try to bury it in the fine print.
Keep in mind that any transfer speeds on the box are also going to be best-case scenarios, for read access only (because writing takes longer than reading, even on an SSD). The numbers I found in reviews are generally closer to real-world conditions, including a mix of simultaneous read/write operations. Personally I don't trust anything except what I can measure on my own installations, because everyone's hardware and software are different; but if you decide to do your own testing (something like the sketch below), make sure it disables cached operations during the tests, or you're not measuring anything but the speed of your RAM.
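A minimal example of that kind of uncached test on Linux; the device name and mount point are placeholders:

```bash
# Timed read of the raw device; hdparm flushes the buffer cache before measuring
sudo hdparm -t /dev/sdX

# Sequential write of 4 GiB with direct I/O, bypassing the page cache
dd if=/dev/zero of=/mnt/target/ddtest bs=1M count=4096 oflag=direct status=progress
rm /mnt/target/ddtest
```

Run each a few times and average, since a single pass can be thrown off by whatever else the disk is doing.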