

I'm syncoiding from my normal RAIDz2 to a backup mirror made of 2 disks. I looked at zpool iostat and I noticed that one of the disks consistently shows less than half the write IOPS of the other:

                                        capacity     operations     bandwidth 
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
storage-volume-backup                 5.03T  11.3T      0    867      0   330M
  mirror-0                            5.03T  11.3T      0    867      0   330M
    wwn-0x5000c500e8736faf                -      -      0    212      0   164M
    wwn-0x5000c500e8737337                -      -      0    654      0   165M

This is also evident in iostat:

     f/s f_await  aqu-sz  %util Device
    0.00    0.00    3.48  46.2% sda
    0.00    0.00    8.10  99.7% sdb
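
For reference, these numbers come from something along these lines; the pool and device names are the ones shown above, and the exact intervals and flags may differ:

    # per-vdev IOPS and bandwidth, refreshed every 5 seconds
    zpool iostat -v storage-volume-backup 5

    # extended per-device statistics (the f/s, f_await, aqu-sz and %util columns come from -x)
    iostat -dx 5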

The difference also shows in the disks' temperatures: the busier disk runs 4 degrees warmer than the other. The disks are identical on paper and were bought at the same time.

Is this behaviour expected?

top 15 comments
[–] themoonisacheese@sh.itjust.works 11 points 2 months ago (1 children)

It might be that the data going to both disks saturates a common link before the second disk reaches its full IOPS capability, so the driver ends up writing at full speed to one disk and at half speed to the other, for twice as long.

[–] lightrush@lemmy.ca 4 points 2 months ago (1 children)

I put the low IOPS disk in a good USB 3 enclosure, hooked to an on-CPU USB controller. Now things are flipped:

                                        capacity     operations     bandwidth 
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
storage-volume-backup                 12.6T  3.74T      0    563      0   293M
  mirror-0                            12.6T  3.74T      0    563      0   293M
    wwn-0x5000c500e8736faf                -      -      0    406      0   146M
    wwn-0x5000c500e8737337                -      -      0    156      0   146M

You might be right about the link problem.

Looking at the B350 diagram, the whole chipset hangs off a PCIe 3.0 x4 link to the CPU. The other pool (the source) is hooked up via a USB controller on the chipset. The SATA controller is also on the chipset, so it shares the chipset-CPU link too. I'm pretty sure I'm also using all the PCIe links the chipset provides for SSDs. So that's 4 GB/s total for the whole chipset. I'm probably not saturating the whole link in this particular workload, but perhaps there's another related bottleneck.
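
For what it's worth, a rough way to double-check which controller each disk actually sits behind, and what SATA link speed was negotiated (sda/sdb as in the iostat output earlier):

    # the sysfs path shows the PCI device (chipset vs. on-CPU controller) each disk hangs off
    readlink -f /sys/block/sda
    readlink -f /sys/block/sdb

    # negotiated SATA link speed per port (these drives should show 6.0 Gbps)
    sudo dmesg | grep -i 'sata link up'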

[–] themoonisacheese@sh.itjust.works 3 points 2 months ago (1 children)

I'm not fully familiar with the overheads of everything going on in a chipset, but it's not unreasonable to think that this workload, plus whatever the chipset has to do on its own (mostly hardware management tasks), plus the CPU's other traffic on similar interfaces possibly saturating the IO die/controller, would influence this.

B350 isn't a very fast chipset to begin with, and I'm willing to bet the CPU in such a motherboard isn't exactly current-gen either. Are you sure you're even running at PCIe 3.0 speeds? There are PCIe 2.0-only CPUs available for AM4.
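
One way to check what actually got negotiated (assuming lspci is available on your system) is to look at the link status of the bridges and controllers; 8 GT/s corresponds to PCIe 3.0, 5 GT/s to 2.0:

    # LnkCap = what the device supports, LnkSta = what was actually negotiated
    sudo lspci -vv | grep -E '^[0-9a-f]{2}:|LnkCap:|LnkSta:'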

[–] lightrush@lemmy.ca 1 points 2 months ago* (last edited 2 months ago) (1 children)

B350 isn’t a very fast chipset to begin with

For sure.

I’m willing to bet the CPU in such a motherboard isn’t exactly current-gen either.

Reasonable bet, but it's a Ryzen 9 5950X with 64GB of RAM. I'm pretty proud of how far I've managed to stretch this board. 😆 At this point I'm waiting for blown caps, but the case temp is pretty low, so it may end up trucking along for a surprisingly long time.

Are you sure you’re even running at PCIe 3.0 speeds too?

So given the CPU, it should be PCIe 3.0, but that doesn't remove any of the queues/scheduling suspicions for the chipset.

I'm now replicating data out of this pool and the read load looks perfectly balanced. Bandwidth's fine too. I think I have no choice but to benchmark the disks individually outside of ZFS once I'm done with this operation in order to figure out whether any show problems. If not, they'll go in the spares bin.
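
Something like this per-disk fio run is what I have in mind; read-only against the raw device so it's non-destructive, and /dev/sdX is a placeholder:

    # sequential read throughput, one disk at a time
    fio --name=seqread --filename=/dev/sdX --readonly --rw=read \
        --bs=1M --direct=1 --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based

    # random read IOPS on the same disk
    fio --name=randread --filename=/dev/sdX --readonly --rw=randread \
        --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based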

[–] themoonisacheese@sh.itjust.works 2 points 2 months ago (1 children)

Oh wow, congrats. I'm currently in the struggle of stretching an AB350M to accept a 4600G, and failing.

You're right, you should hit PCIe 3 speeds and it's weird, but the fact that the drives swap speeds depending on how they're plugged in points to either drivers or the chipset.

[–] lightrush@lemmy.ca 1 points 2 months ago* (last edited 2 months ago) (1 children)

On paper it should support it. I'm assuming it's the ASRock AB350M. With a certain BIOS version of course. What's wrong with it?

[–] themoonisacheese@sh.itjust.works 2 points 2 months ago (2 children)

It's a Gigabyte AB350M Gaming 3 rev 1.0. It boots GRUB fine but then crashes right after displaying "Loading Linux 6.x": the CPU LED flashes, then the DRAM LED stays on, and I have to turn it off with the PSU switch.

Either it's a rev 1.0 bug, which is a known issue on those motherboards, or the CPU (or iGPU) is defective.

https://superuser.com/questions/1854228/proxmox-doesnt-boot-after-cpu-change

I'm currently waiting on support from both the seller and Gigabyte, but I don't expect anything to come of it. I have yet to test the CPU in a different motherboard.

[–] lightrush@lemmy.ca 1 points 2 months ago (1 children)

Sorry I never saw this, that sucks.

Turns out mine was broken too. I put the CPU in my gaming rig and it worked fine, so I bought a new motherboard and the problem is gone.

[–] lightrush@lemmy.ca 1 points 2 months ago* (last edited 2 months ago)

Iiinteresting. I'm on the larger AB350-Gaming 3 and it's got REV: 1.0 printed on it. No problems with the 5950X so far. 🤐 Either sheer luck or there could have been updated units before they officially changed the rev marking.

[–] Shadow@lemmy.ca 4 points 2 months ago (1 children)

Usually means a failing drive in my experience.

[–] lightrush@lemmy.ca 3 points 2 months ago* (last edited 2 months ago) (1 children)

Interesting. SMART looks pristine on both drives, and they're brand new - Exos X22. That doesn't mean there isn't an impending problem, of course. I might try shuffling the links to see if that changes the behaviour, per the suggestion in the other comment. Both are currently hooked to the AMD B350 chipset's SATA controller. There are two ports that should be wired to the on-CPU SATA controller, and I imagine the two SATA controllers don't share bandwidth, so I'll try putting one disk on the on-CPU controller.
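
Beyond the attribute table, I figure the extended report and a long self-test are worth a look too (device names assumed):

    # full SMART report, including error and phy event counter logs
    sudo smartctl -x /dev/sda

    # kick off an extended self-test, then check the results once it finishes
    sudo smartctl -t long /dev/sda
    sudo smartctl -l selftest /dev/sda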

[–] Shadow@lemmy.ca 5 points 2 months ago (1 children)

You could just swap the two disks and see if it follows the drive or the link.

If it follows the drive, RMA it. I don't put a lot of faith in SMART data.
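
If it helps, mapping the wwn-* names from zpool iostat to physical drives is straightforward (the device name below is just an example):

    # which sdX each wwn-* identifier points at
    ls -l /dev/disk/by-id/ | grep wwn

    # serial number, to match against the drive label when swapping
    sudo smartctl -i /dev/sda | grep -i serial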

[–] lightrush@lemmy.ca 2 points 2 months ago

Turns out the on-CPU SATA controller isn't available when the NVMe slot is used. 🫢 Swapped SATA ports, no diff. Put the low IOPS disk in a good USB 3 enclosure, hooked to an on-CPU USB controller. Now things are flipped:

                                        capacity     operations     bandwidth 
pool                                  alloc   free   read  write   read  write
------------------------------------  -----  -----  -----  -----  -----  -----
storage-volume-backup                 12.6T  3.74T      0    563      0   293M
  mirror-0                            12.6T  3.74T      0    563      0   293M
    wwn-0x5000c500e8736faf                -      -      0    406      0   146M
    wwn-0x5000c500e8737337                -      -      0    156      0   146M
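
In case the USB path itself is ever in question, the negotiated speed and driver are easy to sanity-check; 5000M indicates a USB 3 (5 Gbps) link, and the Driver field shows whether uas or plain usb-storage is in use:

    lsusb -t
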
[–] Decronym@lemmy.decronym.xyz -1 points 2 months ago* (last edited 2 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

    NVMe  Non-Volatile Memory Express (interface for mass storage)
    PCIe  Peripheral Component Interconnect Express
    PSU   Power Supply Unit
    SATA  Serial AT Attachment (interface for mass storage)
    SSD   Solid State Drive (mass storage)
    ZFS   Solaris/Linux filesystem focusing on data integrity
