ZFS quite slow on 10Gb/s uas-devices

gunpie

I'm currently moving my PVE setup to new hosts that have no SATA interfaces, so I decided to use USB boxes (uas) with 4 bays as storage.
An initial test showed that the boxes provide reasonable performance:

root@mini02:~# dd bs=8k of=/dev/null if=/dev/sda&
root@mini02:~# dd bs=8k of=/dev/null if=/dev/sdb&
root@mini02:~# dd bs=8k of=/dev/null if=/dev/sdc&
root@mini02:~# dd bs=8k of=/dev/null if=/dev/sdd&

and the performance looked good:

root@mini02:~# S_COLORS=never S_TIME_FORMAT=ISO iostat -xczNtm 5
...
2025-10-27T00:14:10+0100
avg-cpu: %user %nice %system %iowait %steal %idle
0.03 0.00 0.65 11.65 0.00 87.67

Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 13.60 0.15 0.00 0.00 1.22 11.47 0.00 0.00 0.00 0.00 0.00 0.00 0.40 0.50 0.02 0.82
nvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 12.20 0.15 0.00 0.00 1.05 12.79 0.00 0.00 0.00 0.00 0.00 0.00 0.40 1.00 0.01 0.66
sda 2019.00 252.38 0.00 0.00 0.97 128.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.96 100.00
sdb 2033.40 254.17 0.00 0.00 0.97 128.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.96 100.00
sdc 2024.60 253.07 0.00 0.00 0.97 128.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.96 100.00
sdd 2010.40 251.30 0.00 0.00 0.98 128.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.96 100.00

However, after initializing the devices for ZFS, the performance of "zfs send" (on a subvolume) is quite low:

2025-10-27T00:07:32+0100
avg-cpu: %user %nice %system %iowait %steal %idle
0.41 0.00 1.35 0.03 0.00 98.21

Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 12.00 0.12 0.00 0.00 1.38 10.27 0.00 0.00 0.00 0.00 0.00 0.00 0.40 1.00 0.02 0.86
nvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 12.60 0.12 0.00 0.00 1.00 9.78 0.00 0.00 0.00 0.00 0.00 0.00 0.40 0.50 0.01 0.68
sda 81.00 19.10 0.00 0.00 35.03 241.51 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.84 82.88
sdb 81.60 19.73 0.00 0.00 33.93 247.54 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.77 83.08
sdc 103.60 20.32 0.00 0.00 8.05 200.88 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.83 26.30
sdd 102.40 19.57 0.00 0.00 4.58 195.75 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.47 14.96
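
(Side note: the same workload can also be watched from ZFS's side while the send runs; in current OpenZFS, zpool iostat has a queue view and a request-size histogram, see zpool-iostat(8) for your version.)

zpool iostat -v 5     # per-vdev bandwidth and IOPS
zpool iostat -q 5     # occupancy of the sync/async read queues
zpool iostat -r 5     # histogram of the request sizes ZFS actually issues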

Doing the same operation on the original host, which has SATA interfaces and disks of the same type with identical initialization, looks like this:

root@adam:~# S_COLORS=never S_TIME_FORMAT=ISO iostat -xcmzNt 5
...
2025-10-30T22:56:59+0100
avg-cpu: %user %nice %system %iowait %steal %idle
9.14 0.00 9.57 0.91 0.00 80.38

Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 13.20 0.10 0.00 0.00 0.47 7.39 0.00 0.00 0.00 0.00 0.00 0.00 0.40 2.00 0.01 0.36
nvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 13.20 0.10 0.00 0.00 0.24 7.39 0.00 0.00 0.00 0.00 0.00 0.00 0.40 2.00 0.00 0.22
sda 409.60 115.64 0.00 0.00 5.57 289.11 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.28 79.60
sdb 406.20 115.12 0.00 0.00 6.06 290.21 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.46 83.86
sdc 374.80 130.14 0.00 0.00 5.98 355.56 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.24 77.52
sdd 377.80 130.61 0.00 0.00 6.56 354.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.48 85.30

Any ideas to explain these discrepancies in performance?
 
Yes, your USB enclosure doesn't do what you think it does.

Just because the marketing says "USB 3.2, 10 Gbit, blah blah blah" doesn't mean that your host connection, the cable, the bridge chip, or the SATA multiplexer can actually deliver it.

You're getting 400 MB/s to the storage. I'd call that a win.
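
A quick way to check what the link actually negotiated (as opposed to what the box advertises) is the lsusb tree view, which prints the per-device speed and the bound driver:

lsusb -t     # the enclosure should show Driver=uas and 10000M; 5000M or 480M means a slower link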
 
Sorry, but the initial test clearly shows that the setup works well at 10 Gb/s USB speed, with impressive IOPS and fast parallel responses from the disks.
Over USB, "zfs send" looks as if it processes just one request at a time and thus delivers very poor performance, but the test above proves that the devices can do more.

The stats from the SATA host are also poor compared to the streaming test, but they show acceptable performance.
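
One way to test the one-request-at-a-time suspicion directly would be fio against a single disk with different queue depths (read-only sequential reads; the options below are standard fio, but double-check the device name before running):

fio --name=qd1 --filename=/dev/sda --readonly --direct=1 --rw=read --bs=128k --ioengine=libaio --iodepth=1 --runtime=30 --time_based
fio --name=qd8 --filename=/dev/sda --readonly --direct=1 --rw=read --bs=128k --ioengine=libaio --iodepth=8 --runtime=30 --time_based

If qd8 is not much faster than qd1 on the USB box, that would point to the bridge serializing requests.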
 
Sigh, no responses yet, but I have some ideas about the problem, just to let you know:

- the initial test relies on the kernel to create efficient I/O requests:
dd bs=8k of=/dev/null if=/dev/sdX
yields 128k requests forwarded to the uas device (the kernel's readahead merges the 8k reads, matching the rareq-sz of 128.00 above), which is apparently good.

- zfs send issues larger requests (rareq-sz of roughly 200-250k in the snippets above), which the USB path apparently handles far less efficiently.
This is clearly seen in the iostat snippets; a quick comparison of the relevant block-layer limits on both hosts is sketched after the hdparm output below.

- I've observed similar problems with USB-attached devices on various hosts (mostly Raspberry Pis):

bs=8k yields too much kernel load, but
bs=128k often performs better than bs=256k (or the more usual bs=1024k).
A simple sweep to reproduce this with direct I/O is sketched below.

- SATA hosts deliver better performance, and the response to hdparm shows an interesting difference:

root@adam:~# hdparm /dev/sda
/dev/sda:
multcount = 0 (off)
IO_support = 1 (32-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 2431599/255/63, sectors = 39063650304, start = 0

while on the USB host:

root@mini02:~# hdparm /dev/sda
/dev/sda:
multcount = 0 (off)
readonly = 0 (off)
readahead = 256 (on)
geometry = 19074048/64/32, sectors = 39063650304, start = 0
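
Here is what I plan to compare next on both hosts (assuming the usual sysfs layout for sd devices; the OpenZFS parameter names are from current releases and may differ on older versions):

for f in max_sectors_kb max_hw_sectors_kb read_ahead_kb nr_requests scheduler; do
    echo -n "sda $f: "; cat /sys/block/sda/queue/$f
done
cat /sys/module/zfs/parameters/zfs_vdev_aggregation_limit      # how far ZFS aggregates adjacent reads
cat /sys/module/zfs/parameters/zfs_vdev_async_read_max_active  # per-vdev async read queue depth

If these limits differ between the SATA and the USB host, that could explain why the same zfs requests behave so differently.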
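
And the bs sweep mentioned above, with direct I/O so the block size is what actually reaches the device (each run reads the same 2 GiB):

dd if=/dev/sda of=/dev/null bs=8k   count=262144 iflag=direct
dd if=/dev/sda of=/dev/null bs=128k count=16384  iflag=direct
dd if=/dev/sda of=/dev/null bs=1M   count=2048   iflag=direct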

I think I must do a deep dive into low-level I/O to identify the root cause, sigh...