After creating a ZFS mirror, IO delay is high

gusto

Well-Known Member
Feb 10, 2018
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-5-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-12
pve-kernel-5.13: 7.1-8
pve-kernel-5.13.19-5-pve: 5.13.19-13
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-3
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-6
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-5
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1
I connected two new WD Blue SSDs to the server via USB 3.0. Both SSDs use the same model of Axagon USB-to-SATA adapter.

Code:
Disk /dev/sdd: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: 100T2B0A-00SM50
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A81159BE-A4A0-6747-ACA9-080E2A240690

Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: 100T2B0A-00SM50
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 509CE6DB-165A-6442-B928-664CD205A5EF
[Screenshot: disks01.png]
Note the serial number.
Code:
ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root  9 Mar 10 14:50 ata-Samsung_SSD_870_EVO_250GB_S6PENJ0RB40583V -> ../../sdb
lrwxrwxrwx 1 root root 10 Mar 10 14:50 ata-Samsung_SSD_870_EVO_250GB_S6PENJ0RB40583V-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Mar 10 14:50 ata-Samsung_SSD_870_EVO_250GB_S6PENJ0RB40583V-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Mar 10 14:50 ata-Samsung_SSD_870_EVO_250GB_S6PENJ0RB40583V-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Mar 10 14:50 ata-Samsung_SSD_870_EVO_250GB_S6PENJ0RB40585D -> ../../sda
lrwxrwxrwx 1 root root 10 Mar 10 14:50 ata-Samsung_SSD_870_EVO_250GB_S6PENJ0RB40585D-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Mar 10 14:50 ata-Samsung_SSD_870_EVO_250GB_S6PENJ0RB40585D-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Mar 10 14:50 ata-Samsung_SSD_870_EVO_250GB_S6PENJ0RB40585D-part3 -> ../../sda3
lrwxrwxrwx 1 root root 11 Mar 10 14:50 lvm-pv-uuid-PsS2aq-dT3d-NP8t-Guiu-y0n0-TVva-EGjWJ1 -> ../../zd0p5
lrwxrwxrwx 1 root root  9 Mar 10 19:06 usb-WDC_WDS_100T2B0A-00SM50_98765432100C-0:0 -> ../../sdc
lrwxrwxrwx 1 root root  9 Mar 10 14:50 wwn-0x5002538f31b23c5d -> ../../sdb
lrwxrwxrwx 1 root root 10 Mar 10 14:50 wwn-0x5002538f31b23c5d-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Mar 10 14:50 wwn-0x5002538f31b23c5d-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Mar 10 14:50 wwn-0x5002538f31b23c5d-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Mar 10 14:50 wwn-0x5002538f31b23c5f -> ../../sda
lrwxrwxrwx 1 root root 10 Mar 10 14:50 wwn-0x5002538f31b23c5f-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Mar 10 14:50 wwn-0x5002538f31b23c5f-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Mar 10 14:50 wwn-0x5002538f31b23c5f-part3 -> ../../sda3

Note again that only sdc gets a by-id entry (there is none for sdd). Proxmox itself is installed on a ZFS RAID1 across sda and sdb. I want to create a new pool for data, and since I can't address the USB disks by-id, I have to fall back to their by-path names.
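To see why udev skipped the by-id link for sdd, you can compare the serial-related properties udev reads from both disks. A minimal check (exact property names may vary between udev versions):
Code:
# Two devices with identical serials produce the same by-id name,
# so only one symlink survives the collision
udevadm info --query=property --name=/dev/sdc | grep -i serial
udevadm info --query=property --name=/dev/sdd | grep -i serial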

Code:
 ls -l /dev/disk/by-path
total 0
lrwxrwxrwx 1 root root  9 Mar 10 14:50 pci-0000:00:12.0-ata-1 -> ../../sda
lrwxrwxrwx 1 root root  9 Mar 10 14:50 pci-0000:00:12.0-ata-1.0 -> ../../sda
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-1.0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-1.0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-1.0-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-1-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-1-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-1-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Mar 10 14:50 pci-0000:00:12.0-ata-2 -> ../../sdb
lrwxrwxrwx 1 root root  9 Mar 10 14:50 pci-0000:00:12.0-ata-2.0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-2.0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-2.0-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-2.0-part3 -> ../../sdb3
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-2-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-2-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Mar 10 14:50 pci-0000:00:12.0-ata-2-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Mar 11 11:02 pci-0000:00:15.0-usb-0:1:1.0-scsi-0:0:0:0 -> ../../sdc
lrwxrwxrwx 1 root root  9 Mar 10 14:56 pci-0000:00:15.0-usb-0:2:1.0-scsi-0:0:0:0 -> ../../sdd
Note the last two lines: those are the two USB SSDs.
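Before creating the pool, it's worth confirming which by-path entry resolves to which disk, e.g.:
Code:
# Resolve the by-path symlinks to the current kernel device names
readlink -f /dev/disk/by-path/pci-0000:00:15.0-usb-0:1:1.0-scsi-0:0:0:0   # -> /dev/sdc
readlink -f /dev/disk/by-path/pci-0000:00:15.0-usb-0:2:1.0-scsi-0:0:0:0   # -> /dev/sdd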
Now I'll create the zpool:
Code:
zpool create datapool mirror \
    /dev/disk/by-path/pci-0000:00:15.0-usb-0:1:1.0-scsi-0:0:0:0 \
    /dev/disk/by-path/pci-0000:00:15.0-usb-0:2:1.0-scsi-0:0:0:0
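Side note: fdisk reports 4096-byte physical sectors on these SSDs, so it may be worth forcing the alignment shift when creating the pool (ZFS usually auto-detects this, but being explicit costs nothing):
Code:
# ashift=12 forces 4 KiB alignment to match the 4096-byte physical sectors
zpool create -o ashift=12 datapool mirror \
    /dev/disk/by-path/pci-0000:00:15.0-usb-0:1:1.0-scsi-0:0:0:0 \
    /dev/disk/by-path/pci-0000:00:15.0-usb-0:2:1.0-scsi-0:0:0:0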
Everything seems to be working
Code:
zpool status
  pool: datapool
 state: ONLINE
config:

        NAME                                           STATE     READ WRITE CKSUM
        datapool                                       ONLINE       0     0     0
          mirror-0                                     ONLINE       0     0     0
            pci-0000:00:15.0-usb-0:1:1.0-scsi-0:0:0:0  ONLINE       0     0     0
            pci-0000:00:15.0-usb-0:2:1.0-scsi-0:0:0:0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
config:

        NAME                                                     STATE     READ WRITE CKSUM
        rpool                                                    ONLINE       0     0     0
          mirror-0                                               ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_250GB_S6PENJ0RB40585D-part3  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_250GB_S6PENJ0RB40583V-part3  ONLINE       0     0     0

errors: No known data errors
Within seconds to a few minutes, the IO delay starts to climb.
[Screenshot: highio.png, showing the IO delay graph rising]
When I open the Disks view in the Proxmox GUI, the operation fails and no disks are shown. Running fdisk -l in the shell never completes either. Even after destroying the pool the IO delay stays high; only once I physically disconnect the SSDs does it settle back down.
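To see what is actually hanging during these stalls, the kernel log and per-device I/O statistics are the first places to look (iostat comes from the sysstat package, which may need to be installed):
Code:
# USB bridge resets and command timeouts usually show up here
dmesg | grep -iE 'usb|reset|timeout' | tail -n 20

# Watch per-device latency/utilization; a stuck USB disk typically sits
# near 100% util while completing almost no I/O
iostat -x 1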
I have two other SSDs, so I tried swapping the disks in, but the result was the same. I suspect the adapter: it always reports the same serial number (98765432100C) regardless of which disk is attached. I contacted Axagon and was told that their device is fine and that I should take the problem up with OS support.
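Whether the bridge is masking the real drive serials can be checked with smartmontools (already installed per the package list above); the -d sat flag assumes the adapter supports SAT passthrough, which not all USB bridges do:
Code:
# If this prints the real WD serial while by-id shows 98765432100C,
# the adapter is substituting its own serial for the disk's
smartctl -d sat -i /dev/sdc
smartctl -d sat -i /dev/sdd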
 
Have I understood you correctly: you're using the two SSDs in question via USB 3? I tried this once and the performance was terrible. USB 3 is not a low-latency protocol, so the I/O delays are "by design".
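You can make the latency difference visible directly, e.g. with ioping (assuming the ioping package is installed):
Code:
# Compare access latency of a SATA-attached vs a USB-attached disk
ioping -c 10 /dev/sda
ioping -c 10 /dev/sdc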
 
Yes, these SSDs are connected via USB 3.0 ports. If I create a single-disk ZFS pool on one SSD, everything works perfectly. But as soon as I create a ZFS mirror, the problem appears.
 
Likely the USB ports share some bandwidth. For one device the bandwidth is sufficient; for two it obviously is not. IMHO, USB is not suitable for ZFS.
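You could test the shared-bandwidth theory by writing to both disks at once and comparing against a single-disk run (destructive; only do this while the pool is destroyed and the disks hold no data):
Code:
# Single-disk baseline
dd if=/dev/zero of=/dev/sdc bs=1M count=2048 oflag=direct

# Both disks in parallel; on a shared USB controller the combined rate
# collapses well below 2x the single-disk figure
dd if=/dev/zero of=/dev/sdc bs=1M count=2048 oflag=direct &
dd if=/dev/zero of=/dev/sdd bs=1M count=2048 oflag=direct &
wait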
 