IO performance below what I'd expect

surfi2000

New Member
Dec 19, 2025
I'm experiencing issues with transfer speeds on a large file (40 GB). Since the issue happens even when copying between two Debian VMs on the same host, I'm confident it's I/O rather than something like the network. The transfer starts quickly and then grinds to a halt (~1-10 MB/s, versus 150-200 MB/s at the start) after about 20% of the transfer. From reading around online I don't seem to be alone, although people do seem to agree that 10 MB/s is particularly low. I have enterprise CMR drives (ST14000NM0288). The host has 128 GB of RAM and 104 x Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz (2 sockets), so RAM and CPU are nowhere near consumed. The controller in the host is in HBA mode.

Code:
TRAN   NAME               TYPE   SIZE VENDOR   MODEL           LABEL            ROTA PHY-SEC
       sda                disk  12.6T IBM-ESXS ST14000NM0288 E                     1    4096
       ├─sda1             part  12.6T                          sasstorage          1    4096
       └─sda9             part    64M                                              1    4096
       sdb                disk  12.6T IBM-ESXS ST14000NM0288 E                     1    4096
       ├─sdb1             part  12.6T                          sasstorage          1    4096
       └─sdb9             part    64M                                              1    4096
       sdc                disk  12.6T IBM-ESXS ST14000NM0288 E                     1    4096
       ├─sdc1             part  12.6T                          sasstorage          1    4096
       └─sdc9             part    64M                                              1    4096
       sdd                disk  12.6T IBM-ESXS ST14000NM0288 E                     1    4096
       ├─sdd1             part  12.6T                          sasstorage          1    4096
       └─sdd9             part    64M                                              1    4096
       sde                disk  12.6T IBM-ESXS ST14000NM0288 E zfs39               1    4096
       └─sde1             part  12.6T                          CHIA1-ZHZ1NZTL00    1    4096
sata   sdf                disk 476.9G ATA      DELLBOSS VD                         1     512
       ├─sdf1             part  1007K                                              1     512
       ├─sdf2             part     1G                                              1     512
       └─sdf3             part 475.9G                                              1     512
         ├─pve-swap       lvm      8G                                              1     512
         ├─pve-root       lvm     96G                                              1     512
         ├─pve-data_tmeta lvm    3.6G                                              1     512
         │ └─pve-data     lvm  348.8G                                              1     512
         └─pve-data_tdata lvm  348.8G                                              1     512
           └─pve-data     lvm  348.8G                                              1     512
       zd0                disk   700G                                              0   16384
       ├─zd0p1            part   699G                          cloudimg-rootfs     0   16384
       ├─zd0p14           part     4M                                              0   16384
       ├─zd0p15           part   106M                          UEFI                0   16384
       └─zd0p16           part   913M                          BOOT                0   16384
       zd16               disk     4M                                              0   16384
       zd32               disk     4M                                              0   16384
       zd48               disk    20T                                              0   16384
       ├─zd48p1           part    32M                          hassos-boot         0   16384
       ├─zd48p2           part    24M                                              0   16384
       ├─zd48p3           part   256M                                              0   16384
       ├─zd48p4           part    24M                                              0   16384
       ├─zd48p5           part   256M                                              0   16384
       ├─zd48p6           part     8M                                              0   16384
       ├─zd48p7           part    96M                          hassos-overlay      0   16384
       └─zd48p8           part    20T                          hassos-data         0   16384

Bash:
~# zpool status -v
  pool: sasstorage
 state: ONLINE
  scan: scrub repaired 0B in 13:30:48 with 0 errors on Sun Dec 14 13:54:50 2025
config:

    NAME        STATE     READ WRITE CKSUM
    sasstorage  ONLINE       0     0     0
      sda       ONLINE       0     0     0
      sdb       ONLINE       0     0     0
      sdc       ONLINE       0     0     0
      sdd       ONLINE       0     0     0

errors: No known data errors

IO pressure stall in the Proxmox UI for the node in question hovers around 8% for "some" and "full". No memory pressure and no CPU pressure.
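For what it's worth, I assume the same thing can be watched from the host shell during a transfer with something like this (pool name as in the zpool output above; iostat comes from the sysstat package):

Bash:
# per-vdev throughput and latency, refreshed every 5 seconds
zpool iostat -v -l sasstorage 5
# kernel IO pressure-stall counters
cat /proc/pressure/io
# per-device utilisation and wait times (sysstat package)
iostat -x 5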

For the VM, VirtIO SCSI is set as the SCSI controller. Cache is set to "no cache", and "Discard" and "SSD emulation" are enabled.
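For reference, the disk line in the VM config looks roughly like this (VMID, storage name and size are placeholders, not copied from my setup):

Code:
# qm config <vmid>   (illustrative – names and size are placeholders)
scsihw: virtio-scsi-pci
scsi0: local-zfs:vm-100-disk-0,discard=on,ssd=1,size=700G
# no cache=... option on the disk line means the default "No cache"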

Any idea how to troubleshoot this further?
 
I use ZFS, for example: vdev0: 3x HDD ZFS RaidZ1, vdev1: 3x HDD ZFS RaidZ1, and vdev2: nx SSDs as a ZFS special device.
So an all-SSD special device is a must with HDDs.

That way I can read/write at 2.5 GBit/s network speed.

Please read the OpenZFS documentation/wiki.
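As a rough sketch of adding such a special vdev to your pool (device paths are placeholders, and the special vdev should be mirrored, because losing it loses the pool):

Bash:
# illustrative only – replace the by-id paths with your actual SSDs
zpool add sasstorage special mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B
# optionally route small blocks to the SSDs too (the threshold is just an example)
zfs set special_small_blocks=64K sasstorage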
 
Have you tried whether the same happens with two CTs? The last time I tried HDDs with ZVOLs it was a pretty bad experience.
 
Have you tried whether the same happens with two CTs? The last time I tried HDDs with ZVOLs it was a pretty bad experience.

Getting between 100-250 MB/s between two CTs, which seems much better. Any tweaks I should look into, or is it a bit of a lost cause?

That way I can read/write at 2.5 GBit/s network speed.
I'll take a look at the docs, but I'm also not in need of that kind of throughput. I'm just looking to get the standard throughput an HDD would have.
 
I'm sure there are a ton of tweaks you can do. The easiest (but unsafe) one is zfs set sync=disabled .... A SLOG mirror on PLP SSDs would be safer, I guess.
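Roughly, for the SLOG variant (pool name from your zpool output; device paths are placeholders):

Bash:
# mirrored SLOG on power-loss-protected SSDs – paths are placeholders
zpool add sasstorage log mirror /dev/disk/by-id/nvme-A /dev/disk/by-id/nvme-B
# the log vdev then shows up under "logs" in zpool status / zpool iostat -v
zpool status sasstorage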
Google zvol iowait and take a look at this: https://github.com/openzfs/zfs/issues/11407
It's closed but, as you can see, it's still very much an issue. I don't use HDDs myself any more so I'm a bit out of touch with this.
You might also find some of this of use: https://gist.github.com/Impact123/3dbd7e0ddaf47c5539708a9cbcaab9e3#io-debugging
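If you want a quick host-side data point, something like this fio run (path and size are assumptions – adjust for where the pool is actually mounted) shows whether the pool itself or the zvol/VM path is the slow part:

Bash:
# sequential write straight onto the pool's filesystem, assuming it is mounted at /sasstorage
fio --name=seqwrite --filename=/sasstorage/fio-test --rw=write --bs=1M --size=10G \
    --ioengine=libaio --iodepth=8 --end_fsync=1
rm /sasstorage/fio-test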
 
Thank you, I'll give those a read. I did try disabling sync, since I don't care about the data at play, but it didn't help.
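In case it matters, I assume this is how to confirm the setting actually reached the zvols backing the VM disks (pool name as above):

Bash:
# check the inherited sync value on the pool and its zvols
zfs get -r sync sasstorage
# back to the default once done testing
zfs set sync=standard sasstorage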