Hey,
I'm facing a weird issue here. I have a NUC11i7 with one Crucial MX500 4TB SATA SSD and one Crucial CT4000P3PSSD8 4TB NVMe SSD in a ZFS mirror. I've capped the ZFS ARC at 8GB of memory (the host has 64GB in total) via the /etc/modprobe.d/zfs.conf file.
Every time there's a bit more IO (I've mostly seen this during writes; reads seem fine), IO delay climbs to 30-50% and services start to freeze until the IO is over. It just happened again while I restored a VM from a QNAP on the network: the limiting factor should have been the QNAP's disks, which don't manage more than 60MB/s read, yet during the restore the IO delay jumped to 50% and the two VMs on that host stopped responding until it was done.
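For what it's worth, this is roughly how I watch it while it happens (both are stock tools, zpool iostat from OpenZFS and iostat from sysstat; the 2-second interval is just a choice):

# per-vdev throughput and latency for the pool
zpool iostat -vl rpool 2
# extended per-device stats (utilization, queue depth, await)
iostat -xm 2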
Does anyone have an idea? I'm aware that mixing an NVMe and a SATA SSD in a mirror limits writes to the slower of the two drives, but that should still be around 500MB/s and shouldn't cause any IO delay given that the restore didn't even run at 1/5 of that speed.
For reference, the restore even showed this:
restore image complete (bytes=34359738368, duration=2378.50s, speed=13.78MB/s)
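(The numbers are consistent: 34359738368 bytes / 2378.50s ≈ 14.45 MB/s, i.e. 13.78 MiB/s, so the reported figure is in binary megabytes. Either way it's far below both the QNAP's ~60MB/s and the mirror's ~500MB/s.)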
/etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=8589934591
options zfs zfs_arc_max=8589934592
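In case it matters, this is how I applied and checked the cap (my root pool is ZFS, so the initramfs needs regenerating for the modprobe options to take effect at boot; 8589934592 bytes = 8GiB, with min set one byte lower so it stays below max):

update-initramfs -u -k all
# after the reboot:
cat /sys/module/zfs/parameters/zfs_arc_max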
arc_summary: https://pastebin.com/hUH2FzrS
pve_perf: https://pastebin.com/GeGxh8ig
iostat: https://pastebin.com/8WtMv8jV
zpool get all rpool: https://pastebin.com/sP4qQmYL