Hello,
I am experiencing awfully slow write performance on a newly set up PVE node (single-node setup).
The server is an HP ProLiant ML110 Gen10 with 48 GB RAM and a Xeon Silver 4208. It has two NVMe SSDs (WDC WDS100T2B0C-00PXH0) in a ZFS mirror (boot device) and two 8 TB hard disks (ST8000DM004-2CX188) in a second ZFS mirror.
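For completeness, the HDD pool was created roughly along these lines (the pool name "tank" is just a placeholder here and the device IDs are shortened; the SSD mirror is the rpool set up by the PVE installer):

zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/ata-ST8000DM004-2CX188_... /dev/disk/by-id/ata-ST8000DM004-2CX188_...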
I have created a Windows Server 2019 VM with its first disk on the SSD mirror and a second disk on the HDD mirror. When copying larger files over the network to the second disk of this VM, the copy runs at ~80 MB/s for about a minute, then stalls completely for another minute, resumes at normal speed, and so on.
I did some fio benchmarks on both mirrors. Both show extremely slow write performance: while the SSD mirror can still keep up with a 1 Gbit/s link, the HDD mirror stays below 30 MB/s when writing. Tests were done with the following command:
fio --ioengine=psync --filename=/dev/zvol/<pool>/test --size=9G --time_based --name=fio --group_reporting --runtime=60 --direct=1 --sync=1 --iodepth=1 --rw=<read|write> --bs=<4K|4M> --numjobs=<1|4|16>
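The full matrix of runs in the tables below can be reproduced with a small loop like this (just a sketch: "rpool" stands in for whichever pool is being tested, and it assumes a throwaway zvol named "test" already exists on that pool):

POOL=rpool   # placeholder: pool under test
for RW in read write; do
  for BS in 4K 4M; do
    for JOBS in 1 4 16; do
      # job name matches the rows below, e.g. write-4m-16j
      fio --ioengine=psync --filename=/dev/zvol/$POOL/test --size=9G \
          --time_based --runtime=60 --direct=1 --sync=1 --iodepth=1 \
          --group_reporting --rw=$RW --bs=$BS --numjobs=$JOBS \
          --name=${RW}-${BS,,}-${JOBS}j
    done
  done
done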
Here are the results for the SSDs:
Job | Read_KB | Read_BW_KB/s | Read_IOPS | Write_KB | Write_BW_KB/s | Write_IOPS |
read-4k-1j | 19031104 | 317179 | 79294 | 0 | 0 | 0 |
read-4k-4j | 47552652 | 792530 | 198132 | 0 | 0 | 0 |
read-4k-16j | 57251996 | 954184 | 238546 | 0 | 0 | 0 |
read-4m-1j | 156532736 | 2608835 | 636 | 0 | 0 | 0 |
read-4m-4j | 270139392 | 4502098 | 1099 | 0 | 0 | 0 |
read-4m-16j | 260710400 | 4344666 | 1060 | 14476 | 241 | 60 |
write-4k-4j | 0 | 0 | 0 | 2803156 | 46718 | 11679 |
write-4k-16j | 0 | 0 | 0 | 5008988 | 83480 | 20870 |
write-4m-1j | 0 | 0 | 0 | 8556544 | 142599 | 34 |
write-4m-4j | 0 | 0 | 0 | 8585216 | 143005 | 34 |
write-4m-16j | 0 | 0 | 0 | 8564736 | 141856 | 34 |
And the results for the HDDs:
Job | Read_KB | Read_BW_KB/s | Read_IOPS | Write_KB | Write_BW_KB/s | Write_IOPS |
read-4k-1j | 18907444 | 315118 | 78779 | 0 | 0 | 0 |
read-4k-4j | 47664812 | 794400 | 198600 | 0 | 0 | 0 |
read-4k-16j | 57263812 | 954380 | 238595 | 0 | 0 | 0 |
read-4m-1j | 170958848 | 2849266 | 695 | 0 | 0 | 0 |
read-4m-4j | 269844480 | 4497183 | 1097 | 0 | 0 | 0 |
read-4m-16j | 252141568 | 4201869 | 1025 | 2332 | 38 | 9 |
write-4k-4j | 0 | 0 | 0 | 5284 | 87 | 21 |
write-4k-16j | 0 | 0 | 0 | 12312 | 205 | 51 |
write-4m-1j | 0 | 0 | 0 | 1183744 | 19723 | 4 |
write-4m-4j | 0 | 0 | 0 | 1572864 | 26061 | 6 |
write-4m-16j | 0 | 0 | 0 | 2461696 | 33592 | 8 |
Any ideas what could be causing this?
Thanks,
Andreas