Hello guys,
I hope you can help me with this, since I need to go into production soon:
tl;dr:
Write speed on two ZFS pools is fine on the Proxmox host, but inside VMs there are major issues with "larger writes".
What is the correct disk configuration for VMs on a ZFS pool, for Windows 10 and Linux?
I've recently built the first of our two future servers with ZFS:
https://forum.proxmox.com/threads/server-config-zfs-encryption-cpu-load-questions.71620/#post-323204
Some VMs are installed and working fine; the plan was to test the IO of the HDD mirror (for storage applications) and then order a second server.
When copying a lot of files onto a VM (tested with Linux and Windows 10), copy speed drops to 0 after a few seconds.
The same happens when copying files inside a VM.
I then found this: https://dannyda.com/2020/05/24/how-to-fix-proxmox-ve-zfs-pool-extremely-slow-write/
and added a SLOG via USB 3 (400 MB/s writes tested), which only delayed the issue by a few seconds.
I went on to test write performance on the ZFS SSD mirror:
On the host (/hddmirror/encrypted/backup or /vmdata/encrypted) there is no issue: writes come in at 80-100 MB/s+, i.e. gigabit Ethernet is the limit.
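For reference, a local sequential write with a forced flush is roughly the kind of host-side test I mean, since it takes gigabit out of the equation entirely (sketch; the target path is just an example and defaults to /tmp here, on my host it would sit under /vmdata/encrypted):

```shell
# Rough local sequential-write test (illustrative, not my exact command).
# conv=fdatasync makes dd flush to disk before reporting, so the ZFS
# in-memory write cache doesn't inflate the number.
TARGET=${TARGET:-/tmp/ddtest.bin}   # e.g. TARGET=/vmdata/encrypted/ddtest.bin on the host
dd if=/dev/zero of="$TARGET" bs=1M count=256 conv=fdatasync
rm -f "$TARGET"
```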
On a VM disk located on the ZFS SSD mirror, writes drop to 50 MB/s, and I notice short-term CPU load spikes on the host.
When copying a 17 GB file inside the Linux VM via cp, the VM becomes unresponsive and the copy takes forever.
A quick look with iotop reveals that the copy process starts at 300 MB/s and then drops to 0, while IO% stays at 99.99.
The copy resumes at some two-digit MB/s rate and drops back to 0 repeatedly. CPU load on the host sometimes rises to 70% across 16 cores.
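To see whether those stalls line up with the pool flushing writes in bursts, watching per-second pool throughput on the host while the copy runs should help; something along these lines (sketch, guarded in case the ZFS tools aren't installed):

```shell
# Watch per-vdev pool throughput while the copy stalls: 5 samples at a
# 2-second interval. Bursty write columns would point at transaction-group
# flushing rather than a steady bottleneck. (Sketch; requires ZFS tools.)
if command -v zpool >/dev/null 2>&1; then
    zpool iostat -v 2 5
else
    echo "zpool not available on this machine"
fi
```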
Did I miss some major ZFS rule, like
"only use disk type X on a ZFS pool"?
I hoped I could blame the ZIL and SLOG, but as mentioned, writing directly on the host works fine; it never acts up.
I will dump the configuration info in the next post.
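Roughly, the info I mean is what these commands print (sketch; the pool name "vmdata" and VM ID 100 are examples from my setup, adjust to yours):

```shell
# Collect the pool/dataset/VM config relevant to this issue.
# Pool "vmdata" and VM ID 100 are placeholders from my setup.
for cmd in "zpool status" "zpool list -v" "zfs get all vmdata" "qm config 100"; do
    echo "== $cmd =="
    if command -v ${cmd%% *} >/dev/null 2>&1; then
        $cmd
    else
        echo "(tool not installed on this machine)"
    fi
done
```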
Thank you for your assistance!!