Freezing/stutter of guest OS under high host IO

Blake.v

This seems to be a well-known issue, but I have yet to find a solution. When transferring files from the guest VM (Windows 10 with GPU passthrough) to the ZFS array, the Windows VM becomes nearly unusable. I am fairly new to Proxmox (I was running ESXi on this hardware just fine). The Windows guest had been running well for 2-3 weeks before I installed the RAID card and set up ZFS.

The Windows VM is running off the NVMe drive and is the only VM right now.

Hardware:
Ryzen 1700
32GB RAM
ASRock X270
1TB NVMe drive for Proxmox, VM storage, and SLOG
LSI 9205-8i with SAS expander backplane and 4x 4TB drives in RAIDZ1

Thanks
 
Part of the problem was self-inflicted: I forgot ZFS grabs half the total RAM by default. Lowering that to 8GB got me from total freezing to mild annoyance.
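In case it helps anyone else, this is roughly how I capped it; the file path is the standard Proxmox/Debian location and the byte value is just 8 GiB:

    # /etc/modprobe.d/zfs.conf
    # Cap the ZFS ARC at 8 GiB (value is in bytes: 8 * 1024^3)
    options zfs zfs_arc_max=8589934592

    # The option is read when the module loads; with ZFS on root,
    # refresh the initramfs and reboot for it to take effect
    update-initramfs -u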
The card is in IT mode.
I just updated to the latest firmware to see if it would help, but it doesn't seem to have made much of a difference.
Still seeing ~15% IO delay and inconsistent, slow transfers (bouncing between 10 and 200 MB/s).
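For what it's worth, this is roughly how I've been watching it while a transfer runs ("tank" here is just a placeholder for the pool name):

    # Per-vdev operations and bandwidth for the pool, refreshed every 5 seconds
    zpool iostat -v tank 5

    # Per-disk utilisation and wait times on the host (sysstat package)
    iostat -x 5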

This is just an old array of 5400rpm drives for bulk media storage, but I plan to pick up 12 7200rpm SAS drives for a much larger and faster array, and I expect this issue to get worse.
 
Part of the problem was self-inflicted: I forgot ZFS grabs half the total RAM by default. Lowering that to 8GB got me from total freezing to mild annoyance.

Unfortunately, this conclusion is wrong. The more RAM ZFS has, the more blocks are cached in the ARC and the lower the I/O delay you see on a mixed read/write system (as with any other filesystem). Another point is that the RAM the ARC uses is freed again if you need it for other things like VMs, and because of the first point, shrinking the ARC decreases the performance of the system. The workload on your system also has a huge impact on performance: if you have a lot of sync writes, your single-vdev storage cannot keep up. That problem, and only that problem, can be fixed by a dedicated SLOG device in ZFS.
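Roughly what checking the sync setting and adding a SLOG looks like; the pool, dataset, and device names below are placeholders, not your actual setup:

    # Check the sync property of the dataset (standard / always / disabled)
    zfs get sync tank/share

    # Add a dedicated SLOG device to an existing pool
    # (use a fast SSD with power-loss protection)
    zpool add tank log /dev/disk/by-id/nvme-EXAMPLE-part4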

4 drives in RAIDZ1 cannot be fast, and you cannot expect miracles. If you set up a new array, try to have as many vdevs as possible; the vdevs are the RAID0 layer, so multiple vdevs scale almost perfectly. Having ESXi on the hardware RAID is a much better option performance-wise, but you will not have the cool features ZFS offers. ZFS can be fast, but you need the hardware for it. In a direct comparison on the same disks, ZFS loses (performance-wise) to a good RAID controller with BBU and write cache.
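For example, with the 12 disks you plan to buy, striping several smaller RAIDZ1 vdevs gives you more vdevs to spread the IO across than one wide vdev; the sdX names below are placeholders (in practice, use /dev/disk/by-id paths):

    # 12 disks as three 4-disk RAIDZ1 vdevs striped together
    zpool create tank \
        raidz1 sda sdb sdc sdd \
        raidz1 sde sdf sdg sdh \
        raidz1 sdi sdj sdk sdl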

I'm also running a RAIDZ1 on 4 disks in my home setup and it works; it's not fast, but I can enjoy the features of ZFS. I also plan to upgrade to more vdevs to increase performance.
 
This is not an issue with ZFS. This is not a new array; it's mostly full of media.
The VM is NOT on the ZFS array, and the ZFS array is not subject to mixed read/write.

The VM is installed on the NVMe drive, and Proxmox is sharing the ZFS array via Samba.
If I transfer a bunch of files from the VM's C: or D: drive (a passed-through SSD) to the mapped network drive, I see huge IO delay and stuttering in the VM.
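For reference, the share itself is nothing exotic; it looks roughly like this (the dataset path and user are examples, not my exact config):

    # /etc/samba/smb.conf on the Proxmox host
    [media]
        path = /tank/media
        browseable = yes
        read only = no
        valid users = blake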
 
