I hadn't noticed this previously, as most of my VMs are fairly light on disk I/O, but after installing a Win10 VM for some game-streaming experiments I found it noticeably sluggish. I ran the same disk benchmarks on the hypervisor itself, in a Linux VM, and in the Win10 VM. The Linux VM showed about half the throughput and IOPS of the bare hypervisor, and the Win10 VM about a quarter.
The volume holding the VM disks is four SAS SSDs arranged as a ZFS stripe across two mirrors. I've tried the Windows VM with both VirtIO SCSI and VirtIO SCSI Single, with and without iothreads enabled. The Windows VM disk is formatted NTFS, and I've tried both 4k clusters (the Windows default) and 8k clusters (matching the zvol's 8k volblocksize). The Linux VM disk is ext4.
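For quick reference, the controller/iothread variants I tried look roughly like this in the VM config (the VM ID, storage name, and disk size here are placeholders, not my actual values; the full configs are in the pastebin below):

```
# illustrative snippet only — real configs linked below
scsihw: virtio-scsi-single
scsi0: local-zfs:vm-100-disk-0,iothread=1,discard=on,size=64G
```

The other variants swap `scsihw: virtio-scsi-pci` for the controller and drop `iothread=1` from the disk line.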
Benchmark results here: https://pastebin.com/DyfG2Sai
Disk and VM configs/properties here: https://pastebin.com/1kJRKbzE
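For anyone who wants to reproduce the comparison, a fio job along these lines run on the host and then inside each guest should show the same gap (this is a generic example, not my exact job; the actual parameters are with the results above — `filename` needs adjusting per environment, and the Windows guest would use `ioengine=windowsaio`):

```ini
; generic 4k random-read job, illustrative only
[randread-test]
filename=/path/to/testfile   ; placeholder path
size=4G
bs=4k
rw=randread
ioengine=libaio              ; windowsaio inside the Win10 guest
iodepth=32
direct=1
time_based=1
runtime=60
```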
I'm not really sure how to trace where the bottleneck lies between the VM's I/O path and the hypervisor. Are there any options I should be changing here, or other information I can gather to help track it down?
Thanks!