Inconsistent Windows Server 2019 Disk performance

im.thatoneguy

New Member
Oct 22, 2021
I have a ZFS pool of 8x NVMe drives, Kioxia CD6-R (sequential 6.2 / 2.3 GB/s read/write).

It's set up as a pool of 4x mirrors:

ZFlash:

mirror 1: nvme0|1
mirror 2: nvme2|3
mirror 3: nvme4|5
mirror 4: nvme7|8


If I install fio and run it on the host, I get the expected performance of about 20GB/s read and 8GB/s write, so the ZFS pool itself seems to be fine.
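For reference, the kind of host-side fio run I mean looks roughly like this (mountpoint, size and job count are illustrative, not the exact command I used):

# sequential read/write against the pool's mountpoint on the host
fio --name=seqread --directory=/ZFlash --rw=read --bs=1M --size=20G --numjobs=4 --iodepth=32 --ioengine=libaio --group_reporting
fio --name=seqwrite --directory=/ZFlash --rw=write --bs=1M --size=20G --numjobs=4 --iodepth=32 --ioengine=libaio --end_fsync=1 --group_reporting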

I created a Windows Server 2019 VM and tried both a VirtIO Block and a VirtIO SCSI disk. I set the cache to Write-back as per the Win2019 best practices and ran CrystalDiskMark with the NVMe/max-performance profile. Read: 18GB/s, Write: 8.1GB/s. A very small loss, but close enough for running through a VM.

Now I try a basic copy/paste operation from a RAMDisk, from another VirtIO disk, or from the RAID10 array. From every source it copies fine (about 2GB/s) for the first 10-15GB and then comes to a complete stop at 0KB/s. A few seconds later it wakes up and starts going again.

Ok, so maybe I should try No-Cache.

With No cache I get consistent behaviour, but writes run at about 3GB/s for a second or two before dropping to 800MB/s. I'm now only getting 1/10th of my potential speed, and not even enough to saturate a 10GbE link, let alone a 40GbE or 100GbE link.
I tried Write back (unsafe), Direct sync and Write through as well, and they're even slower across the board (around 600MB/s writes).

Is this a ZFS disk configuration issue? A Proxmox disk issue? A Windows VirtIO driver issue? Is it possible to get close to native ZFS speeds for a RAID10 pool inside Windows Server?
 
Hello,

In my personal experience, the best-performing configuration for a Win2k19 VM is the following (rough CLI equivalent below):
- VirtIO SCSI controller
- BUS Device: SCSI
- SSD emulation ON (not for performance, but for SSD lifetime)
- Write back cache (the plain one, not "unsafe") on the virtual disk
- RAW format for the virtual disk
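
Roughly the same thing from the CLI would look like this (VM ID 100, storage name "ZFlash" and disk name are just placeholders, adjust to your setup):

qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 ZFlash:vm-100-disk-0,cache=writeback,ssd=1

On a ZFS-backed storage the disk is a zvol, so it is raw format anyway.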

Best regards,
 
If your pool was created with an ashift of 12 (so a 4K block size per disk) and you have 4 mirrors striped together, you want a volblocksize of 16K (4 x 4K). Proxmox's default is 8K. Did you change that to 16K for better performance ("Datacenter -> Storage -> YourPool -> Edit -> Blocksize")? You then need to back up and restore all VMs so that every zvol gets destroyed and recreated, because the volblocksize can only be set when a zvol is created.
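For example from the CLI, assuming the Proxmox storage is also named "ZFlash" (names are just examples):

pvesm set ZFlash --blocksize 16k              # default volblocksize for newly created zvols on that storage
zfs get volblocksize ZFlash/vm-100-disk-0     # check what an existing zvol currently uses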
If you don't need access times, I would also set atime=off on your ZFS pool so that not every read operation causes an additional write.
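For example:

zfs set atime=off ZFlash
zfs get atime ZFlash    # verify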

And it's really hard not to just benchmark your RAM. If you use "cache mode = writeback" you are basically only measuring the speed of your RAM, and as soon as that cache is full and can't absorb any more, performance will drop. The same goes for every async read from a ZFS pool if you don't set "primarycache=metadata" before benchmarking. If you want to benchmark the drives and not the RAM, you should do sync writes and reads without any cache enabled. But that way you will see much lower numbers than advertised.
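
A rough sketch of what I mean, assuming the pool is mounted at /ZFlash (parameters are illustrative, and remember to set primarycache back afterwards):

zfs set primarycache=metadata ZFlash    # keep file data out of the ARC while benchmarking
fio --name=syncwrite --directory=/ZFlash --rw=write --bs=1M --size=10G --ioengine=sync --fsync=1 --numjobs=1 --group_reporting
zfs set primarycache=all ZFlash         # restore normal caching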