Huge IO performance degradation between Proxmox ZFS host and WS19 VM

Progratron

For more than a week I have been trying to determine the reason for the following IO performance degradation between the Proxmox host and my Windows Server 2019 VM(s).

I have to ask for your help guys because I've run out of ideas.

Environment data:
  • Single Proxmox host, no cluster, PVE 6.1-8 with ZFS
  • A few WS19 VMs all having this issue, very low load, SOHO-usage
  • ZFS sync=disabled, volblocksize for VM disks = 4k (see the sketch after this list)
  • VM has all the latest VirtIO drivers (0.1.173)
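
For reference, a minimal sketch of how these two ZFS settings can be inspected and applied from the host; the dataset and storage names below (rpool/data, the vm-210 zvol path) are assumptions and may differ on your system:

Bash:
# Hypothetical dataset/storage names -- check "zfs list" and /etc/pve/storage.cfg first.
zfs get sync,volblocksize rpool/data/vm-210-disk-0   # current values on one zvol
pvesm set local-zfs_4k --blocksize 4k                # volblocksize used for newly created VM disks on this storage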
I ran the IO tests on both the VM and the host with the following fio command:

Bash:
fio --filename=test --sync=1 --rw=$TYPE --bs=$BLOCKSIZE --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=1G --runtime=30
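
The $TYPE and $BLOCKSIZE placeholders suggest the cases in the results table below were driven by a small wrapper loop; a minimal sketch of such a loop (not the exact script from the post):

Bash:
# Hypothetical wrapper around the fio command above -- adjust the value lists as needed.
for TYPE in randread randwrite read write; do
    for BLOCKSIZE in 4k 64k 1M; do
        fio --filename=test --sync=1 --rw=$TYPE --bs=$BLOCKSIZE \
            --numjobs=1 --iodepth=4 --group_reporting \
            --name=test --filesize=1G --runtime=30
    done
done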

Results:

Test                  Host              VM (virtio scsi single, cache=none, discard)
4K random read        573,00 MiB/s      62,50 MiB/s
64K random read       1.508,00 MiB/s    831,00 MiB/s
1M random read        1.796,00 MiB/s    1.730,00 MiB/s
4K random write       131,00 MiB/s      14,10 MiB/s
64K random write      596,00 MiB/s      62,50 MiB/s
1M random write       1.081,00 MiB/s    75,40 MiB/s
4K sequential read    793,00 MiB/s      56,20 MiB/s
64K sequential read   1.631,00 MiB/s    547,00 MiB/s
1M sequential read    1.542,00 MiB/s    2.036,00 MiB/s (?)
4K sequential write   240,00 MiB/s      3,42 MiB/s
64K sequential write  698,00 MiB/s      43,80 MiB/s
1M sequential write   1.088,00 MiB/s    223,00 MiB/s

As you can see, almost all IO is drastically affected; the only exception is 1M reads. Write performance is horrible.

Here it is graphically:

Writes: [chart: 1586191497068.png]
Reads: [chart: 1586191434908.png]

What I have tried so far: different volblocksizes on ZFS, a different ZFS sync setting (left it on disabled, since the host is in a DC), virtio-blk vs. virtio scsi single (not much difference), and writeback cache (it became even worse).
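
In case it helps with reproducing, the cache mode can be switched per disk from the CLI; a minimal sketch, assuming the VMID (210) and the scsi0 volume from the config posted below:

Bash:
# Not from the original post -- just an example of toggling the cache mode on an existing disk.
qm set 210 --scsi0 local-zfs_64k:vm-210-disk-1,cache=writeback,discard=on,iothread=1
qm set 210 --scsi0 local-zfs_64k:vm-210-disk-1,cache=none,discard=on,iothread=1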

Any suggestions on what I am missing?
 
What is the config of the VM? qm config <vmid>
different ZFS sync setting (left it on disabled
If you like your data, I would not do that :/
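
If you do want to go back to the default, it is a one-liner; a sketch assuming a dataset layout like rpool/data (adjust to the actual pool):

Bash:
# Hypothetical dataset path -- check with "zfs list" first.
zfs set sync=standard rpool/data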
 
What is the config of the VM? qm config <vmid>

Code:
agent: 1
balloon: 8192
bootdisk: virtio0
cores: 2
cpu: host,flags=+pcid
ide2: local:iso/virtio-win-0.1.173-9.iso,media=cdrom,size=384670K
memory: 16384
name: test-ru
net0: virtio=0A:63:FA:82:9F:91,bridge=vmbr0
net1: virtio=3E:44:3D:88:E4:54,bridge=vmbr0
numa: 1
ostype: win10
scsi0: local-zfs_64k:vm-210-disk-1,discard=on,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=3c47d662-6ae3-48b9-b665-af53544b94b6
sockets: 2
vga: std,memory=512
virtio0: local-zfs_4k:vm-210-disk-0,discard=on,size=96G
virtio1: local-zfs_4k:vm-210-disk-1,discard=on,iothread=1,size=32G
vmgenid: bd48ba99-ca3e-419c-9799-ed01d93b9e01

The different disks you see there are from my tests.

If you like your data I would not do that :/

Well, I thought all I am risking is 5 seconds of data... I've even read somewhere here that when a server is in a DC and has a redundant power supply, it should be fine. Am I missing something?
 
A few WS19 VMs all having this issue, very low load, SOHO-usage

Hi,

Can you define what "a few WS19 VMs" means (how many, and exactly which OS), and what is running in these VMs (maybe a database, a special application, whatever)?
Under which conditions do you run these fio tests?
How did you install WS19 (= Windows Server 2019)? Did you format your virtual HD IN ADVANCE or not?
Also, it would be interesting to show (a quick way to collect these follows the list):

1. server hardware (CPU/RAM/HDDs/NVMe/...)
2. zpool status -v
3. zpool history|grep create
4. zpool history|grep set
5. arc_summary
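
A minimal way to collect items 2-5 in one go for posting (the output file name is just an example):

Bash:
# Collects the requested output into a single file.
{
  zpool status -v
  zpool history | grep create
  zpool history | grep set
  arc_summary
} > zfs-report.txt 2>&1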


Good luck / Bafta !
 
