Windows Guest - Disk I/O and Throughput Performance

jdmac87

New Member
Jan 17, 2024
Hi All,

I'm trying to identify the culprit behind a noticeable performance degradation that affects Windows guests specifically.

I'd appreciate any help or advice; the details are written up below.

Background

I usually run Linux guests exclusively on my Proxmox cluster, and these achieve (fio-tested) in the region of 200 MB/s / 50k IOPS.
For my use this is perfectly acceptable, and it's with the guests using qcow2 images stored on shared NFS (backed by TrueNAS Scale).
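For reference, those numbers come from fio runs roughly of this shape inside the Linux guests (the file path and exact job parameters here are illustrative, not my exact job):

Code:
# 4k random read/write, direct I/O, 60 s, queue depth 32 - path is just an example
fio --name=randrw --filename=/tmp/fio-test --size=4G \
  --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 \
  --direct=1 --runtime=60 --time_based --group_reporting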

Symptoms / Issue

I recently needed to install a Windows 10 guest and did so per https://pve.proxmox.com/wiki/Windows_10_guest_best_practices - that documentation states it was written around PVE 6.x, so it may be outdated(?)

The QEMU guest agent is installed, as are the appropriate VirtIO SCSI and network drivers (no ballooning).

The install went successfully and all drivers were recognised; it was just a little slow.

Once into the system, I noticed that its disk performance isn't comparable to my Linux guest VMs.

At peak (and brief as it is), reads achieve 100 MB/s and writes 70 MB/s.
Most importantly, Task Manager shows the disk at 100% utilisation at that point, and you can "feel" the OS pausing while it waits for I/O.
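Those figures are read off Task Manager; if harder numbers would help, I can repeat something like the fio job above inside Windows with diskspd (flags from memory, path illustrative):

Code:
:: 4K random, 30% writes, QD32 x 4 threads, 60 s, caching disabled, 1 GiB test file
diskspd.exe -c1G -b4K -d60 -t4 -o32 -r -w30 -Sh C:\temp\testfile.dat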

There are no network issues or bottlenecks; the network is hardly being pressured at all.

Packet captures between the host and the NFS storage look good, with jumbo frames being used correctly.
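("Proven working" here includes un-fragmentable jumbo-sized pings going through from each PVE host to the TrueNAS box; the IP below is a placeholder.)

Code:
# 8972 bytes payload = 9000 MTU minus 28 bytes of IP/ICMP headers; -M do forbids fragmentation
ping -M do -s 8972 -c 4 192.168.20.10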

Attempted "Solutions" (So far)

- I've played around with cache modes: writeback per the best-practices guide, and the default (no cache).
- SSD emulation toggled on/off, preferring on so the Windows guest recognises the disk as SSD-class.
- Changed Async IO from the default to native (applied via qm set, roughly as shown below).
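The disk option changes were made via the GUI or with qm set along these lines (VMID 102 matches the config below; the values shown are just one of the combinations tried):

Code:
# example: no cache, native AIO, SSD emulation on - applied to the Windows VM's scsi0
qm set 102 --scsi0 shared:102/vm-102-disk-0.qcow2,cache=none,aio=native,discard=on,iothread=1,ssd=1,size=64G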

Environment

3 x PVE hosts, each with 2.5 GbE NICs.
A TrueNAS Scale server provides NFS access to a raidz1 pool of consumer-grade SSDs and has 10 GbE network connectivity.
Jumbo frames are enabled throughout and proven working.
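For completeness, the NFS storage is defined in /etc/pve/storage.cfg roughly as follows (export path, server IP and options are placeholders from memory):

Code:
nfs: shared
        export /mnt/tank/proxmox
        path /mnt/pve/shared
        server 192.168.20.10
        content images
        options vers=4.2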

Command Output Capture


qm config from Windows VM

Code:
balloon: 0
boot: order=scsi0;ide0;net0
cores: 4
cpu: x86-64-v3
ide0: none,media=cdrom
machine: pc-i440fx-8.1
memory: 8192
meta: creation-qemu=8.1.5,ctime=1712618366
name: <REDACTED>
net0: virtio=<REDACTED>,bridge=VLAN0020,firewall=1
numa: 0
ostype: win10
scsi0: shared:102/vm-102-disk-0.qcow2,cache=writeback,discard=on,iothread=1,size=64G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=2f35c84d-b0af-4a8f-a977-3d100f6eed79
sockets: 1
vmgenid: bd5e0b70-464e-4879-8f70-939bd73f633e


qm config from Linux VM

Code:
boot: order=ide2;scsi0
cores: 4
cpu: x86-64-v3
ide2: none,media=cdrom
memory: 8192
meta: creation-qemu=8.0.2,ctime=1688530027
name: <REDACTED>
net0: virtio=<REDACTED>,bridge=vmbr0,firewall=1
net1: virtio=<REDACTED>,bridge=VLAN0005,firewall=1
net2: virtio=<REDACTED>,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: shared:101/vm-101-disk-0.qcow2,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=10c28e6e-93a4-410e-a5a9-954de815375d
sockets: 1
startup: order=1
vmgenid: 77b9d632-a42a-4208-9e7b-a9312de0703c

pveversion

Code:
pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.13-1-pve)
 
