[SOLVED] Win Server 2k25 - QEMU Disk ultra slow

JanMrlth

New Member
Nov 17, 2025
Hi,

I have installed Windows Server 2025, but the I/O is miserably slow. I read that I might have to update the disk drive's driver to the VirtIO one, but no VirtIO driver seems to work.
The host should not be the issue.

'Update Driver' for the disk tells me I already have the best driver selected.

Is there a way to fix this without a re-install?

Thx!!
Jan


[screenshots attached]
 
It's an HPE SK Hynix 800GB NVMe SSD. This is a test setup: I created a ZFS pool with only that disk (a second disk will be added later as a mirror), then added a SLOG on a different partition of the same disk. I am far, far away from the speeds that disk is capable of.
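(Roughly, that layout corresponds to something like the following; the pool name and partition paths are placeholders for whatever was actually used:)
Code:
# single-disk pool, plus a SLOG on a second partition of the same NVMe device
zpool create tank /dev/nvme0n1p1
zpool add tank log /dev/nvme0n1p2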

Am I right that the Windows driver for the disk is the problem here? From what I understand, Windows is not using the VirtIO driver at the moment.

[screenshots attached]
 
Am I right that the Windows driver for the disk is the problem here?
No, the VirtIO SCSI driver is correctly installed for the storage controller, as shown in your screenshot.
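You can also confirm it from inside the guest; a quick PowerShell check (it should list the Red Hat VirtIO SCSI controller together with its driver version):
Code:
# list storage controller drivers and their providers
Get-CimInstance Win32_PnPSignedDriver |
    Where-Object { $_.DeviceClass -eq 'SCSIADAPTER' } |
    Select-Object DeviceName, DriverProviderName, DriverVersion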

What do you want to test: IOPS or bandwidth?
Each requires different settings, mainly bs=4k for IOPS and bs=1M for bandwidth.
BTW, a single VM won't saturate an NVMe drive on its own; you need to test multiple VMs concurrently to max it out.
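For example, on the host (a sketch; the test file path is a placeholder, and note that ZFS may still serve reads from the ARC despite --direct=1):
Code:
# 4k random read at queue depth 32: measures IOPS
fio --name=iops --filename=/tank/testfile --size=4G --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --runtime=30 --time_based

# 1M sequential read: measures bandwidth
fio --name=bw --filename=/tank/testfile --size=4G --rw=read --bs=1M --iodepth=8 --ioengine=libaio --direct=1 --runtime=30 --time_based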
 
A single-disk pool should be fine.
I just tried the same thing on my node here with a single-disk pool on a 1.92 TB Intel enterprise SATA SSD (also Win 2025):

[screenshot: winsat results]

Those winsat results seem bogus, because I doubt my SATA SSD can write 1301 MB/s.

The interesting thing is that I don't have any caching enabled in the VM settings, and that for me it says format=raw.

Here is my config:

[screenshot: VM config]

I am using the 266 VirtIO drivers, btw.

Your NVMe disk should definitely outperform my SATA drive, especially if it's the only VM on that drive (for me it isn't).
 
It should have PLP; it's an 800 GB drive, so mixed-use enterprise.
This thing should perform well, as you can see from the fio results he posted directly from the pool.

It could be the settings for the virtual disk.
Maybe try without writeback cache and see if that makes any difference?
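If you prefer the CLI over the GUI for that, something like this should do it (a sketch, assuming VM ID 100 and a disk scsi0 on a storage named local-zfs; qm set re-specifies the whole drive line, so repeat any other options you already use):
Code:
# switch the virtual disk from writeback to no host-side caching
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none,iothread=1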
 
I understand that the setup on the host side is not optimal.

My goal is to fix the Windows driver side first. I can move the VM onto another disk later.

But the speed is far from what it should be, even with ZFS slowing us down.



Code:
root@pve:~# smartctl -a /dev/nvme0n1
smartctl 7.4 2024-10-15 r5620 [x86_64-linux-6.14.8-2-pve] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number: MK000800KWWFE
Serial Number: EJ03N4235I0504J2N
Firmware Version: HPK1
PCI Vendor ID: 0x1c5c
PCI Vendor Subsystem ID: 0x1590
IEEE OUI Identifier: 0xace42e
Total NVM Capacity: 800,166,076,416 [800 GB]
Unallocated NVM Capacity: 0
Controller ID: 0
NVMe Version: 1.3
Number of Namespaces: 16
Namespace 1 Size/Capacity: 800,166,076,416 [800 GB]
Namespace 1 Formatted LBA Size: 512
Namespace 1 IEEE EUI-64: ace42e 000560953b
Local Time is: Mon Nov 17 15:40:28 2025 CET
Firmware Updates (0x14): 2 Slots, no Reset required
Optional Admin Commands (0x005e): Format Frmw_DL NS_Mngmt Self_Test MI_Snd/Rec
Optional NVM Commands (0x005f): Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x0e): Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
Maximum Data Transfer Size: 64 Pages
Warning Comp. Temp. Threshold: 65 Celsius
Critical Comp. Temp. Threshold: 68 Celsius
Namespace 1 Features (0x04): Dea/Unw_Error

Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 11.00W 0.00W - 0 0 0 0 30000 30000
1 + 11.00W 0.00W - 1 1 1 1 30000 30000
2 + 9.00W 0.00W - 2 2 2 2 30000 30000
3 + 9.00W 0.00W - 2 2 2 2 30000 30000
4 - 6.00W - - 3 3 3 3 30000 30000

Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 - 512 0 2
1 - 4096 0 0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 35 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 0%
Data Units Read: 1,056,885 [541 GB]
Data Units Written: 1,966,346 [1.00 TB]
Host Read Commands: 10,896,589
Host Write Commands: 22,115,094
Controller Busy Time: 43
Power Cycles: 1
Power On Hours: 147
Unsafe Shutdowns: 0
Media and Data Integrity Errors: 0
Error Information Log Entries: 4
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 2: 43 Celsius
Temperature Sensor 4: 59 Celsius

Error Information (NVMe Log 0x01, 16 of 256 entries)
Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message
0 4 0 0xf010 0x4004 0x004 0 1 - Invalid Field in Command
1 3 0 0xa016 0x4004 0x004 0 1 - Invalid Field in Command
2 2 0 0xd011 0x4004 0x004 0 1 - Invalid Field in Command
3 1 0 0x001c 0x4004 0x028 0 0 - Invalid Field in Command

Read Self-test Log failed: Invalid Field in Command (0x2002)
 
I reformatted the disk to LVM-Thin. Basically no improvement on the server. Any ideas?

I should now be getting almost host-level performance.

[screenshot attached]
 
Hi gabriel, thank you for your ideas!


The server is an HPE DL380 Gen 10.

In my Debian VM I get:
  • IOPS: ~162,000
  • Bandwidth: ~633 MiB/s
  • Latency: avg ~4.5 µs
  • Device util: ~90%

The Windows VM on the same disk with
Code:
diskspd.exe -b4K -r -w0 -o32 -t4 -d30 C:\testfile.dat
  • IOPS: ~8,400
  • Bandwidth: ~33 MiB/s
  • CPU usage: 4 vCPU cores saturated at ~100% (kernel mode)

So it must be something within Windows Server 2025?
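For reference, a roughly equivalent fio invocation to that diskspd run (4k random read, 32 outstanding I/Os, 4 workers, 30 seconds; the file path is a placeholder) would be:
Code:
fio --name=randread --filename=/root/testfile.dat --size=4G --rw=randread --bs=4k --iodepth=32 --numjobs=4 --ioengine=libaio --direct=1 --runtime=30 --time_based --group_reporting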
 
You are using "host" as the CPU type for your VM? On some CPUs this slows down Windows VMs quite a lot due to VBS and other things. Could you try x86-64-v4?
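For example (VM ID 100 is a placeholder; the change takes effect after a full stop and start of the VM):
Code:
qm set 100 --cpu x86-64-v4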
 
After installing a 2k22 server, I checked the differences between the two systems.

Switching the processor type from 'host' to 'x86-64-v2-AES' solved the I/O issues for the 2k25 server. The system now runs as it is supposed to.

[screenshot attached]
 
Depending on your actual processor, you should set it to x86-64-v3 or x86-64-v4.
That exposes more processor flags/features to the OS.

I'm usually running them with v3, as my processors don't support AVX-512.

The Xeon Gold 5118 supports AVX-512, so you could go with v4.
Don't forget to set the +aes flag.
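Something along these lines, assuming VM ID 100 (multiple flags would be separated by semicolons):
Code:
qm set 100 --cpu x86-64-v3,flags=+aes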
 