Very slow NVMe speeds on Windows Server 2022 VM

JensJN

Jan 6, 2025
Hi,

I have set up a Windows Server 2022 VM and I am getting very slow NVMe speeds in it.

As shown below, I have tested one of my NVMe drives directly on the Proxmox host, in an Ubuntu Desktop VM with ZFS, in the Windows VM using PCIe passthrough, and in the Windows VM using a virtual disk on ZFS storage from Proxmox.

There seems to be some kind of bottleneck that slows down disk performance in the Windows VM significantly, and it happens both with PCIe passthrough (I passed the controller through via VFIO) and with a virtual disk.

The benchmarks are done using fio. The tested NVMe drive is a Kingston NV2 1 TB; I have also tested a Samsung 990 PRO with similarly bad results in the Windows VM.
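For reference, the fio jobs are along these lines; the exact parameters of my runs may differ slightly, this is just the shape of the test:

Code:
# 4k random read benchmark against the raw NVMe device on the Proxmox host
# (device path and job parameters are examples, not my exact script)
fio --name=randread --filename=/dev/nvme0n1 --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting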

Do you guys have any idea what could be causing this? Or how I can identify the issue?

[Screenshot: Proxmox VM hardware configuration]
[Screenshot: Proxmox VM options]

I just tried setting NUMA to 1 and assigning 8 cores to the VM, but unfortunately that did not help. I have been trying a bunch of different things but I'm really hitting a wall here.
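For reference, that change was roughly the following (the VM ID here is just an example):

Code:
# enable NUMA and assign 8 cores to the VM (100 is an example VM ID)
qm set 100 --numa 1 --cores 8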
 
I also just tried setting up a new Windows VM, but with Windows 11 instead of Windows Server 2022, and I still get the very slow NVMe speeds. So it seems to be something Windows-related, since it works fine in Ubuntu Desktop and directly in the Proxmox shell.
 
Set the VM disk "Cache" to Default (None); the write cache can be the bottleneck, especially on consumer SSDs, and even more so with a QLC SSD like yours.
Setting the VM disk "Async IO" to native should help too.
ZFS will also wear out a QLC SSD quickly, especially with SQL workloads, because of write amplification.
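For example, the disk line in the VM config should end up looking something like this (VM ID, storage name and size are placeholders):

Code:
# example disk entry in /etc/pve/qemu-server/<vmid>.conf
# cache=none = no host write cache, aio=native = native Linux AIO
scsi0: local-zfs:vm-100-disk-0,aio=native,cache=none,size=100G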
 
Thanks for your input! I set the VM disk cache to "Default (None)" and Async IO to "Native". Unfortunately it did not fix the problem; I still get much lower performance than expected. See below.
Also, I am planning to eventually (if I can get this fixed, lol) use direct PCIe passthrough for the drive. Right now I am considering dropping the Windows Server 2022 setup altogether and just going with Ubuntu, as that does not seem to have the same bottleneck. It just annoys me that I can't figure out what the bottleneck is.


[Screenshot: fio benchmark results after the changes]
 
I think both my last screenshot and the benchmark results in the OP show IOPS and bandwidth. Unfortunately they are both underperforming. I am using 4k blocks for all tests.
 
What about bare-metal Windows results?
Try LVM-thin; it will be faster than ZFS, as ZFS adds a lot of overhead (see the example below).
Also, the IOPS in your first results seem to be missing the "k" unit.
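To test LVM-thin, you can simply move the disk to an LVM-thin storage, something like this (VM ID, disk slot and storage name are examples; on older PVE versions the subcommand is qm move_disk):

Code:
# move the VM disk from the ZFS pool to an LVM-thin storage and delete the old copy
qm disk move 100 scsi0 local-lvm --delete 1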
I also tried the same drive in my other, regular Windows PC and it works great, with several times better performance. And yes, the IOPS are in thousands.

Also, I tried PCIe passthrough; I guess that should be even faster than LVM-thin?
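For reference, the passthrough is configured roughly like this (the PCI address and VM ID are example values, not my actual ones):

Code:
# pass the NVMe controller through to the VM as a PCIe device (needs the q35 machine type)
qm set 100 --hostpci0 0000:01:00.0,pcie=1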
 
