Proxmox VE 8.4 Extremely Poor Windows 11 VM Network Performance

anon314159

New Member
Oct 27, 2025
Proxmox Team,

Multiple Windows 11 VMs hosted on top of PVE 8.4 with kernel 6.8.12-9 have been experiencing horrendous receive-side network performance (on the order of hundreds of kilobytes per second), regardless of the emulated NIC type (Realtek, E1000, VirtIO, etc.) or CPU mode (host). This appears to be driver-related: no matter which drivers or combination of VM tap settings I try (LRO, checksums, offload, RSS, RSC, MTU, etc.), I cannot achieve usable receive-side data rates, while non-Windows virtual machines are not affected. Sending data works perfectly fine over SMB, NFS, and iperf3, but for some odd reason receive throughput never exceeds a few megabytes per second regardless of NIC type, driver, or settings.
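For reference, the kind of host-side tuning I have been attempting looks roughly like the sketch below (the tap name tap100i0 is just an example and depends on the VMID and NIC index); none of these combinations improved the receive rate.

Proxmox host
Code:
# Inspect and toggle offloads on the VM's tap interface
# (tap100i0 is an example name; adjust to your VMID/NIC index)
ethtool -k tap100i0                     # show current offload settings
ethtool -K tap100i0 lro off gro off     # e.g. disable LRO/GRO
ethtool -K tap100i0 tx off rx off       # e.g. disable TX/RX checksum offload
ip link show tap100i0                   # confirm the MTU on the tap device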

My hosts' networking consists of dual 100GbE Broadcom controllers (VM network and Ceph storage backend). Simple iperf3 transmission tests between the physical hosts yield normal results within performance expectations, and other Linux-based VMs using the VirtIO network interface perform fine.
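For what it's worth, the direction-specific iperf3 checks look roughly like this (the server address is a placeholder); host-to-host is fine in both directions, and the Windows 11 guests only collapse in the receive direction (-R).

Code:
# on the remote end
iperf3 -s
# from the Windows 11 guest
iperf3 -c <server ip>        # guest sends: full speed
iperf3 -c <server ip> -R     # guest receives: a few MB/s at best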

I could not find any documentation, forum threads, or articles relating to poor network performance with Windows-based VMs on Proxmox VE. Any additional troubleshooting steps or information you can provide would be greatly appreciated. Thanks again!
 
Not sure if this helps, but you could try enabling multiqueue on your VM's NIC (advanced options).
Use VirtIO as the model and set the queue count higher, up to your VM's vCPU count.

By default the VirtIO NIC (on Windows?) is single-threaded. Under normal circumstances that's not an issue, but with 100G you may have to add more queues/threads to achieve higher bandwidth/throughput. I'm not sure whether this feature is stable or experimental.
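On the CLI that would be roughly the following (the VM ID, MAC, bridge, and queue count are placeholders; queues generally shouldn't exceed the vCPU count):

Code:
# example only - enable 8 virtio queues on net0
qm set <vmid> --net0 virtio=<MAC address>,bridge=vmbr0,queues=8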

Otherwise I would (re)check the actual usable MTU in your network environment.

Windows
Code:
ping <ip address> -f -l 1500
Reduce or increase the payload size in the ping command by 2 until the packets no longer fragment.
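With a standard 1500-byte MTU the largest payload that should pass unfragmented is 1472 bytes (1500 minus a 20-byte IP header and an 8-byte ICMP header), so for example:

Code:
ping <ip address> -f -l 1472
If that succeeds but -l 1473 reports that the packet needs to be fragmented, the path MTU is a clean 1500.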
 
Thanks for the information and advice. I tried your recommendations and the MTU appears correct (1500 minus the IP and ICMP header overhead gives the expected value of 1472), and ICMP requests send and receive correctly with the default MTU settings.

One interesting thing to note: the exact same problem manifests on Windows Server 2019, Windows Server 2022, and Windows 10 VMs that are missing the VirtIO guest drivers, and once the correct driver (netkvm.sys) is installed, the virtual NICs behave properly and I can achieve the expected throughput/bandwidth. However, the same cannot be said for Windows 11 VMs, and I suspect something is up with the NDIS version, driver, and networking stack. My suspicion is that Microsoft changed the RSS/RSC/LRO behavior internal to NDIS, and the network stack now expects certain timing that is not readily available or enabled within Proxmox VE 8.4.
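In case it helps anyone else hitting the same wall, this is roughly how I have been checking and toggling RSC/RSS inside the Windows 11 guest (the adapter name "Ethernet" is just an example); so far no combination has made a difference.

Windows 11 guest (elevated PowerShell)
Code:
Get-NetAdapterRsc -Name "Ethernet"                # current RSC state (IPv4/IPv6)
Get-NetAdapterRss -Name "Ethernet"                # current RSS state
Disable-NetAdapterRsc -Name "Ethernet"            # e.g. turn RSC off
Disable-NetAdapterRss -Name "Ethernet"            # e.g. turn RSS off
Get-NetAdapterAdvancedProperty -Name "Ethernet"   # driver-level offload properties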
 