Proxmox Team,
Multiple Windows 11 VMs hosted on PVE 8.4 (kernel 6.8.12-9) have been experiencing horrendous receive-side network performance (on the order of hundreds of kilobytes per second) regardless of the virtual NIC type (Realtek, E1000, VirtIO, etc.) or CPU mode (including host). This appears to be driver related: no matter which drivers or combination of VM tap settings I try (LRO, checksums, offload, RSS, RSC, MTU, etc.), I cannot achieve usable receive-side data rates, while non-Windows virtual machines are unaffected. Sending data works perfectly fine over SMB, NFS, and iperf3, but for some odd reason receive throughput never peaks above a few megabytes per second regardless of NIC type, driver, or settings.
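For reference, this is the kind of host-side tap tweaking I've been attempting (the tap name `tap101i0` is just an example for VM ID 101, net0; adjust for the actual VM):

```shell
# Find the tap interface backing the VM's NIC:
ip -br link | grep tap

TAP=tap101i0   # example name: VM 101, first NIC

# Inspect current offload state on the tap device:
ethtool -k "$TAP" | grep -E 'segmentation|offload|checksum'

# Experimentally disable host-side offloads on the tap:
ethtool -K "$TAP" tso off gso off gro off tx off rx off
```

None of these combinations made a measurable difference on the receive side.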
My hosts' networking consists of dual 100GbE Broadcom controllers (one for the VM network, one for the Ceph storage backend). Simple iperf3 transmission tests between the physical hosts yield results within performance expectations, and other Linux-based VMs perform fine using the VirtIO network interface.
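For completeness, this is how I've been isolating the two traffic directions with iperf3 (guest IP is a placeholder; the Windows guest runs `iperf3 -s`):

```shell
# From the PVE host, push data TO the guest (exercises the guest's
# receive path -- this is the direction that collapses):
iperf3 -c <guest-ip> -t 30 -P 4

# Reverse mode: the guest sends TO the host (guest transmit path --
# this direction performs normally):
iperf3 -c <guest-ip> -t 30 -P 4 -R
```

Only the first direction (guest receiving) shows the degraded throughput.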
I could not find any documentation, forum threads, or articles relating to poor network performance with Windows-based VMs on Proxmox VE. Any additional troubleshooting steps or information you can provide would be greatly appreciated, thanks again!