Windows Server 2016 with 10 Gbit networking only running at 1 Gbit

Hi,

I recently migrated a Windows Server 2016 VM from VMware to Proxmox VE 8.2.7 using the import tool. I've installed the VirtIO drivers, and everything seems to be working fine, except for one issue: the VM is only running at 1 Gbps, despite being connected to a 10 Gbps network card. I'm using a VirtIO network card with a VLAN tag configured.
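For reference, here is a rough sketch of how the relevant host-side pieces can be checked (VM ID 100, bridge vmbr1, and NIC ens5f0 are taken from the config and ethtool output further down; adjust these to your own setup):

qm config 100 | grep net0                 # the VM's NIC model, bridge and VLAN tag
ip -d link show vmbr1 | grep -i vlan      # whether the bridge is VLAN-aware
ethtool ens5f0 | grep -E 'Speed|Duplex'   # physical link speed of the uplink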

To troubleshoot, I created a Debian container on the same node and another on a second node connected to the same switch, both at 10 Gbps. I installed iperf on both containers and on the Windows Server VM to test the connection. However, the results showed no improvement. I also tried using an older version of the VirtIO driver, but that didn't solve the problem. Additionally, I cleared the Windows Server's DNS cache with ipconfig /flushdns and reset the Winsock stack with netsh winsock reset, but the issue persisted.
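For completeness, this is roughly how I ran the tests (a sketch assuming iperf3; substitute the container's actual IP address):

# On the Debian CT (server side):
apt install iperf3
iperf3 -s

# On the Windows Server VM (client side):
iperf3.exe -c <CT-IP> -t 10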

One strange thing I encountered was when I switched to the VMware vmxnet3 card and then back to VirtIO. After resetting the VirtIO card, I received the following error, and the VM's network card stopped functioning:

Parameter verification failed. (400)
net0: hotplug problem - VM 100 qmp command 'netdev_add' failed - vhost-net requested but could not be initialized


After rebooting the VM, the VirtIO card started working again.
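For reference, the NIC model swap is roughly equivalent to the following on the host (a sketch; <MAC-ADDRESS> is a placeholder, VM ID and VLAN tag as in my config below):

qm set 100 --net0 vmxnet3=<MAC-ADDRESS>,bridge=vmbr1,tag=2
# ...and back again:
qm set 100 --net0 virtio=<MAC-ADDRESS>,bridge=vmbr1,tag=2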


These are the results of the test between the two CTs:
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.10 GBytes 9.41 Gbits/sec 78 1.10 MBytes
[ 5] 1.00-2.00 sec 1.09 GBytes 9.38 Gbits/sec 41 1.42 MBytes
[ 5] 2.00-3.00 sec 1.09 GBytes 9.40 Gbits/sec 54 1.44 MBytes
[ 5] 3.00-4.00 sec 1.09 GBytes 9.39 Gbits/sec 0 1.45 MBytes
[ 5] 4.00-5.00 sec 1.09 GBytes 9.39 Gbits/sec 42 1.48 MBytes
[ 5] 5.00-6.00 sec 1.09 GBytes 9.38 Gbits/sec 32 1.50 MBytes
[ 5] 6.00-7.00 sec 1.09 GBytes 9.40 Gbits/sec 0 1.51 MBytes
[ 5] 7.00-8.00 sec 1.09 GBytes 9.38 Gbits/sec 164 1.51 MBytes
[ 5] 8.00-9.00 sec 1.09 GBytes 9.40 Gbits/sec 0 1.51 MBytes
[ 5] 9.00-10.00 sec 1.09 GBytes 9.38 Gbits/sec 8 1.52 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.9 GBytes 9.39 Gbits/sec 419 sender
[ 5] 0.00-10.00 sec 10.9 GBytes 9.39 Gbits/sec receiver

These are the results of the test between the VM and the CT on the same node:
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.02 sec 105 MBytes 864 Mbits/sec
[ 5] 1.02-2.00 sec 93.6 MBytes 804 Mbits/sec
[ 5] 2.00-3.02 sec 99.5 MBytes 822 Mbits/sec
[ 5] 3.02-4.00 sec 94.4 MBytes 804 Mbits/sec
[ 5] 4.00-5.02 sec 98.1 MBytes 811 Mbits/sec
[ 5] 5.02-6.02 sec 96.0 MBytes 805 Mbits/sec
[ 5] 6.02-7.02 sec 96.1 MBytes 806 Mbits/sec
[ 5] 7.02-8.02 sec 96.0 MBytes 805 Mbits/sec
[ 5] 8.02-9.02 sec 92.6 MBytes 777 Mbits/sec
[ 5] 9.02-10.02 sec 95.9 MBytes 804 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.02 sec 968 MBytes 810 Mbits/sec sender
[ 5] 0.00-10.07 sec 967 MBytes 806 Mbits/sec receiver

And this is the output of ethtool for the host NIC (ethtool ens5f0):

Settings for ens5f0:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Auto-negotiation: off
Port: FIBRE
PHYAD: 0
Transceiver: internal
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link

Thanks in advance for any help!
 
Here's the VM config:

agent: 1
bios: ovmf
boot: order=sata1
cores: 35
cpu: x86-64-v2
efidisk0: vm-ssd:vm-100-disk-0,size=1M
machine: pc-i440fx-9.0
memory: 81920
meta: creation-qemu=9.0.2,ctime=1728077768
name: mwserver2016
net0: virtio=MAC-ADDRESS,bridge=vmbr1,tag=2
numa: 0
ostype: win10
parent: Pre-OldGuest
sata0: local:iso/virtio-win-0.1.225.iso,media=cdrom,size=519590K (initially, the latest VirtIO drivers were installed)
sata1: vm-ssd:vm-100-disk-1,size=1782580M
scsihw: virtio-scsi-single
smbios1: uuid=ID
sockets: 2
vmgenid: ID
 
A 35-core / 2-socket VM with this limited CPU type is very strange.
Is it possible to configure the VM with a normal CPU count (4-8 cores), ideally with CPU type host? Then test again.
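Something like the following should do it from the host (a sketch; the VM has to be shut down for the change to apply, and VM ID 100 is taken from the config above):

qm set 100 --sockets 1 --cores 8 --cpu host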
 
I need to plan this change carefully because the server is in production, and changing the hardware may cause issues with some software licenses.
I think I can do this test this weekend.
Thanks for the advice!
 
