Really slow NIC speeds

cdsJerry

I have some Windows Server 2019 VMs built on Proxmox 6.1-3. The NICs in them are configured as VirtIO (paravirtualized) per the recommendations, and the host is connected to a 1 Gbit router. However, the speed on the NIC is really slow: it varies from 0 to 5 MB/s, most of the time sitting around 1.5-2 MB/s. Why so slow? PVE shows a delay of only 0.15%, and there's plenty of room on the network for faster traffic.

What needs to be done to get a reasonable speed out of this VM?
 
I already installed them (the VirtIO drivers).

Device Manager confirms the Network adapters are Red Hat VirtIO Ethernet Adapters.

So far it's taken almost 2 hours to copy 24GB over the NIC... and it's still not finished. It says there's another 5GB to go.
 
Based on almost 2 hours to copy 24GB with another 5GB to go, your effective transfer speed is (more or less) 20 Mbps.
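(Rough math: roughly 19 of the 24 GB made it across in those ~2 hours, so 19 GB × 8 ≈ 152 Gbit over ~7200 s, which is roughly 21 Mbit/s or about 2.5 MB/s, consistent with the 1.5-2 MB/s you're seeing in the VM.)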

- Try this; I have some experience with Windows Server and apply this by default in every install I do:

https://support.microsoft.com/en-us...-chimney-offload-receive-side-scaling-and-net

In Windows Server command prompt:
netsh int tcp set global chimney=disabled
netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global rss=disabled

Restart the Windows Server VM and test again.
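To check whether the changes took effect, you can dump the global TCP parameters before and after with the same command used further down in the thread:

netsh int tcp show global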
 
So if I understand this correctly, Windows is passing the workload to the NIC, and in this case the virtual NIC isn't doing a very good job of it. So we run the commands to place the workload on the CPU instead of the NIC. Is that correct? Indeed, when I run "netsh int tcp show global" it does show it enabled. However...

I wonder if it's different in Server 2019? When I run the "netsh int tcp set global chimney=disabled" command from PowerShell I get an error.
Code:
'chimney' is not a valid argument for this command.
The syntax supplied for this command is not valid. Check help for the correct syntax.

Usage: set global [[rss=]disabled|enabled|default]
             [[autotuninglevel=]
                disabled|highlyrestricted|restricted|normal|experimental]
             [[congestionprovider=]none|ctcp|default]
             [[ecncapability=]disabled|enabled|default]
             [[timestamps=]disabled|enabled|default]
             [[initialrto=]<300-3000>]
             [[rsc=]disabled|enabled|default]
             [[nonsackrttresiliency=]disabled|enabled|default]
             [[maxsynretransmissions=]<2-8>]
             [[fastopen=]disabled|enabled|default]
             [[fastopenfallback=]disabled|enabled|default]
             [[hystart=]disabled|enabled|default]
             [[pacingprofile=]off|initialwindow|slowstart|always|default]

Parameters:

    Tag           Value
    rss             - One of the following values:
                      disabled: Disable receive-side scaling.
                      enabled : Enable receive-side scaling.
                      default : Restore receive-side scaling state to
                          the system default.
    autotuninglevel - One of the following values:
                      disabled: Fix the receive window at its default
                          value.
                      highlyrestricted: Allow the receive window to
                          grow beyond its default value, but do so
                          very conservatively.
                      restricted: Allow the receive window to grow
                          beyond its default value, but limit such
                          growth in some scenarios.
                      normal: Allow the receive window to grow to
                          accommodate almost all scenarios.
                      experimental: Allow the receive window to grow
                          to accommodate extreme scenarios.
    congestionprovider - This parameter is deprecated. Please use
                         netsh int tcp set supplemental instead.
    ecncapability   - Enable/disable ECN Capability.
                      default : Restore state to the system default.
    timestamps      - Enable/disable RFC 1323 timestamps.
                      default: Restore state to the system default.
    initialrto      - Connect (SYN) retransmit time (in ms). default: 3000.
    rsc             - Enable/disable receive segment coalescing.
                      default: Restore state to the system default.
    nonsackrttresiliency - Enable/disable rtt resiliency for non sack
                      clients. default: disabled.
    maxsynretransmissions - Connect retry attempts using SYN packets.
                      default: 2.
    fastopen        - Enable/disable TCP Fast Open.
                      default: Restore state to the system default.
    fastopenfallback - Enable/disable TCP Fast Open fallback.
                      default: Restore state to the system default.
    hystart         - Enable/disable the HyStart slow start algorithm.
                      default: Restore state to the system default.
    pacingprofile   - Set the periods during which pacing is enabled.
                      One of the following values:
                      off: Never pace.
                      initialwindow: Pace the initial congestion window.
                      slowstart: Pace only during slow start.
                      always: Always pace.
                      default: off.


Remarks: Sets TCP parameters that affect all connections.

Example:

       set global rss=enabled autotuninglevel=normal
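Going by that usage text, the chimney option simply no longer exists in this Windows build, so only the other two suggested settings can still be applied. Assuming Server 2019, the spellings the parser accepts appear to be:

netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global rss=disabled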
 
TCP Chimney Offload:
  • A networking technology that helps transfer workload from the CPU to a network adapter during network data transfer.
Disabled = will not pass CPU workload to the network adapter.

Window Auto-Tuning:
  • Enabled by default and makes data transfers over networks more efficient.
  • But if your network uses an old router, or your firewall software does not support this feature, you may experience slow data transfers or even loss of connectivity.
Disabled = will not try to adjust the data transfer window.
 
I found instructions for turning it off on the NIC in the Configure > Advanced section. On my virtual NICs it appears to be called "Receive Side Scaling". I disabled it, but no change. Data transfer still crawls.
 
Is this problem limited to the VirtIO drivers? If I tell Windows it's a Realtek driver or something, would that be better? I know Windows doesn't like it when you change hardware around too much, so I'd sort of like to have an idea whether that will fix the problem before I pull the plug on it.
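In case it helps: the NIC model is set on the Proxmox side (VM > Hardware > Network Device), not inside Windows, so there is nothing to "tell" Windows; it will simply see a different adapter after the change. A sketch from the host shell, assuming VM ID 101 and bridge vmbr0 (substitute your own):

qm set 101 --net0 e1000,bridge=vmbr0

Windows will most likely treat it as a brand-new adapter, so any static IP settings on the old VirtIO NIC would need to be re-entered.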
 
I had this issue and tried installing the VirtIO drivers but could never get them to work. Then I changed the NIC model from VirtIO to e1000 and the rate is now acceptable; it's still not optimal, but it's working much better.

Hope this helps,
Michael
 

Attachments

  • Nic_Setup.png (screenshot of the VM's NIC configuration)

See, that's what I'm wondering as well. On my OLD Proxmox machine (version 3.4-6... don't hate me) I ran everything with the e1000 NIC. But as I was building these new servers on the current version of Proxmox, I kept reading that I should use the VirtIO NICs instead because they work better with Proxmox. But I'm not experiencing that myself.

Another glitch I'm running into is that the Guest Agent doesn't run on most of the VMs. That's _probably_ not related to the NIC issue but I thought I'd mention it just in case it's connected and I just didn't realize it.
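On the guest agent side, in case it does turn out to be related: the agent has to be enabled per VM on the Proxmox side, and the matching service has to be installed inside Windows (it ships on the same virtio-win ISO as the NIC drivers, in the guest-agent folder). A sketch, again assuming VM ID 101:

qm set 101 --agent enabled=1

After that, power the VM fully off and back on so the option takes effect, then check that the "QEMU Guest Agent" service is running inside Windows.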
 
I have never used the guest agent; I've always had good luck with the VMs running natively. The config I posted earlier is the way I configure all of my VMs.

Let me know if you need anything else.
 
Actually, I'm starting to wonder if it's either the physical NIC in the machine or the way Proxmox talks to the physical NIC. All my VMs are showing severely limited speeds, and if I'm doing something such as cloning a VM, the other machines nearly stop responding.

Maybe the problem isn't with the driver in the VM. Maybe it's the host or the physical NIC. Not sure how to test those.
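One way to narrow that down, assuming you can install iperf3 on the Proxmox host (apt install iperf3) and on another machine on the LAN: measure raw throughput from the host itself, then repeat the same test from inside a VM (there are iperf3 builds for Windows as well). If the host already tops out at a few Mbit/s, the VirtIO/Windows side is off the hook.

On the other LAN machine:
iperf3 -s

On the Proxmox host, and then again inside a VM:
iperf3 -c <address-of-that-machine>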
 
After running the
ethtool -K eno1 tso off
command it does go slightly faster, but not much. Still between 8-10 MB/s.
 
Interesting, so roughly 5x faster, but still not exactly the speed you expect.

What happens with:
ethtool -K eno1 gso off gro off tso off
?

You should also try these settings on eno2.
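It may also be worth listing what the driver currently has enabled before toggling more offloads (lowercase -k shows the settings, uppercase -K changes them):

ethtool -k eno1
ethtool -k eno2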
 
No change.
hmm, that's too bad, so no quick wins with ethtool so far.
I'm not familiar with this particular card or the system it's in. I assume it's integrated in a Dell or HP, since in a lot of posts on the internet the tg3 driver (from the kernel) is the one being used. (You can check with: ethtool -i eno1)
With ethtool you can disable a lot more in terms of hardware offloading, but I'm not sure that's the real cause.
It could still be many things, so my advice would be to search for clues (a few example commands below):
- is the mainboard firmware (BIOS) up to date?
- can the card's firmware be upgraded?
- can the NIC somehow be managed from within the BIOS?
- what does dmesg say? (tg3 messages, errors, firmware messages)
- perhaps kernel boot parameters can help (aspm, nomsi)
Analyzing the network traffic with tcpdump, Wireshark, or iperf might also help.
This can all be a deep dive and time consuming, but it can also be a great learning experience.
Maybe you are in luck and someone on this forum has your card with a working config to compare with yours.
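For the dmesg/driver points above, something along these lines (assuming the interface really is eno1 with the tg3 driver, as discussed):

ethtool -i eno1        # driver name, driver version and NIC firmware version
dmesg | grep -i tg3    # Broadcom driver messages, errors, firmware complaints
dmesg | grep -i eno1   # link state changes and errors for this interface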
 
It seems to be a well-known problem with the Broadcom cards, which Dell included in many of their servers. The reports, however, really focus on Windows-based systems, and the solutions are therefore Windows command lines. That's not helpful in this case, since Windows isn't actually controlling the physical NIC.

I've not seen reports of this problem from Linux, and therefore I've not seen solutions. Maybe the cheapest solution is to swap out the physical card for a different brand. Will Proxmox automatically detect the change if I do that? Will I need to rebuild all the bridges if I do that?

I also find it interesting that the Windows VMs see the NIC as a 100GB/sec card. I wish. LOL Must be the VirtIO driver doing its thing to try and give unlimited speeds on the NIC.
 
Will Proxmox automatically detect the change if I do that?
These days, most drivers are inside the kernel and when they are, the card is automatically detected.
So in most cases yes.
I don't have experience with buying NICs, so I can't advise on that.
The admin guide doesn't mention limitations.

Will I need to rebuild all the bridges if I do that?
Probably you only need to link the interface name of the new NIC to the existing bridge.
NIC names can change; this is described here:
https://pve.proxmox.com/wiki/Network_Configuration
But as always before doing changes, document your config well and backup the systems to be sure.
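To illustrate, a minimal /etc/network/interfaces sketch, assuming the new card shows up as enp3s0 (the real name will differ, and the address/gateway below are just placeholders): only the bridge-ports line of the existing bridge changes, after which a reboot (or ifreload -a, if ifupdown2 is installed) applies it.

auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0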

I also find it interesting that the Windows VMs see the NIC as a 100GB/sec card. I wish. LOL Must be the VirtIO driver doing it's thing to try and give unlimited speeds on the NIC.
Yep, that's virtio. Limitations can be in different places though ;)
 
