[SOLVED] Slow 10gb Networking with PROXMOX 7.2

HgTxRx

New Member
Aug 4, 2022
I'm currently moving my homelab from the free version of ESXi to Proxmox, but I'm getting very slow 10Gb networking.

The hardware is two identical old Dell T320s with Solarflare SFP+ 10Gb fibre network cards, connected via DAC to a UniFi switch, plus a bare metal TrueNAS Scale server.

With ESXi 6.7 I was consistently getting 14.1gbps between VMs on the same server and 9.0gbps between VMs on different servers.
  • On Proxmox, using iperf3 in an Ubuntu 20.04 VM, I'm getting about 4gbps to the bare metal TrueNAS Scale server (commands sketched after this list)
  • So I installed iperf3 on the Proxmox host itself and can get 6.2gbps to the bare metal TrueNAS Scale server
  • As a verification of the hardware I installed...
    • TrueNAS Scale and can achieve 9.2gbps to the bare metal TrueNAS Scale server
    • XCP-ng 8.2.1 and can achieve 9.0gbps to the bare metal TrueNAS Scale server
    • ESXi 6.7 and can achieve 9.4gbps to the bare metal TrueNAS Scale server
  • I have also switched to Open vSwitch and re-tested the Proxmox host, getting 5.9gbps
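The testing above is plain iperf3, roughly along these lines (the address is a placeholder, and the parallel-stream run is optional):

Code:
# On the bare metal TrueNAS box (iperf3 server); 192.168.1.50 is a placeholder
iperf3 -s

# From the Proxmox host or from inside a VM (client), 30-second run
iperf3 -c 192.168.1.50 -t 30

# -P 4 adds parallel streams, handy for spotting a single-stream/CPU limit
iperf3 -c 192.168.1.50 -t 30 -P 4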
I have tried different settings for the Proxmox VMs, but at this stage I need to address the poor performance of the Proxmox host first.

I've searched the various forums but found nothing that helps.

What am I missing?


For reference, these are the NIC settings in Proxmox (ethtool output):

Code:
Settings for enp8s0f0np0:
    Supported ports: [ FIBRE ]
    Supported link modes:   1000baseT/Full
                            1000baseX/Full
                            10000baseCR/Full
                            10000baseSR/Full
                            10000baseLR/Full
    Supported pause frame use: Symmetric Receive-only
    Supports auto-negotiation: No
    Supported FEC modes: Not reported
    Advertised link modes:  Not reported
    Advertised pause frame use: No
    Advertised auto-negotiation: No
    Advertised FEC modes: Not reported
    Link partner advertised link modes:  Not reported
    Link partner advertised pause frame use: No
    Link partner advertised auto-negotiation: No
    Link partner advertised FEC modes: Not reported
    Speed: 10000Mb/s
    Duplex: Full
    Auto-negotiation: off
    Port: FIBRE
    PHYAD: 255
    Transceiver: internal
    Supports Wake-on: g
    Wake-on: d
        Current message level: 0x000020f7 (8439)
                               drv probe link ifdown ifup rx_err tx_err hw
    Link detected: yes
 
Here I also got only around 4Gbit. The bottleneck was the CPU, which couldn't handle more packets. I fixed it by using jumbo frames: with the same number of packets, more throughput is possible because each packet is bigger. That's why I got nearly the full 10Gbit between TrueNAS and PVE, or TrueNAS and Win11. You can also set multiqueue for each VirtIO NIC, so the guest can parallelize receiving packets and handle more packets per second.
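On the PVE side that looks roughly like this (a sketch only; the bridge/NIC names, VMID and address are placeholders, and every device in the path, switch and TrueNAS included, has to use the same MTU):

Code:
# /etc/network/interfaces on the PVE host (placeholder names/address)
auto enp8s0f0np0
iface enp8s0f0np0 inet manual
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        bridge-ports enp8s0f0np0
        bridge-stp off
        bridge-fd 0
        mtu 9000

# Multiqueue on a VirtIO NIC, e.g. 4 queues for VM 100 (the guest should have
# at least 4 vCPUs; re-specifying net0 without the old MAC generates a new one)
qm set 100 --net0 virtio,bridge=vmbr0,queues=4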
 
You're right, I haven't switched to 9000 MTU; the testing I performed is all at 1500 MTU, so that would help.
From my monitoring the CPU didn't seem to be the bottleneck, and it's the same hardware for the tests with TrueNAS, ESXi and XCP-ng, so either the network stack in Proxmox is not optimized for speed or I've missed something in the setup (which is what I'm hoping for).
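One thing to keep in mind is that an overall CPU graph can hide a single core pinned by interrupt/softirq work during a single-stream test; per-core load is easy to check while iperf3 is running (mpstat comes from the sysstat package, assuming it's installed):

Code:
# Per-core utilisation once per second during an iperf3 run
mpstat -P ALL 1
# or press "1" inside top to toggle the per-CPU view
top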
 
Some further testing: I reversed the iperf3 testing and made the Proxmox host the iperf3 server.
  • From TrueNAS to Proxmox gets 9.2gbps
  • From Proxmox to TrueNAS is still 4.5gbps as a reference
  • From TrueNAS to a VM in Proxmox gets 9.0gbps
  • From a VM in Proxmox to TrueNAS gets 4.0gbps
For reference, 2 VMs on the same Proxmox host can get 20.1gbps between each other.

So there is something on the outbound side of Proxmox throttling the performance of the host and any VMs it is hosting.

What do I need to change on the outbound side to get better network performance?

I have disabled the firewall at the datacenter, node and VM levels.
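For completeness, this is roughly how the firewall state can be checked from the shell (standard PVE config paths; the node name and VMID are placeholders):

Code:
pve-firewall status
cat /etc/pve/firewall/cluster.fw            # datacenter level: [OPTIONS] enable: 0
cat /etc/pve/nodes/<nodename>/host.fw       # node level: [OPTIONS] enable: 0
cat /etc/pve/firewall/<vmid>.fw             # VM level: [OPTIONS] enable: 0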
 
Hello HgTxRx,
did you have the same problem, where after updating the Proxmox host to 7.2-7 you get only 4.1gbps when running iperf3 from the Proxmox host to your desktop computer, while a VM gets normal speed?
 
This is a fresh install of 7.2-7. I have been using ESXi 6.7U3 and am converting to Proxmox, so unfortunately I don't know if this is a change in the latest release.
The rest of my network can all achieve 9.4-9.6gbps in both directions.
 
Hi, for the record, between 2 Proxmox nodes, with iperf3 at default options, Mellanox ConnectX-4 NICs and MTU 9000, I'm around 9.76gbit/s.

Also, my CPUs are forced to max frequencies with these GRUB options:

GRUB_CMDLINE_LINUX="idle=poll intel_idle.max_cstate=0 intel_pstate=disable processor.max_cstate=1"

kernel 5.13.19-2-pve
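For anyone wanting to try the same options, applying them looks roughly like this on a stock GRUB boot (ZFS/systemd-boot installs use /etc/kernel/cmdline plus proxmox-boot-tool refresh instead); note that idle=poll and the C-state limits trade power consumption for lower latency:

Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX="idle=poll intel_idle.max_cstate=0 intel_pstate=disable processor.max_cstate=1"

# apply and reboot
update-grub
reboot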
 
Thanks for the prompts; they got me thinking and I rechecked the BIOS settings (I'd reset them to defaults as part of some previous experimenting).

The settings for I/OAT DMA and SR-IOV were disabled. After re-enabling them on both servers I can now get 9.1gbps from Proxmox to the bare metal TrueNAS server and 8.9gbps to a VM TrueNAS instance. From VM to VM on the same Proxmox host I get 22gbps.
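A rough way to confirm from the host that those features are actually visible after re-enabling them (standard checks, not something quoted from the thread):

Code:
dmesg | grep -i ioatdma                  # I/OAT DMA engine driver initialising
lsmod | grep ioatdma                     # ioatdma module loaded
lspci -vvv 2>/dev/null | grep -i sr-iov  # NIC advertising the SR-IOV capability (run as root)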

This is still all at 1500 MTU; I know I can get the 'missing' 0.5gbps by switching to 9000 MTU, but that's a project for later.

This has given me the final happy tick in the box, and I have now converted my entire homelab to Proxmox, with nested VMs for ESXi and XCP-ng to help import any archived VMs I missed.

Question for another lifetime... how did ESXi and XCP-ng achieve those speeds with I/OAT and SR-IOV disabled?

For reference, here's an article on SR-IOV: link.
 
