Slow throughput to VM through bridged 10GbE NIC

Aluveitie

Sep 21, 2022
I am rather new to Proxmox, setting up a virtualized NAS for my home lab.

I'm running TrueNAS Core in a VM. Since I cannot pass through only a single port of the dual-port NIC, I tried a simple bridge.
(DHCP/IPv4 access over the bridge works; DHCPv6 does not, but that is another story...)

With iperf3 I get almost 10 Gbit/s to Proxmox, but I only reach about 3.5 Gbit/s over the bridge to TrueNAS, and about 4.5 Gbit/s using an Open vSwitch bridge.
(eno1 is used for accessing Proxmox; eno2 should be used to connect the TrueNAS VM.)
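For reference, the numbers above come from iperf3 runs along these lines (the IP address is a placeholder for the TrueNAS VM):

```shell
# On the TrueNAS VM, start the server side:
iperf3 -s

# From another 10 GbE host, test with a single stream, then with
# parallel streams to see whether one CPU thread is the bottleneck:
iperf3 -c 192.0.2.10          # single TCP stream
iperf3 -c 192.0.2.10 -P 4     # four parallel streams
```

If parallel streams get noticeably closer to line rate than a single stream, per-packet CPU overhead on one thread is likely the limiting factor.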

Linux bridge setup:
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet dhcp
iface eno1 inet6 dhcp

auto eno2
iface eno2 inet manual
        mtu 9000

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        mtu 9000
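To confirm that the physical port and the bridge actually came up with MTU 9000 (interface names as in the config above):

```shell
# Show the effective MTU on the port and on the bridge;
# both should report "mtu 9000"
ip -d link show eno2 | grep -o 'mtu [0-9]*'
ip -d link show vmbr0 | grep -o 'mtu [0-9]*'
```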

Open vSwitch setup:
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet dhcp
iface eno1 inet6 dhcp

auto eno2
allow-vmbr0 eno2
iface eno2 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0
        ovs_mtu 9000

iface eno2 inet6 manual

allow-ovs vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eno2
        ovs_mtu 9000

The system is an 8-core Epyc Rome; the TrueNAS VM (currently mostly empty) has 8 vCPUs and 12 GB RAM assigned.

About the interface:
Code:
root@server:/etc/network# ethtool eno2
Settings for eno2:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Full
        Auto-negotiation: off
        Port: Direct Attach Copper
        PHYAD: 1
        Transceiver: internal
        Supports Wake-on: g
        Wake-on: g
        Current message level: 0x00000000 (0)
                             
        Link detected: yes
Are you sure jumbo frames are working? Is the MTU set to 9000 everywhere: on the switch, on all clients, and inside TrueNAS? Switching from MTU 1500 to 9000 increased the 10 Gbit NIC performance here from around 4 Gbit/s to 10 Gbit/s. I guess the single-threaded performance of the CPU was the bottleneck and the CPU couldn't keep up handling packets. With jumbo frames there are fewer but bigger packets, so the CPU isn't hit as hard.
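One way to verify jumbo frames end-to-end is a ping with the don't-fragment flag and a payload sized for a 9000-byte packet (the IP address is a placeholder):

```shell
# 8972 bytes payload + 8 bytes ICMP header + 20 bytes IP header = 9000;
# -M do forbids fragmentation, so this fails if any hop has MTU < 9000
ping -c 3 -M do -s 8972 192.0.2.10
```

If this returns "Message too long" or similar, some hop along the path is still at MTU 1500.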
You might also want to set multiqueue for the virtio NIC, so the incoming queue can use multiple CPU threads.
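Multiqueue can be set on the VM's virtio NIC from the Proxmox CLI, for example (the VM ID is an example; match the queue count to the VM's vCPUs):

```shell
# Give the virtio NIC 8 queues for the 8-vCPU TrueNAS VM;
# 100 is an example VM ID, adjust bridge/model to your config
qm set 100 --net0 virtio,bridge=vmbr0,queues=8
```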
