Inter-VLAN Bottlenecking with i225 NIC

PLKMafia

Hi all,

I am currently running Proxmox 8.2.7. It is installed on an HP Elitedesk 800 G6 with an Intel i225 proprietary NIC installed. In all my VMs, I am using the VirtIO NIC.

When running iperf3 tests or SMB & SFTP transfers between VMs on the same VLAN, I see speeds of ~30 Gb/s.
Transfers between VMs on different VLANs, however, bottleneck at around 500-600 Mb/s. For context, both my switch and my router are 2.5 Gb/s capable.
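
For reference, the tests were plain iperf3 runs roughly like this (a sketch; the IPs belong to the two VMs described further down, and any flags beyond the defaults are my own choice):

# on the receiving VM (e.g. 10.34.92.252)
iperf3 -s

# on the sending VM (e.g. 10.34.91.252), run for 30 seconds
iperf3 -c 10.34.92.252 -t 30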

I know this problem resides with Proxmox, because transfers from the Proxmox host itself to another 2.5 Gb/s-capable machine on the network actually saturate the NIC with 0 retries in iperf3.

The problem only exists when transferring from a VM within Proxmox. I have my VLANs configured on the Proxmox host using Linux bridges.

Is there anybody else out there with this problem? I know I have a common NIC and a popular homelab PC in the Elitedesk. I found another forum post from last year where others discuss the same issue; it is linked here.

Any feedback would be great. I've put in hours of work investigating the issue to no avail. I've tinkered with ethtool settings, different virtual NICs, you name it. I fear it is a kernel issue.
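
To give an idea of the ethtool tinkering (a sketch; enp3s0 is the physical NIC as shown in the host config below, and exactly which offloads I toggled varied):

# inspect the NIC's current offload settings
ethtool -k enp3s0

# example: toggle generic receive offload off and back on
ethtool -K enp3s0 gro off
ethtool -K enp3s0 gro on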

Thank you
 
How are you routing between the VLANs? Are you using the host as a router or do you have an external one?
 
So the OPNsense is actually routing the inter-VLAN traffic? Then the bottleneck is very likely located there.

To make sure, could you post:
  • Configuration of both VMs (qm config)
  • Network Configuration and routes of the host (/etc/network/interfaces and ip r)
  • Network configuration of both VMs (ip a / ip r)
 
QM Config for VM 100
root@pve:~# qm config 100
agent: 1
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
efidisk0: vmdisks:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide2: local:iso/ubuntu-24.04-live-server-amd64.iso,media=cdrom,size=2690412K
machine: q35
memory: 8192
meta: creation-qemu=8.1.5,ctime=1719023495
name: ubuntumediaserver
net0: virtio=BC:24:11:88:C9:9B,bridge=vmbr0,tag=91
numa: 0
ostype: l26
scsi0: vmdisks:vm-100-disk-1,discard=on,iothread=1,size=50G
scsihw: virtio-scsi-single
smbios1: uuid=5428df82-ef62-41fc-a1a8-afe98a5a282d
sockets: 1
usb0: host=152d:1561
vmgenid: be1135f6-8df9-4a81-86d6-1c49bbf703cb
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
QM Config for VM 104
root@pve:~# qm config 104
agent: 1
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
efidisk0: local-lvm:vm-104-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide2: local:iso/ubuntu-24.04-live-server-amd64.iso,media=cdrom,size=2690412K
machine: q35
memory: 8192
meta: creation-qemu=9.0.0,ctime=1722040073
name: ubuntuserver
net0: virtio=BC:24:11:3F:C6:C7,bridge=vmbr0,tag=92
numa: 0
ostype: l26
scsi0: vmdisks:vm-104-disk-0,discard=on,iothread=1,size=250G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=5c5d28ce-88c7-4764-90e1-ed703df39e9e
sockets: 1
usb0: host=0781:55ae
usb1: host=0781:558c
vmgenid: 7920c754-de35-4cd2-bb2e-98b592ebd6f7
-----------------------------------------------------------------------------------------------------------------------------------------------------
Network Config for Host
auto lo
iface lo inet loopback

iface enp3s0 inet manual

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet manual
bridge-ports enp3s0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 91,92

iface wlp0s20f3 inet manual

auto vmbr0.92
iface vmbr0.92 inet static
address 10.34.92.250/24
gateway 10.34.92.1
#Management

iface vmbr0.92 inet6 static
address [Redacted Global IPv6]/64
gateway [Redacted Global IPv6]

auto vmbr0.91
iface vmbr0.91 inet manual
#General

source /etc/network/interfaces.d/*
--------------------------------------------------------------------------------------------------------------------------------------------------------------
Network Routes for Host
root@pve:~# ip r
default via 10.34.92.1 dev vmbr0.92 proto kernel onlink
10.34.92.0/24 dev vmbr0.92 proto kernel scope link src 10.34.92.250
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Network Config for VM 100
admin@ubuntumediaserver:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp6s18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether bc:24:11:88:c9:9b brd ff:ff:ff:ff:ff:ff
inet 10.34.91.252/24 brd 10.34.91.255 scope global enp6s18
valid_lft forever preferred_lft forever

admin@ubuntumediaserver:~$ ip r
default via 10.34.91.1 dev enp6s18 proto static
10.34.91.0/24 dev enp6s18 proto kernel scope link src 10.34.91.252
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Network Config for VM 104
admin@ubuntuserver:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp6s18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether bc:24:11:3f:c6:c7 brd ff:ff:ff:ff:ff:ff
inet 10.34.92.252/24 brd 10.34.92.255 scope global enp6s18
valid_lft forever preferred_lft forever
inet6 [Redacted Global IPv6]/64 scope global temporary dynamic
valid_lft 86196sec preferred_lft 14196sec
inet6 [Redacted IPv6]/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86196sec preferred_lft 14196sec
inet6 fe80::be24:11ff:fe3f:c6c7/64 scope link
valid_lft forever preferred_lft forever

admin@ubuntuserver:~$ ip r
default via 10.34.92.1 dev enp6s18 proto static
10.34.92.0/24 dev enp6s18 proto kernel scope link src 10.34.92.252
---------------------------------------------------------------------------------------------------------------------------------------------------------------

I hope this covers all the relevant information. For quick reference, since I know it can get confusing:
Ubuntu Media Server = 10.34.91.252 [VLAN 91]
Ubuntu Server = 10.34.92.252 [VLAN 92]

My OPNsense box is doing the inter-VLAN routing. I suspected Proxmox might be the issue because the Proxmox host itself has no speed issues when transferring to the OPNsense box. I'm quite stuck on this one.
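
For anyone wanting to verify the path, a quick tracepath from one VM to the other should show the OPNsense gateway as the first hop (a sketch, run from the media server on VLAN 91; tracepath ships with Ubuntu):

# -n skips DNS lookups; the first hop should be 10.34.91.1
# if OPNsense is handling the inter-VLAN routing
tracepath -n 10.34.92.252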

Thank you for your time!
 
It's very likely that OPNsense is the bottleneck here. Traffic within the same VLAN is handled entirely by the bridge on the host (and is fast), while traffic between different VLANs has to pass through the OPNsense firewall (and is slow), so the bottleneck almost certainly lies there.

You could test this by letting the host route between the VLANs (instead of OPNsense) and seeing whether the speed improves.
 
Thank you for your reply. Much appreciated. I haven’t tried this method but I am more than willing to give it a go.

Would the best way be to create ip routes on each VM to one another using the host as the default gateway?
 
Yes, give the host an IP on the respective VLAN bridges, activate IP forwarding, and use the host as a gateway inside the VMs.
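
A minimal sketch of what that could look like (the 10.34.91.250 address and the use of specific routes rather than a new default gateway are just examples, not anything from this thread):

# on the host, give vmbr0.91 an address alongside the existing vmbr0.92 one
# (in /etc/network/interfaces)
auto vmbr0.91
iface vmbr0.91 inet static
address 10.34.91.250/24

# on the host, enable routing between the VLAN interfaces
sysctl -w net.ipv4.ip_forward=1

# inside VM 100 (VLAN 91), send traffic for VLAN 92 via the host
ip route add 10.34.92.0/24 via 10.34.91.250
# inside VM 104 (VLAN 92), the mirror image via the host's existing address
ip route add 10.34.91.0/24 via 10.34.92.250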
 
I just wanted to report back here and let you know you were correct.

Rather than set up routing/forwarding on the host, I set up an alternate router in place of OPNsense.

The bottleneck completely disappeared, with both the Proxmox host and the VMs achieving full saturation.

Thank you for your help
 
