vlan aware bridge and mtu 9000

Hello

I am trying to solve an issue where I can't reach my storage when using a VLAN-aware bridge (with MTU 9000).

My config looks like this:
Code:
iface enp94s0f1np1 inet manual

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp94s0f1np1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
    #mtu 9000
    #pre-up echo 56 > /sys/class/net/enp94s0f1np1/device/sriov_numvfs

auto vmbr1.55
iface vmbr1.55 inet static
    address 172.32.0.20/20
    mtu 1500

auto vmbr1.60
iface vmbr1.60 inet static
    address 172.33.0.20/20
    mtu 1500

auto vmbr1.70
iface vmbr1.70 inet static
    address 172.34.0.20/20
    mtu 1500

auto vmbr1.80
iface vmbr1.80 inet static
    address 172.35.0.20/20
    mtu 1500

auto vmbr1.90
iface vmbr1.90 inet static
    address 172.36.0.20/20
    mtu 1500

auto vmbr1.200
iface vmbr1.200 inet static
    address 172.31.0.20/20
    mtu 9000


When using enp94s0f1np1 directly with VLAN 200 it works just fine and I can reach my storage on the 172.31.0.0/20 network.

What could I be missing?

My plan is to use my second NIC enp94s0f1np1 for NFS traffic over VLAN 200, but also to set up different internal networks for communication between both virtual and physical hosts.
 
Digging a bit extra:
Code:
72: vmbr1.200@vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 172.31.0.20/20 scope global vmbr1.200
       valid_lft forever preferred_lft forever

Code:
ip link set dev vmbr1.200 mtu 1500

works.

Code:
ip link set dev vmbr1.200 mtu 9000
RTNETLINK answers: Numerical result out of range

Not working.


This was the solution to the MTU issue, but I still can't ping over the network.
Code:
root@kg-virt01:~# ip link set dev enp94s0f1np1 mtu 9000
root@kg-virt01:~# ip link set dev vmbr1.200 mtu 9000
 
Like @jaminmc suggested, with Linux bridges, to get 9000 MTU working I had to add it to all of the physical interfaces, bonds, bridges and VLANs in the chain of transmission. In your case that's enp94s0f1np1 -> vmbr1 -> vmbr1.200, so try again with:
Code:
iface enp94s0f1np1 inet manual
    mtu 9000

and uncomment the mtu 9000 line in vmbr1. Then vmbr1.200 should work more reliably.
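Spelled out, the relevant stanzas of /etc/network/interfaces would look something like this (addresses and bridge options copied from the first post; the other VLAN interfaces stay at mtu 1500; untested sketch):

Code:
auto enp94s0f1np1
iface enp94s0f1np1 inet manual
    mtu 9000

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp94s0f1np1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
    mtu 9000

auto vmbr1.200
iface vmbr1.200 inet static
    address 172.31.0.20/20
    mtu 9000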

I will add that, other than on a dedicated storage network, jumbo MTU is way more trouble than it's worth. Different devices and switches handle things weirdly, and some standard stuff goes off the rails randomly.
 

I agree, I must have made some mistake.
I have now switched to Open vSwitch, defined everything correctly, and it works great so far. Thanks!
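For anyone finding this later, a sketch of what an equivalent Open vSwitch setup can look like in /etc/network/interfaces, following the conventions from the Proxmox Open vSwitch wiki page (interface names, address and VLAN tag taken from this thread; treat the exact option names as an assumption to check against the current docs):

Code:
auto enp94s0f1np1
iface enp94s0f1np1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
    ovs_mtu 9000

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports enp94s0f1np1 vlan200
    ovs_mtu 9000

auto vlan200
iface vlan200 inet static
    address 172.31.0.20/20
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=200
    ovs_mtu 9000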
 
Hi,


In your case that's enp94s0f1np1->vmbr1->vmbr1.200
No. vmbr1.200 is a VLAN interface, so if vmbr1 uses 9000 MTU, then vmbr1.xxx cannot use 9000, because for any VLAN interface you must take into account the VLAN tag header (4096).

So in your case the MTU will be 9000 - 4096.
 
Hi,



No. vmbr1.200 is a VLAN interface, so if vmbr1 uses 9000 MTU, then vmbr1.xxx cannot use 9000, because for any VLAN interface you must take into account the VLAN tag header (4096).

So in your case the MTU will be 9000 - 4096.
Not quite right either, by everything I have read (nothing close to subtracting 4096). The VLAN tag only adds 4 bytes (Q-in-Q would add 8 bytes); VXLAN adds 50 bytes. See the article below. So I was incorrect, and he would need to increase the MTU down the chain and on his switch to accommodate the extra bytes. So max out the MTU at the switch (most Cisco switches I've seen are 9216; my Brocade switch maxes out at 10,200), set the Proxmox host physical adapter to 9000 plus overhead, and the bridge/VLAN to 9000. It gets to be a mess quickly.
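The byte counts above are easy to sanity-check with a few lines of arithmetic (header sizes as commonly cited: 802.1Q tag 4 bytes, Q-in-Q 8, VXLAN 50; a minimal illustration, not specific to this setup):

```python
# Encapsulation overhead added on top of the IP MTU.
VLAN_TAG = 4         # 802.1Q tag: 4 bytes, not 4096
QINQ_TAGS = 8        # 802.1ad (Q-in-Q): two stacked 4-byte tags
VXLAN_OVERHEAD = 50  # outer Ethernet + IP + UDP + VXLAN headers

def frame_bytes_on_wire(ip_mtu: int, overhead: int) -> int:
    """Bytes the underlying link must carry for a full-sized packet,
    excluding the basic Ethernet header and FCS."""
    return ip_mtu + overhead

print(frame_bytes_on_wire(9000, VLAN_TAG))   # 9004
print(frame_bytes_on_wire(9000, QINQ_TAGS))  # 9008
```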

https://www.networkworld.com/article/2224654/mtu-size-issues.html

@Veidit the quote below, from the article above, better states some of what I was trying to convey.

The key concept to keep in mind is that all the network devices along the communication path must support jumbo frames. Jumbo frames need to be configured to work on the ingress and egress interface of each device along the end-to-end transmission path. Furthermore, all devices in the topology must also agree on the maximum jumbo frame size. If there are devices along the transmission path that have varying frame sizes, then you can end up with fragmentation problems. Also, if a device along the path does not support jumbo frames and it receives one, it will drop it.
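One practical way to verify every hop in that path is a do-not-fragment ping whose payload fills the MTU exactly (on Linux: ping -M do -s SIZE). The payload size is the MTU minus the IPv4 header (20 bytes) and ICMP header (8 bytes); a small helper to compute it (an illustration, assuming plain IPv4):

```python
IPV4_HEADER = 20  # bytes, without IP options
ICMP_HEADER = 8   # bytes

def ping_payload(mtu: int) -> int:
    """Largest ICMP echo payload that fits in one unfragmented IPv4 packet."""
    return mtu - IPV4_HEADER - ICMP_HEADER

# For MTU 9000 this gives 8972, so on the host you would run e.g.:
#   ping -M do -s 8972 172.31.0.1
print(ping_payload(9000))  # 8972
```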
 
Not quite right either, by everything I have read (nothing close to subtracting 4096). The VLAN tag only adds 4 bytes

4 bytes vlan tag = 4096 ....

So I was incorrect and he would need to increase the MTU down the chain and on his switch to accommodate the extra bytes
... this is one variant. The second one is to decrease the VLAN IP MTU by 4096.

So max the mtu at the switch, most Cisco I've seen are 9216, my brocade switch maxes out 10,200.

L2MTU. What you set as MTU in Linux is the IP MTU. The L2MTU must be > IP MTU.
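To make the L2MTU vs IP MTU distinction concrete (standard sizes: Ethernet header 14 bytes, FCS 4 bytes, each 802.1Q tag 4 bytes; a rough illustration, not vendor-specific):

```python
ETH_HEADER = 14  # destination MAC + source MAC + EtherType
FCS = 4          # frame check sequence
VLAN_TAG = 4     # per 802.1Q tag

def l2_frame_size(ip_mtu: int, vlan_tags: int = 0) -> int:
    """Total on-wire frame size the switch's L2MTU must accommodate."""
    return ip_mtu + ETH_HEADER + FCS + vlan_tags * VLAN_TAG

# IP MTU 9000 with one VLAN tag needs a 9022-byte frame,
# comfortably under a 9216-byte switch L2MTU.
print(l2_frame_size(9000, vlan_tags=1))  # 9022
```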

Good luck / Bafta!
 
