[SOLVED] SDN vnet ignores zone MTU on veth link (jumbo frames can't be enabled for inter-node communication)

danilppm

New Member
Mar 21, 2024
My Proxmox version is 8.1.4.

I tried using SDN to create a separate VLAN with a 9k MTU, but large packets are dropped.
The underlying network is configured to allow jumbo frames, and I have no trouble when running ping on the host itself.

I figured out that the issue is in the veth interfaces Proxmox uses to connect the vnet to the main bridge: it seems to set the MTU only on the vnet bridge, not on the veth interfaces.
When I set the MTU on these veth interfaces manually, the problem went away, but any change to the SDN configuration makes Proxmox recreate the interfaces and reset the MTU.
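
For reference, this is roughly the manual workaround I mean (using the ln_storage / pr_storage names and the 9200 MTU from my config below); it only lasts until the next SDN apply recreates the veth pair:

Bash:
$ sudo ip link set dev ln_storage mtu 9200   # end of the veth pair attached to the vnet bridge
$ sudo ip link set dev pr_storage mtu 9200   # end of the veth pair plugged into vmbr0v2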

Did I miss a configuration somewhere?

For example, below you can see that the storage bridge gets the proper MTU from the SDN config, but ln_storage and pr_storage don't.

Bash:
$ cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what  
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface wlp8s0 inet manual

iface enp10s0 inet manual

iface enp14s0 inet manual

iface enp14s0d1 inet manual
        mtu 9200

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.4/16
        gateway 10.0.0.1
        bridge-ports enp14s0d1
        bridge-stp off
        bridge-fd 0
        mtu 9200
#main

auto vmbr9999
iface vmbr9999 inet static
        address 192.168.255.4/24
        bridge-ports enp10s0
        bridge-stp off
        bridge-fd 0
#management

$ cat /etc/network/interfaces.d/sdn
#version:25

auto ln_storage
iface ln_storage
        link-type veth
        veth-peer-name pr_storage

auto pr_storage
iface pr_storage
        link-type veth
        veth-peer-name ln_storage

auto storage
iface storage
        bridge_ports ln_storage
        bridge_stp off
        bridge_fd 0
        mtu 9200
        alias Storage network

auto vmbr0v2
iface vmbr0v2
        bridge_ports  enp14s0d1.2 pr_storage
        bridge_stp off
        bridge_fd 0

$ sudo cat /etc/pve/sdn/zones.cfg
vlan: mainnet
        bridge vmbr0
        ipam pve

vlan: highmtu
        bridge vmbr0
        ipam pve
        mtu 9200

$ sudo cat /etc/pve/sdn/vnets.cfg
vnet: storage
        zone highmtu
        alias Storage network
        tag 2

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp10s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master vmbr9999 state DOWN group default qlen 1000
    link/ether 70:85:c2:38:b5:81 brd ff:ff:ff:ff:ff:ff
3: enp14s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether e4:1d:38:15:ca:34 brd ff:ff:ff:ff:ff:ff
4: enp14s0d1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9200 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether e4:1d:38:15:ca:35 brd ff:ff:ff:ff:ff:ff
5: wlp8s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 10:f0:05:37:66:30 brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9200 qdisc noqueue state UP group default qlen 1000
    link/ether e4:1d:38:15:ca:35 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.4/16 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::e61d:38ff:fe15:ca35/64 scope link
       valid_lft forever preferred_lft forever
7: vmbr9999: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 70:85:c2:38:b5:81 brd ff:ff:ff:ff:ff:ff
    inet 192.168.255.4/24 scope global vmbr9999
       valid_lft forever preferred_lft forever
12: ln_storage@pr_storage: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master storage state UP group default qlen 1000
    link/ether be:51:2b:be:1e:c0 brd ff:ff:ff:ff:ff:ff
13: pr_storage@ln_storage: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v2 state UP group default qlen 1000
    link/ether f2:d6:81:43:17:99 brd ff:ff:ff:ff:ff:ff
14: storage: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9200 qdisc noqueue state UP group default qlen 1000
    link/ether be:51:2b:be:1e:c0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::bc51:2bff:febe:1ec0/64 scope link
       valid_lft forever preferred_lft forever
15: enp14s0d1.2@enp14s0d1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9200 qdisc noqueue master vmbr0v2 state UP group default qlen 1000
    link/ether e4:1d:38:15:ca:35 brd ff:ff:ff:ff:ff:ff
16: vmbr0v2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e4:1d:38:15:ca:35 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e61d:38ff:fe15:ca35/64 scope link
       valid_lft forever preferred_lft forever
17: tap1006i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr1006i0 state UNKNOWN group default qlen 1000
    link/ether a2:01:d0:d4:66:9b brd ff:ff:ff:ff:ff:ff

The command ping -M do 10.0.0.5 -c 1 -s $((9198 - 28)) works fine on the host.
The command ping -M do 10.2.10.1 -c 1 -s $((9000 - 28)) inside the VM fails (when the VMs are on different nodes), but works fine if I manually configure the MTU on ln_storage and pr_storage (on both nodes).
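
To see which interface is clamping the path on each node, I just compare the MTUs along the chain (a quick check, assuming the interface names from my config above):

Bash:
$ for dev in enp14s0d1 vmbr0 vmbr0v2 pr_storage ln_storage storage; do
      printf '%-12s %s\n' "$dev" "$(cat /sys/class/net/$dev/mtu)"
  done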

I found a forum thread from two years ago with a very similar issue, but it is marked as fixed, so I guess my issue must be something different: https://forum.proxmox.com/threads/sdn-incorrect-mtu.111954/

Can I fix this via configuration somehow, or is it a bug in Proxmox SDN?
 
I submitted a bug report here: https://bugzilla.proxmox.com/show_bug.cgi?id=5324

> As a workaround, you can use a VLAN-aware bridge for vmbr0; it will not use veth link interfaces to plug in the vnet.

Thank you.
I believe I tried this before and it didn't solve the issue, but I probably missed something back then.
I tried it again, and it does seem to help.
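
For anyone else hitting this: switching to a VLAN-aware bridge means changing the vmbr0 stanza roughly like below (my addresses and ports; the bridge-vids range is just an example, adjust it to the VLANs you actually use) and then reapplying the SDN configuration. With that in place the vnet is tagged directly on vmbr0, so no vmbr0v2 bridge or veth pair is created:

Bash:
auto vmbr0
iface vmbr0 inet static
        address 10.0.0.4/16
        gateway 10.0.0.1
        bridge-ports enp14s0d1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9200
#main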
 
