Bridged, bonded, and MTU 9000 bytes

Mar 10, 2022
Florence Italy
Is it possible to use a Linux bridged and bonded interface with an MTU of 9000?

When I try to set it up and enable the MTU on vmbr0, I lose the web GUI over IPv4.

# my /etc/network/interfaces

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface eno8303 inet manual
# dns-* options are implemented by the resolvconf package, if installed

iface eno12399np0 inet manual

iface eno12409np1 inet manual

iface eno8403 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno8303 eno8403
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    mtu 9000
    post-up ifconfig eno8303 mtu 9000 && ifconfig eno8403 mtu 9000 && ifconfig bond0 mtu 9000

# uplink bond

auto vmbr0
iface vmbr0 inet static
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    #mtu 9000
    #post-up ifconfig eno8303 mtu 9000 && ifconfig eno8403 mtu 9000 && ifconfig bond0 mtu 9000 && ifconfig vmbr0 mtu 9000
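For comparison, here is a sketch of the layout that is usually suggested with ifupdown2 (which the pveversion output below shows installed): set mtu 9000 directly on each stanza, including the bridge itself, and drop the post-up ifconfig calls, since ifupdown2 propagates the bond MTU to its slaves. This assumes the physical switch ports accept jumbo frames; the address and gateway lines are placeholders, not values from this host.

```
auto bond0
iface bond0 inet manual
    bond-slaves eno8303 eno8403
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    mtu 9000

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24    # placeholder
    gateway 192.0.2.1        # placeholder
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    mtu 9000
```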


Enabling mtu 9000 and/or the post-up ifconfig command on the vmbr0 iface makes the web GUI stop responding; disabling it makes everything work correctly again.
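One way to check whether 9000-byte frames actually make it across the network (rather than being dropped by a switch that is not configured for jumbo frames) is a non-fragmenting IPv4 ping. This is only a sketch; `<peer-ip>` is a placeholder for another jumbo-capable host on the same segment.

```shell
# A ping with the "don't fragment" flag must carry
#   MTU - 20 (IPv4 header) - 8 (ICMP header)
# bytes of payload, i.e. 8972 bytes for MTU 9000.
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "ping -4 -M do -s $PAYLOAD -c 3 <peer-ip>"
```

If this ping fails while a default-size ping succeeds, the jumbo MTU is not usable end-to-end and the host becomes unreachable for large packets, which matches the GUI symptom.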

# my /etc/hosts file
localhost hostname.domain.tld hostname
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

cat /etc/debian_version

pveversion -v

proxmox-ve: 7.1-1 (running kernel: 5.13.19-5-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-12
pve-kernel-5.13: 7.1-8
pve-kernel-5.13.19-5-pve: 5.13.19-13
pve-kernel-5.13.19-4-pve: 5.13.19-9
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-3
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-7
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-5
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1
Also, setting LISTEN_IP= in /etc/default/pveproxy, or disabling IPv6, has no effect after restarting with "systemctl restart pveproxy.service spiceproxy.service".
Removing all MTU settings from vmbr0 makes it work correctly again.
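After an `ifreload -a`, it is worth confirming which MTU each link actually ended up with, since a value can silently fail to apply. Reading sysfs is enough; this is a minimal sketch (`get_mtu` and the `SYSNET` override are hypothetical helpers for illustration, not Proxmox tooling).

```shell
# Hypothetical helper: print the effective MTU of an interface from
# sysfs. SYSNET can be overridden (e.g. for testing); it defaults to
# the real /sys/class/net tree.
get_mtu() {
    cat "${SYSNET:-/sys/class/net}/$1/mtu"
}

# Usage: for i in eno8303 eno8403 bond0 vmbr0; do echo "$i $(get_mtu "$i")"; done
```

If bond0 reports 9000 but vmbr0 still reports 1500, the bridge stanza is the one that did not apply.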

