Bonded ethernet uplinks broken in pve-common 9.1.1

Dec 16, 2025
A recent commit to pve-common broke certain configurations that attach VLAN-naive bonded ethernet uplinks to vmbr devices.

Breaking commit: https://git.proxmox.com/?p=pve-common.git;a=commit;h=057f62f73048bc1e73e45e9edf6e197f84de630a

Prior to 9.1.1, PVE::Network::activate_bridge_vlan explicitly treated bondN devices as a "physical device" for the purpose of checking whether a physical uplink is attached to a bridge.

Since 9.1.1, that check relies on PVE::IPRoute2::ip_link_is_physical, which returns false for bond interfaces.

This makes it impossible to start VMs on clusters with this configuration post-upgrade.
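
As a quick illustration (not necessarily the exact check ip_link_is_physical performs), you can see from userspace why a bond does not look like a physical link: a real NIC has a backing device entry in sysfs, while a bond is a virtual device and does not. Interface names below are examples, substitute your own.

Code:
# physical port: sysfs exposes the backing PCI device
ls -d /sys/class/net/nic4/device

# bond: no backing device, it lives under /sys/devices/virtual/net
ls -d /sys/class/net/bond0/device    # fails with "No such file or directory"

# iproute2 also reports the link kind (bond) explicitly
ip -details link show bond0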
 
Yes, when we use qm start <VMID> we get "no physical interface on bridge 'vmbrN'", and it is no longer possible to start VMs when using a bond as the physical interface for a vmbr.
 
Submitted bug #7153, since there is no workaround short of self-patching IPRoute2.pm, and new or upgraded clusters with this configuration are badly impaired.
Well, I wrote myself a simple patch, but it's not well tested, so I'm not going to post it.
At least now I can start those VMs.
 
I deployed a new cluster (2 nodes for now, a qdevice soon).

# Package version
Code:
proxmox-ve: 9.1.0 (running kernel: 6.17.4-2-pve)
pve-manager: 9.1.4 (running version: 9.1.4/5ac30304265fbd8e)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.4-2-pve-signed: 6.17.4-2
proxmox-kernel-6.17: 6.17.4-2
proxmox-kernel-6.17.2-1-pve-signed: 6.17.2-1
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx11
intel-microcode: 3.20251111.1~deb13u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.5
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.4
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.4
libpve-rs-perl: 0.11.4
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-3
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.1-1
proxmox-backup-file-restore: 4.1.1-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.5
pve-cluster: 9.0.7
pve-container: 6.0.18
pve-docs: 9.1.2
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.1.0
pve-i18n: 3.6.6
pve-qemu-kvm: 10.1.2-5
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.3
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1

/etc/network/interfaces

Code:
auto lo
iface lo inet loopback

iface nic0 inet manual

auto nic1
iface nic1 inet static
        address XXX
        mtu 1500

auto nic2
iface nic2 inet static
        address XXX
        mtu 1500

iface nic3 inet manual

auto nic4
iface nic4 inet manual
        mtu 9000

auto nic5
iface nic5 inet manual
        mtu 9000

auto nic6
iface nic6 inet manual
        mtu 9000

auto nic7
iface nic7 inet manual
        mtu 9000

auto bond0
iface bond0 inet manual
        bond-slaves nic4 nic7
        bond-miimon 50
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        mtu 9000
        bond-lacp-rate fast
        bond-min-links 1
        bond-downdelay 200
        bond-updelay 200

auto bond0.21
iface bond0.21 inet static
        address XXX
        mtu 9000

auto bond1
iface bond1 inet manual
        bond-slaves nic5 nic6
        bond-miimon 50
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        mtu 9000
        bond-lacp-rate fast
        bond-min-links 1
        bond-downdelay 200
        bond-updelay 200

auto bond1.200
iface bond1.200 inet static
        address XXX
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address XXX
        gateway XXX
        bridge-ports nic0
        bridge-stp off
        bridge-fd 0
        mtu 1500

auto vmbr30
iface vmbr30 inet manual
        bridge-ports vlan30
        bridge-stp off
        bridge-fd 0

auto vlan30
iface vlan30 inet manual
        vlan-raw-device bond0

source /etc/network/interfaces.d/*
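
As a side note, the bond and bridge state on the node can be double-checked with standard tools (a minimal sketch, adjust interface names to your setup):

Code:
# LACP / 802.3ad status of the uplink bond
cat /proc/net/bonding/bond0

# which ports are currently enslaved to which bridge
bridge link show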

sdn:
Code:
auto NONPROD
iface NONPROD
        bridge_ports ln_NONPROD
        bridge_stp off
        bridge_fd 0
        mtu 1500

auto bond0v30
iface bond0v30
        bridge_ports  pr_NONPROD
        bridge_stp off
        bridge_fd 0
        mtu 1500

auto ln_NONPROD
iface ln_NONPROD
        link-type veth
        veth-peer-name pr_NONPROD
        mtu 1500

auto pr_NONPROD
iface pr_NONPROD
        link-type veth
        veth-peer-name ln_NONPROD
        mtu 1500

Comparing with the SDN config from another cluster (I haven't touched the SDN config there lately):

It looks like bond0v30 lacks the real bond0 in bridge_ports.

When applying the SDN configuration:
Code:
passed link that isn't a bridge to get_physical_bridge_ports at /usr/share/perl5/PVE/IPRoute2.pm line 81.
TASK OK

- vlan30 and vmbr30 are just a workaround to be able to continue testing (a VM attached to this bridge communicates on bond VLAN 30)

SOLVED: I missed a vmbr9999 bridge containing only bond0
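
For anyone hitting the same thing: the missing piece was simply a bridge with the bond as its only port, roughly like this (a sketch based on the description above; bridge options and MTU are assumptions matching the rest of my config):

Code:
auto vmbr9999
iface vmbr9999 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        mtu 9000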
 