kernel: vmbr0: received packet on bond0 with own address as source address

sky_me

Member
Dec 27, 2021
19
1
8
23
There are 7 hosts in my cluster environment. Each host has an LACP bond (layer2+3 hash policy) and a vmbr0. When I randomly disconnect the network link of one host, the error below appears every few seconds.
I have tried to find a solution, but so far without success, including asking people online to help with the investigation.

Sep 04 14:54:22 wg-node31 kernel: vmbr0: Received packet on bond0 with own address as source (addr:34:73:79:6a:41:f8, vlan:10)

pveversion: 7.4-3
 
This message means that a network packet sent by your Proxmox host is coming back to itself.

This shouldn't happen unless you have a wrong configuration somewhere.

Have you configured LACP correctly on your physical switch?
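For reference, a minimal /etc/network/interfaces sketch of what an LACP bond under a bridge usually looks like (interface names, addresses and the layer2+3 hash below are placeholders, not taken from your setup); the switch ports must be in a matching 802.3ad/LACP port-channel:

Code:
auto enpXs0f0
iface enpXs0f0 inet manual

auto enpXs0f1
iface enpXs0f1 inet manual

# LACP bond - needs a matching 802.3ad (LACP) port-channel on the switch
auto bond0
iface bond0 inet manual
        bond-slaves enpXs0f0 enpXs0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

# VM bridge on top of the bond
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0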
 
This message means that a network packet sent by your Proxmox host is coming back to itself.

This shouldn't happen unless you have a wrong configuration somewhere.

Have you configured LACP correctly on your physical switch?
Yes, I have confirmed that the physical switch is configured with LACP, and I also asked my network colleagues to compare the configurations, so I'm very confused.
 
Can you post a diagram of how the network traffic flows on each host?
I don't quite understand what you mean by a network flow diagram. What exactly should it show, or can you give me an example? I can post my configuration.

(screenshots of the host network configuration attached)
 
I am getting the same issue and also get No Route To Host errors between VMs. Fresh install of Proxmox 8, and I am unable to configure load balancing (balance-rr) through the UI.

When I do an ip a, vmbr0 and one NIC have the same MAC address whether or not a bond0 is configured. Should vmbr0 have a unique MAC address?
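As far as I know, a Linux bridge simply reuses the MAC address of one of its ports unless a MAC is set explicitly, so vmbr0 sharing a NIC's MAC is expected. A quick, generic way to compare the addresses (nothing here is specific to this box):

Code:
# compact view of all interfaces and their current MAC addresses
ip -br link
# permanent hardware address of a NIC (can differ from the active MAC while it is enslaved in a bond)
ethtool -P enp3s0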


Code:
Sep 14 13:27:07 gtr7pro kernel: vmbr0: received packet on bond0 with own address as source address (addr:1a:b6:32:63:1e:32, vlan:0)

This also causes No Route To Host errors when trying to ping between a few VMs on the same Proxmox host and switch.

Code:
auto lo
iface lo inet loopback

auto enp4s0
iface enp4s0 inet manual

auto enp3s0
iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.10.10.7/16
    gateway 10.10.0.1
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0

iface wlp5s0 inet manual

I had to revert to a single-NIC configuration and clear the ARP cache to fix the No Route To Host errors...
Code:
ip -s -s neigh flush all
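In case it helps anyone debugging the same thing, the bridge's forwarding table can also be checked for stale entries of the node's own MAC (generic iproute2 commands, not specific to this setup):

Code:
# MAC addresses the bridge has learned/registered, per port
bridge fdb show br vmbr0
# current ARP/neighbour table
ip neigh show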
 
hello,
I have had the same problems in the last month on v8 with 10 Gbit network cards (Intel ixgbe, i40e),
on different clusters, cluster locations, network switches and physical network cards.

  • vmbr0: received packet on bond0 with own address as source address
  • no route to host between a few VMs (i40e + bond with one adapter):
    the router outside of Proxmox learns a wrong MAC address because of the Proxmox/cluster traffic leaving the node

The only solution was to set the bond mode to active-backup or balance-rr, or to disable bonding (one adapter).
No other mode works.

The servers/nodes were updated and rebooted after 2 months, and the bond stopped working correctly. No config, hardware or network change.
I tried an older kernel and checked the updated packages...


Code:
# pveversion -v
proxmox-ve: 8.0.2 (running kernel: 6.2.16-14-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
proxmox-kernel-6.2.16-15-pve: 6.2.16-15
proxmox-kernel-6.2: 6.2.16-15
proxmox-kernel-6.2.16-14-pve: 6.2.16-14
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.26-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.5
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.9
libpve-guest-common-perl: 5.0.5
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.3-1
proxmox-backup-file-restore: 3.0.3-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.9
pve-cluster: 8.0.4
pve-container: 5.0.4
pve-docs: 8.0.5
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.8-2
pve-ha-manager: 4.0.2
pve-i18n: 3.0.7
pve-qemu-kvm: 8.0.2-6
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.7
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.13-pve1


network cards
  • first bond0 is on i40e, Intel X710 copper (10GBASE-T) - PROBLEM
  • second bond0 is on i40e, Intel X710 SFP+ - PROBLEM
  • third bond0 is on ixgbe, Intel X520 - PROBLEM
  • bond on Mellanox works OK (mode: 802.3ad) - OK


Code:
# first cluster
# lspci |grep -i net
44:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GBASE-T (rev 02)
44:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GBASE-T (rev 02)
44:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10 Gigabit SFP+ (rev 02)
44:00.3 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10 Gigabit SFP+ (rev 02)
81:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
81:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
c1:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
c1:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

# second cluster
# lspci |grep -i net
03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
03:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
03:00.2 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
03:00.3 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
0a:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller (rev 01)
0a:00.1 Ethernet controller: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller (rev 01)
27:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
27:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)




network config

Code:
auto bond0
iface bond0 inet manual
        # PROBLEM
        #bond-mode balance-tlb
        #bond-mode balance-alb

        #bond-slaves enp68s0f0
        #bond-slaves enp68s0f1

        #bond-slaves enp193s0f0 enp193s0f1
        #bond-slaves enp195s0f0 enp195s0f1 enp196s0f0 enp196s0f1

        #bond-slaves enp68s0f0 enp68s0f1

        # OK, 1 CARD
        bond-slaves enp193s0f0
        bond-mode active-backup
        #bond-mode balance-rr

        # PROBLEM
        #bond-mode 802.3ad
        #bond-lacp-rate fast
        #bond-xmit-hash-policy layer2+3

        bond-miimon 100
        mtu 9000
#wan0


auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        mtu 9000
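For completeness, this is how I verify which mode the bond actually came up with after a change (plain kernel interfaces, nothing ifupdown2-specific):

Code:
# runtime state of the bond: mode, active slave, per-slave link status
grep -E 'Bonding Mode|Currently Active Slave|Slave Interface|MII Status' /proc/net/bonding/bond0
# the mode as the kernel sees it
cat /sys/class/net/bond0/bonding/mode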
 
hi
In our environment, broadcast flooding occurred because VLAN 10 and the untagged VLAN belong to the same bridge domain (BD). The host received packets it had sent itself because bridge-vids 2-4094 in /etc/network/interfaces includes VLAN 10. This could also have been solved on the host side, but we preferred not to change the base configuration; in the end, after VLAN 10 was blocked on the network side, the problem no longer occurred.
Since every situation is different, I recommend working with a network engineer and doing a packet capture analysis before making any configuration changes.
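For anyone who prefers the host-side variant instead of blocking VLAN 10 on the switch, a rough sketch would be to exclude VLAN 10 from the bridge's allowed VLAN list (assuming a VLAN-aware vmbr0 on top of bond0; adjust names and ranges to your own setup):

Code:
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        # exclude VLAN 10 instead of allowing the full 2-4094 range
        bridge-vids 2-9 11-4094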
 
  • vmbr0: received packet on bond0 with own address as source address
  • no route to host between a few VMs (i40e + bond with one adapter):
    the router outside of Proxmox learns a wrong MAC address because of the Proxmox/cluster traffic leaving the node

Downgrading the ifupdown2 package to ifupdown2_3.2.0-1+pmx4_all.deb fixed the problem
"received packet on bond0 with own address as source address".

The bonding modes work as before.

I will check the routing and older package versions later.

Changelog:
Code:
ifupdown2 (3.2.0-1+pmx5) bookworm; urgency=medium

  * fix new systemd behavior of assinging random MAC to bond by actively assining the one from the first slave-interface again

 -- Proxmox Support Team <support@proxmox.com>  Fri, 15 Sep 2023 16:20:25 +0200

downgrade:
Code:
# check the currently installed version
dpkg -l | grep ifupdown2

# fetch and install the previous package build
wget http://download.proxmox.com/debian/pve/dists/bookworm/pve-no-subscription/binary-amd64/ifupdown2_3.2.0-1%2Bpmx4_all.deb
dpkg -i ifupdown2_3.2.0-1+pmx4_all.deb

# keep apt from upgrading it back to +pmx5
apt-mark hold ifupdown2
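Once a fixed ifupdown2 build is available, the hold can be released again with standard apt handling:

Code:
apt-mark unhold ifupdown2
apt update && apt install ifupdown2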
 
Changelog:
Code:
ifupdown2 (3.2.0-1+pmx5) bookworm; urgency=medium

  * fix new systemd behavior of assinging random MAC to bond by actively assining the one from the first slave-interface again

 -- Proxmox Support Team <support@proxmox.com>  Fri, 15 Sep 2023 16:20:25 +0200
This change seems to somehow trigger a pretty old bug that I found on Red Hat Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=487763
It has to do with the bridge not "knowing" which MAC addresses actually are local MAC addresses when a bond interface is used in some cases ("balance-alb" in my case).

Thanks for pointing out the downgrade of that package as a workaround!
I'll look for a way to report this as a bug in Proxmox.

Perhaps the other workaround could be adding the MAC addresses of the secondary (third, etc.) bond member interfaces as "local" to the bridge, but I haven't investigated that possibility.
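If anyone wants to experiment with that idea, I imagine it would go in roughly this direction (untested sketch; the MAC is just an example value standing in for a secondary slave's permanent address):

Code:
# mark the secondary slave's permanent MAC as a local address on the bridge port (untested)
bridge fdb add b8:59:9f:05:a7:a8 dev bond0 master local
# verify the entry
bridge fdb show br vmbr0 | grep -i b8:59:9f:05:a7:a8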
 
Yes, in the documentation only these two modes are recommended:

If your switch support the LACP (IEEE 802.3ad) protocol then we recommend using the corresponding bonding mode (802.3ad). Otherwise you should generally use the active-backup mode.

In your bug report (thanks for reporting), the reply from Proxmox was also in the same spirit:

balance-alb has worse behaviour w.r.t. fault tolerance/failover, and weird failure modes when mixed with bridging (it needs to intercept ARP traffic and rewrite MAC addresses to work!). hence it is not suitable for usage with VM traffic.

This describes the problems with VMs, ARP and the other modes.


So I will try to configure the switches for LACP, test the connection and ifupdown2 versions; that should then be OK.
The strange thing is that the other modes worked fine until ~July 2023 :)
 
[...] and ifupdown2 versions; that should then be OK.
The strange thing is that the other modes worked fine until ~July 2023 :)
Well, "balance-alb" needs to rewrite MAC addresses to work (so that packets from the secondary adapters look like they actually came from the primary adapter), and they obviously defeated the cleverness behind it by simply forcing a single MAC address onto the bond interface in recent releases of ifupdown2.

I guess they made the decision based on their support experience, where of course there had to be some problems with setups like this. Too bad for everyone for whom this setup did work, even though it is not recommended.

Oh well, then I'll only have 10 Gbit of bandwidth for VM traffic on our Proxmox servers - at least until I win the lottery and can afford additional NICs/switches to "do it properly": 2 NICs in LACP for VM traffic, 2 separate NICs for storage and possibly 2 NICs for Corosync. I haven't yet figured out whether online migration of a VM will use the Corosync interfaces or whether those could be 1 GbE...
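If I read the admin guide correctly, live migration traffic can be pinned to a dedicated network in /etc/pve/datacenter.cfg and only falls back to the cluster (Corosync) network when nothing is set, so Corosync itself could stay on small links; something along these lines (the CIDR is just an example):

Code:
# /etc/pve/datacenter.cfg - send migration traffic over a dedicated subnet
migration: secure,network=10.10.20.0/24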
 
Well, "balance-alb" needs to rewrite MAC addresses to work (so that packets from the secondary adapters look like they actually came from the primary adapter), and they obviously defeated the cleverness behind it by simply forcing a single MAC address onto the bond interface in recent releases of ifupdown2.

I guess they made the decision based on their support experience, where of course there had to be some problems with setups like this. Too bad for everyone for whom this setup did work, even though it is not recommended.
Hi,
I made this ifupdown2 patch because ifupdown2 was already changing the MAC address when you reloaded the network configuration,
and systemd at boot is setting the MAC address randomly too.
At least with an LACP bond.


Could you tell me what the MAC addresses of the balance-alb bond interface && the physical interfaces are with the previous version?
#ip addr

and the same after an "ifreload -a"?
 
Could you tell me what the MAC addresses of the balance-alb bond interface && the physical interfaces are with the previous version?
#ip addr

and the same after an "ifreload -a"?
Code:
root@mox01:~# uptime
 10:48:43 up  1:34,  1 user,  load average: 0.00, 0.00, 0.00

root@mox01:~# apt list --installed ifupdown2
Listing... Done
ifupdown2/stable,now 3.2.0-1+pmx4 all [installed,upgradable to: 3.2.0-1+pmx5]

root@mox01:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v6.2.16-19-pve

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eno1np0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eno1np0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 98:03:9b:b0:af:8e
Slave queue ID: 0

Slave Interface: ens4f0np0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: b8:59:9f:05:a7:a8
Slave queue ID: 0

root@mox01:~# ip link
[...]
2: eno1np0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether e6:17:57:7f:a1:9d brd ff:ff:ff:ff:ff:ff permaddr 98:03:9b:b0:af:8e
[...]
4: ens4f0np0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether b8:59:9f:05:a7:a8 brd ff:ff:ff:ff:ff:ff
[...]
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether e6:17:57:7f:a1:9d brd ff:ff:ff:ff:ff:ff
9: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e6:17:57:7f:a1:9d brd ff:ff:ff:ff:ff:ff
[...]
root@mox01:~# ifreload -a

root@mox01:~# ip link
[...]
2: eno1np0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 98:03:9b:b0:af:8e brd ff:ff:ff:ff:ff:ff
[...]
4: ens4f0np0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether b8:59:9f:05:a7:a8 brd ff:ff:ff:ff:ff:ff
[...]
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 98:03:9b:b0:af:8e brd ff:ff:ff:ff:ff:ff
9: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 98:03:9b:b0:af:8e brd ff:ff:ff:ff:ff:ff
[...]

Here you are.

I rebooted the server recently, and with the previous ifupdown2 version everything works fine.
 

so,

before the reload:

bond0 MAC = vmbr0 MAC = eno1np0 MAC: e6:17:57:7f:a1:9d (unknown vendor, I suppose a random MAC generated by systemd at boot)
ens4f0np0 MAC: b8:59:9f:05:a7:a8 (Mellanox vendor)

after the reload:

bond0 MAC = vmbr0 MAC = eno1np0 MAC: 98:03:9b:b0:af:8e (Mellanox vendor)
ens4f0np0 MAC: b8:59:9f:05:a7:a8 (Mellanox vendor)


So you should have the errors after the reload too, right?

because with the patched ifupdown2 you should have exactly the same situation at boot as you get after a reload with this version.
(as you can see, on reload ifupdown2 sets the real MAC of the first interface)
 
I guess they made the decision based on their support experience, where of course there had to be some problems with setups like this. Too bad for everyone for whom this setup did work, even though it is not recommended.

looks like :(

Oh well, then I'll only have 10 Gbit of bandwidth for VM traffic on our Proxmox servers - at least until I win the lottery and can afford additional NICs/switches to "do it properly": 2 NICs in LACP for VM traffic, 2 separate NICs for storage and possibly 2 NICs for Corosync. I haven't yet figured out whether online migration of a VM will use the Corosync interfaces or whether those could be 1 GbE...

I think you shouldn't act on that note from the reply in the bug report:
note that mixing guest, storage and corosync traffic (even if logically separated via VLANs) is far from best practice as well.

Separating the networks by VLAN is perfectly good practice in all environments. It is a very strange request to separate them by whole NICs.

I am NOT separating by NICs.
I separate the traffic by VLANs. An example of my config:

Code:
auto enp68s0f0
iface enp68s0f0 inet manual
        mtu 9000

auto enp68s0f1
iface enp68s0f1 inet manual
        mtu 9000


auto bond10
iface bond10 inet manual
        bond-slaves enp68s0f2 enp68s0f3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        mtu 9000


auto vmbr10
iface vmbr10 inet manual
        bridge-ports bond10
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-124
        mtu 9000

       
# MNG COROSYNC
auto vmbr10.2
iface vmbr10.2 inet manual
        address 1.1.2.100/24
        gateway 1.1.2.1
        mtu 9000

# MNG STORAGE
auto vmbr10.3
iface vmbr10.3 inet manual
        address 1.1.3.100/24
        gateway 1.1.3.1
        mtu 9000

# VM group 10
auto vmbr10.10
iface vmbr10.10 inet manual
        address 1.1.10.100/24
        gateway 1.1.10.1
        mtu 9000
       
# VM group 20
auto vmbr10.20
iface vmbr10.20 inet manual
        address 1.1.20.100/24
        gateway 1.1.20.1
        mtu 9000

So it is stacked like this:
A) NIC -> BOND -> BRIDGE -> VLAN

I also spotted strange stacks in forums while studying this problem:
B) NIC -> BOND -> VLAN -> BRIDGE


But B is worse for VM NIC usage:
when you connect a NIC to a VM, with A) you can specify the VLAN, while with B) you can't specify a VLAN on a bridge that already sits on a VLAN.
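For comparison, stack B would look roughly like this (reusing the bond10 names from the example above, purely to illustrate why the VLAN tag is then fixed per bridge):

Code:
# B) NIC -> BOND -> VLAN -> BRIDGE: the VLAN is terminated below the bridge,
# so every VM port on vmbr20 is implicitly in VLAN 20 and cannot choose another tag
auto bond10.20
iface bond10.20 inet manual
        mtu 9000

auto vmbr20
iface vmbr20 inet manual
        bridge-ports bond10.20
        bridge-stp off
        bridge-fd 0
        mtu 9000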
 
looks like :(



I think you shouldn't act on that note from the reply in the bug report:


Separating the networks by VLAN is perfectly good practice in all environments. It is a very strange request to separate them by whole NICs.
It's really only best practice if you are sure you will not saturate your links; from a security point of view it's perfectly fine to split by VLAN.

But currently there is no QoS for corosync, for example. So if you saturate your links (VM traffic/storage traffic) and you have HA enabled, you'll have big problems: node reboots, cluster split-brain, etc...
 
Hi,

If possible, can you do a test:
change the following in /usr/lib/systemd/network/99-default.link

MACAddressPolicy=persistent ----> MACAddressPolicy=none

Then test with both versions of ifupdown2,

and send the result of /proc/net/bonding/bond0 && ip a
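(For the record, one way to apply that change; copying the file to /etc/systemd/network/ first is my own suggestion so the edit survives package updates:)

Code:
# override the packaged default instead of editing it in place
cp /usr/lib/systemd/network/99-default.link /etc/systemd/network/99-default.link
sed -i 's/^MACAddressPolicy=.*/MACAddressPolicy=none/' /etc/systemd/network/99-default.link
# reboot so the new policy is applied when the links come up
reboot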
 