kernel: vmbr0: received packet on bond0 with own address as source address

Hi,

If possible, can you do a test:
change in /usr/lib/systemd/network/99-default.link

MACAddressPolicy=persistent ----> MACAddressPolicy=none

Then test with both versions of ifupdown2,

and send the result of: cat /proc/net/bonding/bond0 && ip a
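(Note: /usr/lib/systemd/network/99-default.link is shipped by the systemd package, so a direct edit can be lost on the next update. For a quick test that's fine; for something persistent, the usual systemd way is a drop-in override. A minimal sketch, the drop-in file name is arbitrary:)

Code:
mkdir -p /etc/systemd/network/99-default.link.d
cat > /etc/systemd/network/99-default.link.d/test-mac-policy.conf <<'EOF'
[Link]
MACAddressPolicy=none
EOF
# reboot afterwards so udev re-applies the link policy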

@spirit thanks for the hint.
I didn't know about these Linux systemd "registries" and their changes; now I understand the possible root of the problems better. #systemd :)

Sorry, but I cannot try all your suggestions right away. There are many states to test, I only get a chance to test networking on production once in a while, and even then I need a quick revert to a working state.

Now, trying with fully updated Proxmox:
pve 8.1 + kernel 6.5.11 + ifupdown2 3.2.0-1+pmx7 + 802.3ad/active-backup

everything is working after boot.
Notably, 802.3ad on the i40e adapter works too, which had been broken for months.
I am afraid to run ifreload; I only tried ifdown bond0 && ifup bond0 with the 2 OK configurations (802.3ad@i40e, active-backup@ixgbe).
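(Side note for anyone else stuck testing on production: a common safety net is to schedule an automatic rollback before applying a risky change, so a broken ifreload reverts itself even if you lose SSH. A rough sketch, the backup path is just an example:)

Code:
# save the known-good config first
cp /etc/network/interfaces /etc/network/interfaces.good
# rollback job; nohup keeps it alive if the SSH session drops.
# if the new config works, cancel it in time with: kill %1
nohup sh -c 'sleep 180 && cp /etc/network/interfaces.good /etc/network/interfaces && ifreload -a' >/dev/null 2>&1 &
ifreload -a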

Maybe I will look into it more in the future, on a new cluster setup, and try all the possibilities.
But for now, since the old config works with small adjustments (802.3ad), I am very happy.
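(For reference, the working 802.3ad setup is roughly the usual PVE bond + bridge stanza; NIC names and addresses below are placeholders, not my actual config:)

Code:
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0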

Thanks a lot for the help!



Code:
# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.11-6-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.5: 6.5.11-6
proxmox-kernel-6.5.11-6-pve-signed: 6.5.11-6
proxmox-kernel-6.5.11-5-pve-signed: 6.5.11-5
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2: 6.2.16-19
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.3
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.4
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve4
 
This is great news; it was a really strange problem. I found your fix here, with a great explanatory comment:

"inheriting MAC from the first slave, instead of using a random one, avoids that locked down network environments (e.g., at most hosting providers)
will block traffic due to a unexpected MAC in the outgoing network packets"


Code:
~# cat /usr/lib/systemd/network/99-default.link.d/proxmox-mac-address-policy.conf
[Match]
OriginalName=*

[Link]
# Fixes two issues for Proxmox VE systems:
# 1. inheriting MAC from the first slave, instead of using a random one, avoids
#    that locked down network environments (e.g., at most hosting providers)
#    will block traffic due to a unexpected MAC in the outgoing network packets
# 2. Avoids that systemd keeps bridge offline if there are no slaves connected,
#    failing, e.g., setting up s-NAT if no guest is (yet) started.
MACAddressPolicy=none
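
(To verify the policy took effect after a reboot, check that the bond and bridge MACs match the first slave's permanent MAC instead of a generated one; vmbr0/bond0/eno1 below are example names:)

Code:
ip -br link | grep -E 'vmbr0|bond0|eno1'
grep -E 'Slave Interface|Permanent HW addr' /proc/net/bonding/bond0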


Yes, exactly!
I was trying to work on the problem with the hosting provider, but they didn't see a problem and didn't tell me about the locked-down network environment.

PS: ouch, these systemd changes suck. A rewrite-and-integrate-everything mentality; I should use Windows instead if I want that :)
 
I too have been banging my head all night about this error:

Code:
vmbr1: received packet on enp4s0 with own address as source address (addr:XX, vlan:0)

and MACAddressPolicy=none seems to have cleared it up. Thank you.
 
@lindybalboa thanks for the report!
It seems I spoke too soon. The issue was in fact not solved; however, I have since found what appears to be the solution. Based on
this link I set the "ageing time" to 0, and that resolved the issue. It furthermore solved another issue I was having with dropped packets between CTs/VMs and some other physical devices on the LAN: before the fix I could ping back and forth between CTs/VMs and my Windows laptop with no problem, but pinging an RPi or my Android phone had packet loss of >95%. With the ageing time set to 0, I can ping everything with no drops. I have to be honest, this is way over my head as far as networking goes. Any idea what might be up?
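(In case it helps anyone else, this is roughly how I applied it; vmbr1 is my bridge name. As I understand it, ageing 0 makes the bridge stop relying on learned MAC entries and flood traffic to all ports like a hub, so treat it as a workaround rather than a proper fix:)

Code:
# runtime change:
ip link set dev vmbr1 type bridge ageing_time 0
# persistent with ifupdown2: add "bridge-ageing 0" to the vmbr1
# stanza in /etc/network/interfaces, then run: ifreload -a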
 
