Hi everyone! This is my first post here so bear with me. I hope I don't break too many rules but I didn't find any guidelines.
I've been looking for a way to enable bridge-vlan-aware on a Mellanox ConnectX-4 Lx (MCX4121A-ACAT, firmware 14.32.1010) on Proxmox 8.0.1 (kernel 6.2.16-4-pve) with the inbox driver. With bridge-vlan-aware enabled I didn't receive any network traffic. I didn't test the Mellanox OFED driver, but it doesn't support this kernel yet anyway. With vlan-aware enabled I could see traffic being sent out correctly and reaching my switch, but nothing came back unless the interface was set to promiscuous mode. That led me to dig through the Mellanox card's configuration to see whether any of its many, many options was causing traffic to be filtered. I tried toggling the options that seemed relevant via ethtool and mlxconfig, with no success.
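For reference, the checks I mean looked roughly like this (interface name and PCI address are from my setup; mlxconfig comes from NVIDIA's MFT/mstflint tools, so this is just a sketch of the approach):

```shell
# List offload/feature flags on the port and pick out VLAN-related ones.
ethtool -k enp1s0f0np0 | grep -i vlan

# Query the card's firmware-level configuration options.
mlxconfig -d 0000:01:00.0 query
```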
Reading the mlx5 driver documentation I came across the bridge offload section, which mentions the eswitch and switchdev mode: https://docs.kernel.org/next/networking/device_drivers/ethernet/mellanox/mlx5.html#bridge-offload
Checking my system I saw that the eswitch on my Mellanox card was set to legacy mode. I tried setting it to switchdev, and traffic started to flow.
Code:
# devlink dev eswitch show pci/0000:01:00.0
pci/0000:01:00.0: mode legacy inline-mode none encap-mode basic
# devlink dev eswitch set pci/0000:01:00.0 mode switchdev
# devlink dev eswitch show pci/0000:01:00.0
pci/0000:01:00.0: mode switchdev inline-mode link encap-mode basic
I don't know why this is; it's too advanced for me.
Looking for a way to enable this at boot time, I came across this Intel document (I don't use SR-IOV VFs, so that part isn't relevant):
https://edc.intel.com/content/www/u...itchdev-mode-with-linux-bridge-configuration/
They enabled switchdev only after the bridge was created and the physical ports were added, which I guess corresponds to the point after the bridge interface has been brought up. I tried doing it earlier than that with no success.
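For illustration, that ordering expressed with plain iproute2 commands might look like this (interface and PCI names taken from my config; a sketch of the sequence, not exactly what the Intel doc runs):

```shell
# Create a VLAN-aware bridge and enslave the physical port first...
ip link add name vmbr0 type bridge vlan_filtering 1
ip link set enp1s0f0np0 master vmbr0
ip link set enp1s0f0np0 up
ip link set vmbr0 up

# ...and only then flip the eswitch to switchdev.
devlink dev eswitch set pci/0000:01:00.0 mode switchdev
```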
I currently have things working at boot with this config in /etc/network/interfaces. I verified it on a second host with an identical card that I hadn't messed around with during testing, so I don't think I've changed anything else.
Code:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp1s0f0np0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 10-520
    # 0000:01:00.0 corresponds to enp1s0f0np0
    post-up devlink dev eswitch set pci/0000:01:00.0 mode switchdev

auto vlan20
iface vlan20 inet static
    address 192.168.20.3/24
    vlan-raw-device vmbr0
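After a reboot, a few commands can confirm that everything stuck (names again from my setup):

```shell
# The eswitch should report switchdev, not legacy.
devlink dev eswitch show pci/0000:01:00.0

# The VIDs from bridge-vids should be programmed on the port.
bridge vlan show dev enp1s0f0np0

# The bridge should have vlan_filtering enabled.
ip -d link show vmbr0 | grep vlan_filtering
```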
I needed vlan-aware because in my homelab I wanted to try running Gluster/CTDB/Samba directly on the hosts, and my VMs needed to be able to communicate with the CTDB public IPs on the same VLAN. Without bridge-vlan-aware enabled I couldn't have any VM with a NIC attached to the same VLAN (vmbr0 VLAN 20) that CTDB/Samba on the host used; it broke the networking.
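For completeness, attaching a VM to that same VLAN is just a tagged NIC on the bridge; with a hypothetical VMID 100 it would look like:

```shell
# Attach a virtio NIC to vmbr0, tagged on VLAN 20 (the same VLAN
# the host's CTDB public IPs live on). VMID 100 is an example.
qm set 100 --net0 virtio,bridge=vmbr0,tag=20
```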
I hope this helps someone else stuck in my situation, or that someone has a better fix for this problem than a post-up command.