[SOLVED] VM's VLAN kills vmbr0.vlanid without a VLAN-aware bridge - ConnectX3

studiofunk

Hi Proxmox-Forum-Members,

we are trying to get the network on our new Proxmox servers set up correctly, but are failing...

What we have:
Proxmox VE 7.3-4
3x new Supermicro Hardware w. Xeon Silver 32C and lots of RAM
One Mellanox ConnectX3 40G QSFP+ card in each

What we need:

3 VLANs with static IPs

What we already did:

bond0.1234 for Corosync (for our other 16 Hosts)
bond0.123 for Storage (another Ceph Clusters Frontend)
bond0.124 for Proxmox Administration
bond0.125 for Ceph Backend (another Ceph Clusters Backend)

also tried:

bond0.1234 for Corosync (for our other 16 Hosts)
vmbr0.123 for Storage (another Ceph Clusters Frontend)
vmbr0.124 for Proxmox Administration
vmbr0.125 for Ceph Backend (another Ceph Clusters Backend)

with the following settings in each configuration:

vmbr0 > bridge-vlan-aware no >>> "everything working" until we create/migrate a VM in one of the VLANs mentioned above, which results in a complete shutdown of that VLAN on the host. No traffic possible, no network errors in syslog... weird, I know. Just services that stop responding (Ceph mon timeouts/banning, for example).

vmbr0 > bridge-vlan-aware yes & no bridge-vids >>> results in none of the VLANs coming up - makes sense ;)

vmbr0 > bridge-vlan-aware yes & bridge-vids 2-4094 >>> results in only the VLAN IDs up to 127 actually being added on the ConnectX3 adapter... when bond0.1234 is brought up after vmbr0, that one is lost... no VLAN above 127 is usable... - not workable for us since we need a lot of VLANs (see the diagnostic commands below)

vmbr0 > bridge-vlan-aware yes & bridge-vids with specific VLAN IDs >>> results in a working setup, BUT: every time we want to use an additional VLAN ID we would have to reconfigure and restart vmbr0. Downtime for all VMs...
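
For anyone who wants to check the same thing on their side: the VLANs that actually made it onto the bridge port can be listed with the iproute2 bridge tool, and the mlx4 driver may log something when its hardware VLAN filter table is full. A rough check (plain iproute2/dmesg, nothing Proxmox-specific):

Code:
# list the VLAN IDs actually programmed on the bond port of the bridge
bridge vlan show dev bond0

# look for mlx4/ConnectX-3 driver messages about VLANs (e.g. filter table exhaustion)
dmesg | grep -i mlx4 | grep -i vlan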

Is there any way to solve this cleanly?
 
Can you post the network configuration of your previous attempts, or at least your current network configuration? You can find it in /etc/network/interfaces. Please make sure to post it wrapped in CODE tags so it is easier to read.
 
This is how it is now on one of the hosts:

Code:
auto lo
iface lo inet loopback

# ConnectX3 interfaces
auto ens85
iface ens85 inet manual
        bond-primary ens85 ens85d1
        mtu 9000
        bond-master bond0

auto ens85d1
iface ens85d1 inet manual
        bond-primary ens85 ens85d1
        mtu 9000
        bond-master bond0

iface usb0 inet manual

iface eno1 inet manual

iface eno2 inet manual

# LACP Bond
auto bond0
iface bond0 inet manual
        bond-slaves ens85 ens85d1
        bond-miimon 100
        bond-mode 802.3ad
        mtu 9000
        bond-lacp-rate 1

# Corosync VLAN Interface
auto bond0.3004
iface bond0.3004 inet static
        address 192.168.234.53/24
        mtu 1500

# Proxmox Bridge Interface
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        mtu 9000
        bridge-vlan-aware yes
        bridge-vids 2-4094

#Proxmox Administration
auto vmbr0.102
iface vmbr0.102 inet static
        address 10.1.101.163/23
        gateway 10.1.100.1
        mtu 1500
        post-up ip route add default via 10.1.100.1

#Ceph Storage Frontend
auto vmbr0.101
iface vmbr0.101 inet static
        address 10.2.8.43/21
        mtu 9000

#Ceph Storage Backend
auto vmbr0.107
iface vmbr0.107 inet static
        address 10.2.24.23/24
        mtu 9000


This is what I am testing right now on another host (I copied the "hierarchy" from the network configuration that Proxmox itself uses):

Code:
auto lo
iface lo inet loopback

# ConnectX3 interfaces
auto ens85
iface ens85 inet manual
        bond-primary ens85 ens85d1
        mtu 9000
        bond-master bond0

auto ens85d1
iface ens85d1 inet manual
        bond-primary ens85 ens85d1
        mtu 9000
        bond-master bond0

iface eno1 inet manual

iface eno2 inet manual

iface usb0 inet manual

# LACP Bond
auto bond0
iface bond0 inet manual
        bond-slaves ens85 ens85d1
        bond-miimon 100
        bond-mode 802.3ad
        mtu 9000
        bond-lacp-rate 1

# VLAN 101
auto bond0.101
iface bond0.101 inet manual
        mtu 9000

# VLAN 3004 Corosync Interface
auto bond0.3004
iface bond0.3004 inet static
        address 192.168.234.51/24
        mtu 1500

# Proxmox Administration
auto bond0.102
iface bond0.102 inet static
        address 10.1.101.161/23
        gateway 10.1.100.1
        post-up ip route add default via 10.1.100.1
        mtu 1500

#Ceph Storage Backend
auto bond0.107
iface bond0.107 inet static
        address 10.2.24.21/24
        mtu 9000

# Proxmox VM Interface
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware no
#       bridge-vids 3 100 101 102 107
        mtu 9000

# Ceph Storage Frontend (for VM use also)
auto vmbr1
iface vmbr1 inet static
        bridge-ports bond0.101
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware no
        address 10.2.8.41/21
        mtu 9000

But with this I get the error - of course:
interface bond0.101 already exist in bridge vmbr1
kvm: -netdev type=tap,id=net0,ifname=tap124i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on: network script /var/lib/qemu-server/pve-bridge failed with status 65280
TASK ERROR: start failed: QEMU exited with code 1
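
For reference, the conflict behind that message can be confirmed by checking which bridge bond0.101 is currently enslaved to (again plain iproute2 commands, nothing Proxmox-specific):

Code:
# show details of the VLAN sub-interface, including its current master bridge
ip -d link show bond0.101

# or list all bridge ports and filter for the interface in question
bridge link show | grep bond0.101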
 
This is how it works for us now, in case someone is interested in a working network configuration.
The VLAN IDs are not reserved via bridge-vlan-aware and are only added when they are needed.
Also, we have added a few VLANs that are reserved for the host and cannot be "destroyed" by a VM trying to run in that VLAN. If a VM tries to use one of the VLANs belonging to a bridge not named "vmbrX", Proxmox throws an error that the interface bond0.vlanid is already in use by another interface. Nice "feature" ;)!

Code:
auto lo
iface lo inet loopback

auto ens85
iface ens85 inet manual
        bond-primary ens85 ens85d1
        mtu 9000
        bond-master bond0

auto ens85d1
iface ens85d1 inet manual
        bond-primary ens85 ens85d1
        mtu 9000
        bond-master bond0

iface usb0 inet manual

iface eno1 inet manual

iface eno2 inet manual

# LACP Bond
auto bond0
iface bond0 inet manual
        bond-slaves ens85 ens85d1
        bond-miimon 100
        bond-mode 802.3ad
        mtu 9000
        bond-lacp-rate 1

# VLAN 101 - Ceph Storage Frontend
auto bond0.101
iface bond0.101 inet manual
        mtu 9000

# VLAN 101 - Proxmox Bridge for use in VMs
auto vmbr101
iface vmbr101 inet static
        bridge-ports bond0.101
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware no
        address 10.2.8.43/21
        mtu 9000

# VLAN 107 - Ceph Backend (Host only)
auto bond0.107
iface bond0.107 inet manual
        mtu 9000

# VLAN 107 - Bridge (no Proxmox VM access!)
auto vlan107
iface vlan107 inet static
        bridge-ports bond0.107
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware no
        address 10.2.24.23/24
        mtu 9000

# VLAN 3004 - Corosync (Host only)
auto bond0.3004
iface bond0.3004 inet static
        address 192.168.234.53/24
        mtu 1500

# Proxmox Bridge Interfaces
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        mtu 9000
        bridge-vlan-aware no

# VLAN 102 - Proxmox Administration (Host only - no Proxmox VM access!)
auto bond0.102
iface bond0.102 inet manual

auto vlan102
iface vlan102 inet static
        bridge-ports bond0.102
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware no
        address 10.1.101.163/23
        gateway 10.1.100.1
        post-up ip route add default via 10.1.100.1
        mtu 1500
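
Since PVE 7 ships ifupdown2 by default, changes to /etc/network/interfaces like the ones above can usually be applied without rebooting; a minimal apply-and-verify sequence (assuming ifupdown2 is installed) looks roughly like this:

Code:
# apply the new configuration in place
ifreload -a

# verify addresses, bridges and bridge memberships afterwards
ip -br addr show
bridge link show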
 
