Hello everyone,
I have two clusters of almost identical Dell R730xd servers: one equipped with a QLogic mezzanine card (BCM57840) providing 4 x SFP+ ports, and one with a similar board providing 2 x 1 Gbps copper and 2 x 10 Gbps copper ports.
In all clusters the configuration is similar: two ports are bonded with LACP (802.3ad) under vmbr0, and the hosts use a VLAN on that bridge for management; the same management network also carries other VMs and CTs. The other two ports are dedicated to corosync. A trimmed sketch of the relevant config is below.
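For reference, this is roughly the relevant part of my /etc/network/interfaces (interface names, VLAN ID 40 and addresses are placeholders, not the real values):

    # 2 x 10G ports in an LACP bond
    auto eno1
    iface eno1 inet manual

    auto eno2
    iface eno2 inet manual

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100

    # VLAN-aware bridge shared by the host and the VMs/CTs
    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

    # host management IP on VLAN 40 of vmbr0
    auto vmbr0.40
    iface vmbr0.40 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1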
I removed two servers from a cluster to create a fourth, smaller cluster, wiped them and reinstalled them with PVE 9.0.11.
Problem: when I create a VM/CT on vmbr0 with the same VLAN tag as the host, that VLAN stops working on the host while it keeps working in the VM/CT. If I log in to the host and run systemctl restart networking, it stops working in the VM/CT and is restored on the host.
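When the VLAN drops on the host, these are the kind of checks I can run and post output for (a sketch; 40 is a placeholder for my real management VLAN tag):

    bridge vlan show                # VLANs allowed on bond0 and the tap/veth ports
    bridge fdb show br vmbr0        # where vmbr0 learned the host and guest MACs
    ip -d link show vmbr0.40        # state of the host's management VLAN interface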
On cluster 1, with the same hardware, everything works fine on PVE 8.
Cluster 2 is on 9.0.11, installed from scratch but with slightly different NICs, and works as well.
Already tried:
- Every network configuration I could think of (removing LACP, using a single port, using a different bridge/VLAN, etc.)
- Changing the switch (the problem is on the host itself, though).
- Updating the NIC firmware.
- Reinstalling everything from scratch
- Searching for similar problems; nothing relevant found.
- Recreating the same config on a different server at the same site, with the same DACs and switch port, and it works there.
I'll try:
- Changing the NIC, but I'd like to understand what the problem is. Until those two servers were removed from the original cluster, everything was working.
