Hi Proxmox forum members,
We are trying to get the network on our new Proxmox servers set up correctly, but we are failing...
What we have:
Proxmox VE 7.3-4
3x new Supermicro servers with Xeon Silver (32C) and plenty of RAM
One Mellanox ConnectX-3 40G QSFP+ card in each
What we need:
3 VLANs with static IPs
What we already did:
bond0.1234 for Corosync (for our other 16 hosts)
bond0.123 for Storage (another Ceph cluster's frontend)
bond0.124 for Proxmox Administration
bond0.125 for Ceph Backend (another Ceph cluster's backend) - roughly as sketched below
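For reference, this is roughly how that first layout looked in /etc/network/interfaces - just a sketch, the NIC names, bond mode and addresses are placeholders, not our real values:

# placeholder port names and bond mode - not our real values
auto bond0
iface bond0 inet manual
        bond-slaves enp65s0f0 enp65s0f1
        bond-mode 802.3ad
        bond-miimon 100

# Corosync VLAN (placeholder address)
auto bond0.1234
iface bond0.1234 inet static
        address 10.12.34.11/24

# storage frontend VLAN (placeholder address)
auto bond0.123
iface bond0.123 inet static
        address 10.1.23.11/24

# Proxmox administration VLAN (placeholder address)
auto bond0.124
iface bond0.124 inet static
        address 10.1.24.11/24

# Ceph backend VLAN (placeholder address)
auto bond0.125
iface bond0.125 inet static
        address 10.1.25.11/24

# VM bridge on top of the same bond (vmbr0 settings varied, see below)
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0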
also tried:
bond0.1234 for Corosync (for our other 16 hosts)
vmbr0.123 for Storage (another Ceph cluster's frontend)
vmbr0.124 for Proxmox Administration
vmbr0.125 for Ceph Backend (another Ceph cluster's backend) - sketched below as well
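The second layout looked roughly like this (again only a sketch with placeholder addresses):

# Corosync stays as a VLAN on the bond (placeholder address)
auto bond0.1234
iface bond0.1234 inet static
        address 10.12.34.11/24

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

# host VLANs as subinterfaces of the bridge (placeholder addresses)
auto vmbr0.123
iface vmbr0.123 inet static
        address 10.1.23.11/24

auto vmbr0.124
iface vmbr0.124 inet static
        address 10.1.24.11/24

auto vmbr0.125
iface vmbr0.125 inet static
        address 10.1.25.11/24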
In both layouts we tried the following vmbr0 settings:
vmbr0 > bridge-vlan-aware no >>> "everything works" until we create or migrate a VM into one of the VLANs mentioned above, which takes that whole VLAN down. No traffic is possible, and there are no network errors in syslog... weird, I know. We only notice the services that stop responding (Ceph mon timeouts/banning, for example).
vmbr0 > bridge-vlan-aware yes & no bridge-vids >>> none of the VLANs come up - which makes sense
vmbr0 > bridge-vlan-aware yes & bridge-vids 2-4096 >>> only the VLANs up to 127 get added on the ConnectX-3 adapter... when bond0.1234 is brought up after vmbr0, that VLAN is lost... no VLAN above 127 is usable... - not workable for us, since we need a lot of VLANs
vmbr0 > bridge-vlan-aware yes & bridge-vids with specific VLAN IDs >>> this works, BUT: whenever we want to use an additional VLAN ID we have to change the configuration and restart vmbr0, which means downtime for all VMs... (see the sketch below)
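The last (almost working) variant of vmbr0 looks roughly like this - the VID list is only an example:

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        # example list - has to be extended (and vmbr0 restarted) for every new VLAN
        bridge-vids 123 124 125 1234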
Is there any way to solve this cleanly?