https://forum.proxmox.com/threads/proxmox-bridge-mtu-issue.51148/#post-293884
On that thread you will find my post and configs; there you will notice that I'm using QinQ.
My topology there looks like this:
Code:
vFirewall (OpnSense) <> vmbrXX <> vmbrY <> Cisco Nexus 3548X <> Juniper MX
       |                  |        |              |
       V                  |        V              V
    vTrunk                |    LACP Bond      LACP Bond
       |                  |     (Trunk)        (Trunk)
       V                  |
    vmbrXX  (slave port is vmbrY.xx;
             no need to declare it on vmbrY)
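As a rough sketch of how that bridge stacking can look in /etc/network/interfaces (the names vmbrY, vmbrXX, bond0 and the VLAN ID xx are placeholders from the diagram above, not my actual config):

```
# vmbrY: the "outer" bridge, carries the S-Tag trunk toward the switches
auto vmbrY
iface vmbrY inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

# vmbrXX: the "inner" bridge the VM attaches to; its only slave port is
# the sub-interface vmbrY.xx (no need to declare VLAN xx on vmbrY itself)
auto vmbrXX
iface vmbrXX inet manual
        bridge-ports vmbrY.xx
        bridge-stp off
        bridge-fd 0
```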
For QinQ the MTU is critical; review that carefully, since the extra S-Tag adds 4 bytes to every frame.
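Concretely: each 802.1Q tag adds 4 bytes, so a double-tagged frame carrying a 1500-byte payload needs at least 1504 bytes of MTU on the transport path; the easy way out is jumbo frames end to end. A minimal sketch, assuming jumbo frames and placeholder interface names:

```
# raise the MTU on the physical path so double-tagged frames fit;
# the VMs behind the inner bridge can stay at the default 1500
auto bond0
iface bond0 inet manual
        mtu 9000

auto vmbrY
iface vmbrY inet manual
        bridge-ports bond0
        mtu 9000
```

The matching MTU must also be configured on the switch ports carrying the S-Tag VLAN, or large frames will be silently dropped.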
My solution is a 3-node PVE cluster, and using QinQ simplifies the connections between VMs and services. Once a VLAN is created on the physical switches (the S-Tag), that VLAN becomes a transport for the PVE vSwitch, with all the inner VLANs (C-Tags) riding piggyback.
In reality there are two Nexus switches in vPC toward each PVE host for redundancy, plus two MXs... to avoid a SPOF.
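From the PVE side the vPC pair just looks like a single LACP bond; a minimal sketch, assuming one NIC toward each Nexus (enp1s0f0/enp1s0f1 are placeholder NIC names):

```
# LACP (802.3ad) bond spanning both Nexus switches of the vPC pair;
# losing one switch only halves bandwidth, it does not break the link
auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        bond-miimon 100
```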
This link will give you an idea:
https://www.packetflow.co.uk/what-is-cisco-vpc-virtual-port-channel/
I didn't fully read the thread, but I hope this gives you some guidance.