[SOLVED] So is OpenVSwitch bonding just broken on PVE 6? What's going on?

>>Do I need to add auto enp132s0 and auto enp132s0d1 to both the Linux Bridge config AND the OVS config in order to get that MTU to properly stick?

It's better to add it.
If you don't have "auto enpX", the "mtu xxx" value on that interface is not applied.
The MTU on bond0 is applied, though, and it may propagate down to enpX (but I'm not 100% sure; it depends on bridge vs. OVS and ifupdown1 vs. ifupdown2).

If you want to be sure, add "auto enpX"; it'll work 100%.

I have sent a patch to the pve-devel mailing list today to add "auto xxx" to the slave interfaces of a bond.
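For reference, a minimal sketch of what that looks like in /etc/network/interfaces (the interface names enp132s0/enp132s0d1 come from this thread; the 9000 MTU and 802.3ad bond mode are just example assumptions, adjust to your hardware and switch config):

```
# "auto" on each slave makes ifupdown actually apply the per-interface MTU
auto enp132s0
iface enp132s0 inet manual
    mtu 9000

auto enp132s0d1
iface enp132s0d1 inet manual
    mtu 9000

auto bond0
iface bond0 inet manual
    bond-slaves enp132s0 enp132s0d1
    bond-mode 802.3ad
    mtu 9000
```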
 
Ok, thanks spirit, appreciate that.

As for Linux bond bridge vs OVS - on 10G networks is there any difference in performance and speed? I got the Linux bond + bridge working right now and I'm thinking of keeping this running, as I like that I can reload the configuration from the GUI and also with ifreload -a. Is there any performance difference whatsoever between the two?
 
There is no performance benefit with OVS unless you use DPDK (which isn't supported by Proxmox anyway).
If you have a working Linux bridge setup, just keep it. You'll have fewer problems.
(And config reload with ifupdown2 is better implemented than with OVS.)
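For completeness, a sketch of the Linux bridge sitting on top of the bond, as in the setup discussed above (the address, gateway, and MTU are placeholder assumptions); with ifupdown2 you can then apply edits live:

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    mtu 9000
```

After editing /etc/network/interfaces, `ifreload -a` reapplies the configuration without a reboot, which is the reload behavior mentioned above.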