We're running our PVE hypervisor nodes attached to two Cisco Nexus 5672 leaf switches, configured to support MTU 9000. All our hypervisor nodes allow MTU 9000 on their physical NICs for iSCSI traffic etc., and most of our VMs also allow MTU 9000 on their vNICs.
Two CentOS 6 VMs are used as an HAProxy load-balancing cluster. When some remote peers connect to it, they seem to want to use a large MTU, which of course doesn't work across multiple routers/hops, so such connections end with a timeout at our end.
We tried lowering the MTU to 1500 on the load balancer VMs' NICs, but same story.
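For reference, this is roughly how we've been checking things; a minimal sketch, assuming the VM's NIC is eth0 and `peer.example.com` stands in for one of the remote peers (both are placeholders):

```shell
# Show the MTU currently configured on the VM's NIC
ip link show eth0 | grep -o 'mtu [0-9]*'

# Probe the path MTU toward a peer: send a ping with the
# Don't-Fragment bit set (-M do) and a payload sized so the
# whole packet is 1500 bytes (1472 payload + 8 ICMP + 20 IP)
echo $((1472 + 8 + 20))   # = 1500
ping -M do -s 1472 -c 3 peer.example.com

# Temporarily force the interface MTU down to 1500
ip link set dev eth0 mtu 1500
```

If the sized ping fails with "Frag needed" while smaller payloads succeed, the path can't carry 1500-byte packets end to end, which would point at a PMTUD problem rather than the VM's own MTU setting.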
We're wondering why the peers are attempting to use a large MTU in the first place, and how to avoid this.
Any hints appreciated, TIA!