PMTUD or large MTU size

stefws

We're running our PVE hypervisor nodes attached to two Cisco Nexus 5672 leaf switches, configured to support MTU 9000. So our hypervisor nodes all allow MTU 9000 on their physical NICs for iSCSI traffic etc., and most of our VMs also allow MTU 9000 on their vNICs.
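For reference, the jumbo setting on a NIC is along these lines (interface name is just an example, not our exact config):

    # runtime setting on the node / VM
    ip link set dev eth1 mtu 9000
    # persistent on CentOS 6: add to /etc/sysconfig/network-scripts/ifcfg-eth1
    MTU=9000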

Two CentOS 6 VMs are used as an HAproxy load-balancing cluster. When some remote peers connect to it, they seem to want to use a large MTU, which of course doesn't work across multiple routers/hops, so such connections end in a timeout at our end.

We tried to lower the MTU to 1500 on the load balancer VMs' NICs, but same story.
Wondering why peers are attempting to use a large MTU in the first place and how to avoid this?
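For completeness, this is roughly how we've been watching a failing session on the balancer (interface is just a placeholder); the SMTP dialog simply stops right after DATA:

    tcpdump -ni eth0 -v port 25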

Any hints appreciated, TIA!
 
Could it be that path MTU discovery is not working in your setup? That can be the case if ICMP is blocked somewhere in your network.
With Gigabit Ethernet and 10 GbE coming, I am not sure it still makes sense to use a large MTU these days. You get a lot of configuration headaches and the throughput gain is at most 2-3%.
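To check the PMTUD theory, you could send non-fragmenting pings of increasing size towards the balancer from outside, and watch on the balancer whether ICMP "fragmentation needed" (type 3, code 4) messages get through at all. Roughly (hostname and interface are placeholders):

    # from a remote host: 1472 bytes of payload + 28 bytes of headers = 1500
    ping -M do -s 1472 lb.example.com
    # on the balancer: do any "frag needed" ICMP messages arrive?
    tcpdump -ni eth0 'icmp[0] == 3 and icmp[1] == 4'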
 
Hm, I think not, partly because I only see issues for some incoming TCP connection attempts: they get to DATA in an SMTP dialog and then the flow stops. I believe the MSS value should be calculated from the NIC's MTU during the TCP SYN/ACK phase, hence the attempt to lower the VMs' NIC MTUs. But I'm not a network expert and my net admins haven't got a clue... :/
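One thing I haven't tried yet, so treat it as a sketch only: apparently the MSS the balancer advertises can also be pinned on the route instead of lowering the NIC MTU, e.g. (gateway and interface are placeholders; 1460 = 1500 minus 40 bytes of IP+TCP headers):

    ip route change default via 192.0.2.1 dev eth0 advmss 1460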

You might be right about performance vs. the issues with a large MTU, as we're on 2x10 Gbps, but still, we want iSCSI etc. to be as slick as possible :)

Thanks for your reply!
 
Most of my clients run decent FC and only a few are running 10 GbE iSCSI (they know why ... I have never seen an iSCSI SAN beat an FC SAN), but all of them have terrible performance with MTU 1500 and much better performance (+25-50%) with MTU 9000, benchmarked with fio. The tricky part is to have everything at MTU 9000.
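For what it's worth, the fio runs I compare with look roughly like this (device path, block size and runtime are only placeholders; randread so it doesn't write to the device):

    fio --name=jumbotest --filename=/dev/sdX --direct=1 --rw=randread --bs=64k --ioengine=libaio --iodepth=32 --runtime=60 --time_based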
 
Thanks, that was also our initial reason to run everything internally on our network at MTU 9000, and everything runs fine at MTU 9000. Only it seems to hinder some remote peers, probably also using a larger MTU, from talking to our IP load balancers.

Currently using MTU 1500 on the load balancers' public NICs and hoping this will work better...
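To double-check what a remote path really supports towards the balancer's public address, something like tracepath from an outside host should report the discovered path MTU per hop (hostname is a placeholder):

    tracepath lb.example.com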
 
Just curious and to understand you correctly: you're running a load balancer in between your initiator and target?
Nope, not for iSCSI traffic; iSCSI is used by the hypervisor nodes as a shared SAN for VM storage, to allow live migrations.
The IP load balancer runs in a VM to balance traffic from remote peers across other service VMs.

iSCSI is just the main reason to use large MTU on our internal networks.
 
A couple of pieces of advice:
1. Use separate network interfaces for iSCSI and VM traffic, e.g. put them in different VLANs on the switch.
2. Set the switch ports to MTU 1500 for VM traffic and MTU 9000 for iSCSI if possible (a rough config sketch follows below).
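As a rough sketch of how that can look on the PVE host side (VLAN ID, bridge names and addresses are only illustrative, adapt to your own /etc/network/interfaces):

    # /etc/network/interfaces (Debian/PVE style)
    # storage VLAN at MTU 9000 for iSCSI
    auto eth1.20
    iface eth1.20 inet static
        address 10.10.20.5
        netmask 255.255.255.0
        mtu 9000

    # VM-facing bridge left at the default MTU 1500
    auto vmbr0
    iface vmbr0 inet manual
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0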
 
Thanks, we know!
It's purely a cost-based decision not to separate the networks physically, and our iSCSI isn't heavily loaded anyway; we are of course using different VLANs :) We can do traffic shaping between VLANs in the switches if desired.
 
