MTU limited to 3030?

cyruspy

Renowned Member
Jul 2, 2013
Hello!

I'm attempting to run a VMware vSphere lab on top of my Proxmox lab. The only out-of-the-box supported network interface seems to be VMXNET3, but in a basic ping test within the VM I can send at most a 3030-byte payload.

Has anybody seen such a limitation? Meanwhile, I'll try to source E1000 drivers for ESXi and report back. Will E1000E be an option in Proxmox in the future?

I've tested (host-side config sketch after the list):
Open vSwitch with MTU = 9100

1- CentOS 7 with VirtIO NIC + OVS bridge MTU=9000 --> "ping -s 9000" doesn't pass
2- CentOS 7 with VirtIO NIC + OVS bridge MTU=9050 --> "ping -s 9000" passes!
3- CentOS 7 with VMXNET3 + OVS bridge MTU=9000 --> "ping -s 9000" doesn't pass
4- CentOS 7 with VMXNET3 + OVS bridge MTU=9050 --> "invalid MTU size", cannot set the MTU
5- ESXi 7.0.1 with VMXNET3 + OVS bridge MTU=9000 --> "ping -s 3030" passes, nothing higher than that.
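For reference, a rough sketch of the host-side pieces involved; the interface names (eno1, vmbr1) and the peer address 10.0.0.2 are placeholders, and the ovs_mtu option assumes a current ifupdown2-style /etc/network/interfaces:

# /etc/network/interfaces on the Proxmox host (names are placeholders)
auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
    ovs_mtu 9100

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eno1
    ovs_mtu 9100

# Inside a Linux guest: "ping -s" sets the ICMP payload, and 28 bytes of IPv4+ICMP headers ride on top
ping -M do -s 8972 10.0.0.2   # largest unfragmented ping that fits a 9000-byte MTU
ping -M do -s 9000 10.0.0.2   # needs 9028 bytes on the wire, which is why it only passes on the 9050 bridge

That header overhead probably explains the difference between tests 1 and 2; the 3030-byte ceiling with ESXi looks like a separate issue.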
 
Seems to be a known bug with vmxnet3:

https://communities.vmware.com/t5/N...nning-ESX-under-KVM-with-VM-x-EPT/m-p/2668850

"
More misery to report.





I have tried every single QEMU network device with ESXI 6.5 with the following results:



e1000-82544gc -- Works with MTU 9000 but I can only get one to work per PCI bus. I'm currently using this as my solution with a vmxnet3 as my management interface. I'd like to have 5 more for tinkering.

e1000-82545em -- Recognized but can't get any packets to flow even with MTU 1500

e1000e -- Pings work with MTU 1500 but random "Rx receiver hangs" in vmkernel.log and TCP doesn't work (packetloss)

all remaining intel based vnics (i825xxx) -- not recognized

rtl8139 -- no jumbo frames

vmxnet3 --- works great but only supports MTU's of 3058 -- yes I tested all the way up to the exact byte. Ping -s 3030 works.. -s 3031 does not. This explains why vSan complains about MTU.



ATTENTION GOOD PEOPLE AT VMWARE: Could you PLEASE add a support for KVM virtio-scsi-pci and virtio-net-pci ?? This would enable people to build labs to learn without buying expensive equipment -- also may prevent divorce from the noise they make.



In my case, I was hoping to build a completely virtual VMware HCI lab on my workstation -- using megasas-gen2, which (in theory) could translate SCSI UNMAP (overlay) to SATA-TRIM (underlay) --- allowing me to use cheap-o SSD's as well.


Note: This is as of QEMU-2.8.0 and Linux 4.9.8
"
 
Thanks for providing feedback! Pre-7.0, it seems porting Linux drivers was "easily" an option. I'm eagerly investigating how to create a native VirtIO driver for ESXi, but it doesn't seem to be simple.

Even with VMXNET3 on Linux I cannot configure an MTU > 9000 to allow a 9000-byte payload; I'm not sure if it's a QEMU or a CentOS driver limitation. So even fixing the VMXNET3 issue on ESXi doesn't guarantee a fix for me (it might be on the QEMU side).
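If it helps anyone digging into this, the Linux vmxnet3 driver itself appears to cap the MTU at 9000 (VMXNET3_MAX_MTU in drivers/net/vmxnet3/vmxnet3_int.h), so the 9000 ceiling on the Linux side may be by design rather than a QEMU problem. A quick check from inside the guest (ens192 is a placeholder interface name; the maxmtu field needs a reasonably recent kernel):

ip -d link show ens192 | grep -o 'maxmtu [0-9]*'   # recent kernels report the driver's ceiling here
ip link set dev ens192 mtu 9000                    # accepted by the vmxnet3 driver
ip link set dev ens192 mtu 9050                    # rejected, same "invalid MTU" symptom as test 4 above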

The weird thing is that ESXi on top of ESXi doesn't seem to have that MTU limitation; might there be something in the QEMU/KVM VMXNET3 implementation?

Right now there's no support for any of the other mentioned NICs in ESXi 7 (e1000 was dropped). The only option seems to be VMXNET3, until somebody can create a native VirtIO driver :/
 
As a side note, I was able to test all but one device by modifying the configuration file directly. If I use "e1000e", the device magically disappears after reloading the GUI. That doesn't happen with the rest.
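In case it's useful to others, a sketch of what such a manual edit might look like in /etc/pve/qemu-server/<vmid>.conf; the VMID, MAC addresses and bridge name are placeholders, and not every model is guaranteed to be accepted by every PVE version:

# /etc/pve/qemu-server/100.conf (placeholders throughout)
net0: vmxnet3=DE:AD:BE:EF:00:01,bridge=vmbr1        # management NIC, selectable in the GUI
net1: e1000-82544gc=DE:AD:BE:EF:00:02,bridge=vmbr1  # model only settable via the config file / CLI

Running "qm showcmd <vmid> --pretty" afterwards should show which -device line QEMU actually gets.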
 
Hello! I want to report that after upgrading to PVE 7, Linux VMs with VMXNET3 now seem to work properly, but ESXi 7 is still limited to a 3030-byte payload.
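For anyone wanting to reproduce the check from inside the nested ESXi guest, something along these lines should work (the peer address is a placeholder; -d sets don't-fragment and -s the ICMP payload size):

vmkping -d -s 3030 192.168.1.1   # passes with the emulated VMXNET3
vmkping -d -s 3031 192.168.1.1   # fails, matching the 3030-byte ceiling
vmkping -d -s 8972 192.168.1.1   # what a healthy 9000-byte MTU path should carry (9000 minus 28 bytes of headers)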
 
