vmxnet3 MTU bug? Patch available?

cjones

Active Member
Jun 28, 2019
I am working with nested ESXi 7.0, which I have found is only compatible with the vmxnet3 NIC. When powering on the VM, it crashes quickly enough that it doesn't even fully boot, and I see this event in syslog:

QEMU[1560506]: kvm: ../hw/net/vmxnet3.c:1444: vmxnet3_activate_device: Assertion `VMXNET3_MIN_MTU <= s->mtu && s->mtu < VMXNET3_MAX_MTU' failed.

I've done some research on this, and it seems like a bug with the vmxnet3 driver that has an available patch. Has anybody else come across this? Is there a workaround to use while waiting for the patch to be implemented? Is there an ETA on applying the patch?

Thanks,
cjones
 
Hi,
the commit you mention is actually already included in pve-qemu-kvm >= 6.2.0. The assertion was introduced to protect against the integer overflow from the bug report. The fact that the assertion fails means that the s->mtu value is not in a valid range. The limits are defined here, and please note that the upper limit is excluded by the assert.
 
Ok, so forgive my naivety to the terminology and what it all means. The vmxnet3 NIC I have attached is connected to a bridge that has an MTU of 9000. Does that mean I've exceeded the limit? Or it's ignored following your last statement that it "is excluded by the assert"?
 
Yes, the assertion will fail if the value is greater than or equal to VMXNET3_MAX_MTU, which is 9000. But I think excluding that value might actually be a typo. I tested a patch allowing 9000, and it seems to work fine. I'll ask on the QEMU developer mailing list, and if it is indeed a typo, we'll include a fix in a future version. For now, I'm afraid you'll have to use a smaller value.

EDIT: Mail on the QEMU developer list
 
Thank you for putting this in front of the devs. I changed the MTU to 8999 and the VM booted just fine.

Just for clarification for anybody coming across this post: I was initially mistaken in thinking that the MTU was controlled from the Proxmox bridge. It's not; it's controlled from within the guest. So, in this case, I had to change the MTU on the nested ESXi virtual switch. Verified with esxcli network nic list.
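For anyone following along, the ESXi-side change can be sketched roughly like this (assuming a standard vSwitch named vSwitch0 — adjust the name to your setup):

```shell
# Lower the MTU on the nested ESXi standard vSwitch to a value below 9000
esxcli network vswitch standard set -v vSwitch0 -m 8999

# Verify the MTU now reported by the NICs
esxcli network nic list
```

Once the guest-side MTU is below 9000, the vmxnet3 activation assert no longer fires.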
 
