Set MTU on Guest

I have had these patches running in production for more than 6 months now without any issues. Is there any reason why they have not been merged into the official code tree? Maybe we can do something to speed up the approval process?
 
BUMP

Setting the MTU is useful in more scenarios than just Hetzner. Leaving this to the guest is pretty unclean.

Looks like Mr. Dietmar rejected this:
https://pve.proxmox.com/pipermail/pve-devel/2018-August/033558.html
as he dislikes device-specific options.

That makes no sense. While I can see where he is coming from, this logic would practically block any improvement of a virtual device over its hardware sibling.
If we followed it blindly, the common feature set would always be the oldest, smallest feature set in a device category.

Setting MTUs isn't something from outer space; it's a regular thing you often need with VLANs and VPNs. Client-side manual tweaking doesn't suit that well, since it is a per-network requirement, not a per-VM one.
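To make the per-network point concrete, here is a quick sketch of the arithmetic; the overhead figures are common reference values, not taken from this thread, so verify them against your actual tunnel configuration:

```shell
# Rough sketch: how encapsulation overhead eats into a 1500-byte parent MTU.
# Overhead figures are textbook values, not measured on a PVE host.
parent_mtu=1500
vxlan_overhead=50       # 14 inner eth + 20 ip + 8 udp + 8 vxlan
wireguard_overhead=60   # 20 ip + 8 udp + 32 wireguard (ipv4 case)

echo "vxlan: guest MTU should be <= $((parent_mtu - vxlan_overhead))"
echo "wireguard: guest MTU should be <= $((parent_mtu - wireguard_overhead))"
```

Two VMs on different overlay networks therefore need different MTUs, which is exactly why a per-NIC setting beats manual in-guest tweaking.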


@spirit
do you have an updated version of that patch for the latest Proxmox?
 
I have a patched version that has been working on production for more than a year now, and a little over a month on PVE 6.0.
I haven't had any problems with this particular patch so far, so I think it's quite reliable. You just need to remember to re-apply the patches after each PVE update. (I have written a post-invoke script for dpkg that patches the right files, and set up dpkg-divert.)
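For anyone wanting to automate that re-patching, a dpkg post-invoke hook is just an apt.conf fragment plus a small script. A minimal sketch; all paths and filenames here are hypothetical placeholders, not the poster's actual setup:

```shell
#!/bin/sh
# Hypothetical /usr/local/sbin/reapply-mtu-patch: re-apply the MTU patch
# only if an upgrade has replaced the shipped file.
#
# Hook it into apt via an apt.conf fragment, e.g.
# /etc/apt/apt.conf.d/99-reapply-mtu-patch containing:
#   DPkg::Post-Invoke { "/usr/local/sbin/reapply-mtu-patch || true"; };
grep -q host_mtu /usr/share/perl5/PVE/QemuServer.pm 2>/dev/null || \
    patch -d / -p1 < /root/mtu-pve6.patch
```

The `|| true` keeps a failed patch from aborting apt runs; check the log output after upgrades instead.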

P.S. My patches that work on PVE 6 are in the attachments.
 


Works like a charm! Please add the patch to the official Proxmox version.
 
I'm setting my guest VM's MTU to 1400 via DHCP.
The option number is 26, the type is unsigned 16-bit integer, and the value is 1400.
I'm running DHCP on a pfsense cluster, but it should work on other DHCP servers as well.
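For reference, on ISC dhcpd the same thing is a one-liner, since DHCP option 26 is named `interface-mtu` (RFC 2132); the subnet and range values below are made-up examples:

```shell
# ISC dhcpd.conf sketch (subnet/range are placeholder values).
# Option 26 = "interface-mtu"; pfSense exposes it via the numbered
# custom options in the DHCP server UI instead.
subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.100 10.0.0.200;
    option interface-mtu 1400;   # clients that honour option 26 use MTU 1400
}
```

Note this only helps guests whose DHCP client actually applies option 26; statically configured guests still need the MTU set another way.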

Best regards
Sebastian
 
Hi

If you have VMs with VirtIO NICs, you can try to hack

/usr/share/perl5/PVE/QemuServer.pm

sub print_netdevice_full {
    ...
    $tmpstr .= ",bootindex=$net->{bootindex}" if $net->{bootindex};
    $tmpstr .= ",host_mtu=1400" if $net->{model} eq 'virtio';   # <-- added line

then

systemctl restart pvedaemon


and start your vm.
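To sanity-check the hack without booting into the guest, you can inspect the QEMU command line that PVE generates with `qm showcmd <vmid>`. The device string below is a made-up sample of the output shape, not captured from a real host:

```shell
# On the PVE host (after restarting pvedaemon): qm showcmd <vmid>
# Made-up sample of the relevant -device argument:
sample='-device virtio-net-pci,mac=AA:BB:CC:DD:EE:FF,netdev=net0,host_mtu=1400'
echo "$sample" | grep -o 'host_mtu=[0-9]*'
# Inside the guest, `ip link` should then report mtu 1400 on the virtio NIC.
```

If `host_mtu` is missing from the real output, the edit didn't take effect (wrong file, or pvedaemon not restarted).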

Hi,

I have this setup and it works like a charm, but... some VMs with VirtIO get MTU 1400, while other VMs, also with VirtIO, do not (they keep the default MTU of 1500). How can I add an individual mtu parameter to /etc/pve/qemu-server/xxx.conf?

Thanks
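For what it's worth, judging by the `net0.mtu` schema error reported later in the thread, the patch adds an `mtu` sub-option to the netX line, so a per-VM setting in the config file should look something like this (VM ID, MAC, and bridge are placeholders):

```shell
# /etc/pve/qemu-server/100.conf sketch (values are placeholders;
# assumes the mtu patch from this thread is applied)
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,mtu=1400
```
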
 
I've applied the patch and restarted pveproxy, but I get:
Error
Parameter verification failed. (400)
net0: invalid format - format error net0.mtu: property is not defined in schema and the schema does not allow additional properties

What else needs a restart to make it work? [edit: answering my own question: pvedaemon needs to be restarted]

Also, in the mailing list there are 3 files patched, but only 2 here... what's the difference?
(I cannot see the attachments using the mailing list archive...)

Thanks.
 
It seems the latest update to pve-manager (6.1-8) doesn't work properly, even after re-applying the patches.
I now get a "Unable to parse network options" in the GUI trying to open Hardware -> Network Device (net0).

[edit] Reverted pve-manager to 6.1-7, applied your patch for pvemanagerlib.js, and it's back to normal.
 
Does your latest patch work on pve-manager >= 6.1-8?
If so, could you also paste a link here? I'd like to patch manually in the meantime... I'm worried about staying pinned to 6.1-7 while the rest of Proxmox is updated frequently.
Thanks
 
I don't understand why this hasn't been integrated yet. There are clearly multiple use cases for it. It could be a hidden option (behind an advanced checkbox), just like with the NICs for the host.

Thanks
 