Enable MTU 9000 Jumbo Frames

Hi!
Edit /etc/network/interfaces like this (add a post-up command):

auto eth4
iface eth4 inet static
address 192.168.110.4
netmask 255.255.255.240
post-up ifconfig eth4 mtu 9000

After this, restart the interface, for example:

# ifdown eth4 && ifup eth4
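
To confirm the change took effect, you can check the link afterwards (a generic check, not part of the original post):

# ip link show eth4

The output should now include "mtu 9000".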
 
You need Open vSwitch for that (Linux bridges don't support MTU > 1500).
Linux bridges take the smallest MTU of their slaves, so if you set the MTU on the NIC, the bridge should automatically get this MTU.
@gosha your comment should work
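
A quick way to check that behaviour (my own example; eth0 and vmbr0 are assumed names for a bridge port and its bridge):

# ip link set eth0 mtu 9000
# ip link show vmbr0

If the bridge really takes the smallest MTU of its slaves, vmbr0 should now also report mtu 9000.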
 
This is my interfaces file

auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
address 172.23.37.205
netmask 255.255.248.0
gateway 172.23.32.1
bridge_ports eth0
bridge_stp off
bridge_fd 0

I do not have a static IP; it is assigned dynamically via DHCP.

How do I add the post-up command?
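
For a DHCP-configured bridge, the stanza would presumably look like this (a sketch, untested; vmbr0 and eth0 taken from the config above):

auto vmbr0
iface vmbr0 inet dhcp
bridge_ports eth0
bridge_stp off
bridge_fd 0
post-up ifconfig eth0 mtu 9000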
 
auto vmbr0
iface vmbr0 inet static
address 192.168.59.190
netmask 255.255.255.0
gateway 192.168.59.191
bridge_ports eth0
bridge_stp off
bridge_fd 0
up ifconfig eth0 mtu 4000 || true
 
Not sure if this is the proper way, but this works for me:

auto vmbr2
iface vmbr2 inet static
address 172.16.0.2
netmask 255.255.255.0
bridge_ports eth2
bridge_stp off
bridge_fd 0
pre-up ifconfig eth2 mtu 9000
 
Nothing posted here worked for me. I suspect it is because ifconfig is deprecated in the version of Debian used in Proxmox VE 5.1.

What DID work was adding the following to /etc/network/interfaces under the bridge:

ip link set dev eth0 mtu 9000 (eth0 is the name of the network interface, change accordingly)
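
Presumably that line sits behind a post-up hook; in context the stanza would look something like this (my reading of the post, not a verbatim config):

auto vmbr0
iface vmbr0 inet static
...
bridge_ports eth0
bridge_stp off
bridge_fd 0
post-up ip link set dev eth0 mtu 9000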
 
For the avoidance of doubt for anyone else coming to this topic, this is the full command I used. Note I don't bother with the "dev" keyword in ip link; it seems redundant:

iface vmbr1 inet manual
bridge_ports enp1s0
bridge_stp off
bridge_fd 0
pre-up ip link set <interface name> mtu 9000


Replace the <interface name> with whatever the name of your interface is. For me this is enp1s0.
 
I have the following bonded LACP setup.

auto bond2
iface bond2 inet static
address 172.16.4.252
netmask 24
bond-slaves enp130s0f0 enp130s0f1
bond-miimon 100
bond-mode 802.3ad
pre-up ip link set enp130s0f0 mtu 9000 && ip link set enp130s0f1 mtu 9000 && ip link set bond2 mtu 9000
#Backup Network


Setting the MTU per the above post does not work when I use pre-up, but does work when I use post-up like below. Are there any issues doing it one way vs. the other?

post-up ip link set enp130s0f0 mtu 9000 && ip link set enp130s0f1 mtu 9000 && ip link set bond2 mtu 9000

Thanks,
Eric
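
For reference, a quick way to check the result after ifup (generic commands, not from the post above):

# ip link show enp130s0f0 | grep -o 'mtu [0-9]*'
# ip link show bond2 | grep -o 'mtu [0-9]*'

Both the slaves and the bond itself should report mtu 9000 once the post-up hook has run.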
 
Hi,
I don't know why users want to use pre-up/post-up....

You can simply use:

Code:
iface enp130s0f0 inet manual
    ......
    mtu 9000

iface bond2 inet static
     ....
    mtu 9000
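
Spelled out against the bond config from the question, that approach would presumably look like this (a sketch assembled from the two posts, untested):

Code:
auto enp130s0f0
iface enp130s0f0 inet manual
    mtu 9000

auto enp130s0f1
iface enp130s0f1 inet manual
    mtu 9000

auto bond2
iface bond2 inet static
    address 172.16.4.252
    netmask 24
    bond-slaves enp130s0f0 enp130s0f1
    bond-miimon 100
    bond-mode 802.3ad
    mtu 9000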
 
I second this. I use mtu 9000 after every interface and it works great.
 
Just some quick feedback for those hitting this page from a search engine. Enabling jumbo frames does bring performance improvements, especially in latency and responsiveness of the system. Overall usage of the management interface, and of the guests themselves, is more "snappy" with the MTU set to 9000.

My environment at the moment of writing this:
- cluster of 3 nodes running PVE 7.3-6, i5-6600T CPUs and Intel I219-LM 1 Gbit NICs
- TP-Link T2600G-28MPS managed switch, supporting jumbo frames on selected ports
- Synology DS416play NAS with jumbo frames (MTU 9000) enabled on the 1 Gbit port participating in the PVE network (this NAS provides shared NFS storage to the cluster nodes)

The PVE network config to enable jumbo frames is simply:
Bash:
...
iface eno1 inet manual
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        mtu 9000
        ...

Live migration speeds between nodes increased by 50% on average (of course with migration "type=insecure" at this CPU class).
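
As an aside, a simple way to verify that jumbo frames actually pass end to end is a do-not-fragment ping sized for a 9000-byte MTU (a generic check, not from this post; the target address is just an example):

Bash:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 172.16.0.2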
 
I second this. I use mtu 9000 after every interface and it works great.

Agreed.

However, pre-up can be useful if you want to make sure the individual members of the bond are brought up before the bond is, for example.
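
For example, something like this (an illustration of that pattern, not a tested config; names reused from the bond post above):

auto bond2
iface bond2 inet static
    ...
    mtu 9000
    bond-slaves enp130s0f0 enp130s0f1
    bond-miimon 100
    bond-mode 802.3ad
    pre-up ip link set enp130s0f0 up && ip link set enp130s0f1 up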
 
