MTU and Proxmox?

Dunuin

Hi,

Right now I'm rebuilding my network because I switched the servers from Gbit to 10G but I'm not sure how to optimize the MTU.

1.) Is it useful to switch from 1500 MTU to 9000 MTU jumbo frames? I've heard this reduces the number of packets and therefore increases throughput, but that it is also bad for latency and that bigger frames are more likely to get corrupted.

My new switch and the 10G NICs support jumbo frames. Most of the data on my LAN is transferred between the Proxmox hypervisor (now with 10G over a tagged VLAN), the FreeNAS server (now with 10G over a tagged VLAN), my main PC (now with 10G over an untagged VLAN) and the second FreeNAS server for backups (no 10G because it is only connected to the LAN over Wi-Fi).

The network of the hypervisor looks like this:
hypervisor_net_example1.png
The NICs ens5 and eno1 are both connected to my switch. Ports on the switch are set to only allow tagged VLAN.
VLAN 42 is my DMZ, VLAN 43 is my LAN and VLAN 45 is a VLAN I use so VMs in the DMZ can directly access the NAS. I don't want all hosts in the DMZ to be able to access the NAS, and routing between the VLANs wouldn't reach 10G, so I thought this additional VLAN would be a good idea.


2.) If I give the ens5 and eno1 NICs on the hypervisor an MTU of 8900, so there is a little headroom for the overhead of the different protocols and the 9000 MTU of the switch the server is attached to won't be exceeded, what should the MTUs of the bond, VLAN and bridge interfaces look like? I would think bond0 should also be 8900. But what about the VLAN interfaces like bond0.42 and so on? They tag the traffic going from the bridges to the bond. Do these VLAN interfaces need an MTU of 8896 or 8900? I would think the bridges should be 8896, because they are not VLAN aware and the tagging on the VLAN interfaces adds 4 bytes to each frame, so the 8900 MTU of the bond and the NICs won't be exceeded.
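For illustration, this is roughly what I have in mind in /etc/network/interfaces, shortened to one VLAN (the bridge name, the address and the bond mode are just placeholders; the MTU values are the ones I'm unsure about):
Code:
auto ens5
iface ens5 inet manual
    mtu 8900

auto eno1
iface eno1 inet manual
    mtu 8900

auto bond0
iface bond0 inet manual
    bond-slaves ens5 eno1
    bond-mode 802.3ad       # placeholder, whatever mode is actually used
    bond-miimon 100
    mtu 8900

auto bond0.43
iface bond0.43 inet manual
    mtu 8900                # or 8896? that is the question

auto vmbr43
iface vmbr43 inet static
    address 192.168.43.X/24
    bridge-ports bond0.43
    bridge-stp off
    bridge-fd 0
    mtu 8896                # my guess, because of the 4 byte VLAN tag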

3.) What about the MTU of the guests? I think a jumbo frame MTU only makes sense for interfaces that use services provided by my NAS, like the NIC ens19 on the left VM, which is used for SMB only. Jumbo frames for ens18 on the left VM wouldn't make sense because my ISP's router and DSL only allow packets somewhere between 1400 and 1500 bytes. So ens18 should use an MTU somewhere between 1400 and 1500 bytes?

4.) But what about the VM on the right? NIC ens18 is used for both SMB and local services. Jumbo frames would be fine for SMB but not for my local services, because most hosts in my home network only allow MTUs up to 1500. I would think increasing the MTU to 8896 for that NIC isn't a good idea, because most hosts can't accept it. Is it a good idea to add a second virtual NIC to that VM, attached to a new VLAN connecting the hypervisor and the NAS, so I could use one virtual NIC for SMB only and one virtual NIC for local services, like I do with the VM on the left side?
 
Code:
1.) Is it useful to switch from 1500 MTU to 9000 MTU jumbo frames? I've heard this reduces the number of packets and therefore increases throughput, but that it is also bad for latency and that bigger frames are more likely to get corrupted.
Yes, it reduces the number of packets (bigger packets), so it can help if you have an old switch or an old NIC with limited pps throughput. (For VMs the number of packets per second is even more limited.)
It also helps if you need more throughput (mainly on 10Gbit), because less header information (ip src/dst, port src/dst) has to be transmitted, so you can maybe reach 970MB/s instead of 920MB/s on a 10Gbit/s link.

But it's true that it also increases latency, and retransmits are bigger too if you have a bad network.
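As a rough back-of-envelope (my own numbers, assuming TCP over IPv4 without options, 18 bytes Ethernet header/FCS and 20 bytes preamble/inter-frame gap per frame), the header overhead itself is small; the big difference is the frame rate the NIC and CPU have to handle:
Code:
# approximate max goodput and frame rate on a 10 Gbit/s link
awk 'BEGIN { for (mtu = 1500; mtu <= 9000; mtu += 7500) {
    wire = mtu + 18 + 20; payload = mtu - 40
    printf "mtu %d: %.2f Gbit/s goodput, %d frames/s\n",
           mtu, 10 * payload / wire, 10e9 / (wire * 8) } }'
# mtu 1500: 9.49 Gbit/s goodput, 812743 frames/s
# mtu 9000: 9.91 Gbit/s goodput, 138305 frames/s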


Code:
2.) If I give the ens5 and eno1 NICs on the hypervisor an MTU of 8900, so there is a little headroom for the overhead of the different protocols and the 9000 MTU of the switch the server is attached to won't be exceeded, what should the MTUs of the bond, VLAN and bridge interfaces look like? I would think bond0 should also be 8900. But what about the VLAN interfaces like bond0.42 and so on? They tag the traffic going from the bridges to the bond. Do these VLAN interfaces need an MTU of 8896 or 8900? I would think the bridges should be 8896, because they are not VLAN aware and the tagging on the VLAN interfaces adds 4 bytes to each frame, so the 8900 MTU of the bond and the NICs won't be exceeded.

For the VLAN overhead, the Linux stack is generally able to handle that without an MTU change (I don't remember exactly how).
So MTU 9000 should work, and physical switches generally use around 9200 for jumbo frames.

(If you need an extra encapsulation layer like VXLAN or IPsec/WireGuard, you can reduce the MTU a little, by around 80-100 bytes.)
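Once it's configured you can check the MTU that is actually applied on every layer, e.g. (the bridge name is just an example):
Code:
# the mtu in use is shown in the first line for each interface
ip link show ens5
ip link show bond0
ip link show bond0.42
ip link show vmbr0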



Code:
3.) What about the MTU of the guests? I think a jumbo frame MTU only makes sense for interfaces that use services provided by my NAS, like the NIC ens19 on the left VM, which is used for SMB only. Jumbo frames for ens18 on the left VM wouldn't make sense because my ISP's router and DSL only allow packets somewhere between 1400 and 1500 bytes. So ens18 should use an MTU somewhere between 1400 and 1500 bytes?

4.) But what about the VM on the right? NIC ens18 is used for both SMB and local services. Jumbo frames would be fine for SMB but not for my local services, because most hosts in my home network only allow MTUs up to 1500. I would think increasing the MTU to 8896 for that NIC isn't a good idea, because most hosts can't accept it. Is it a good idea to add a second virtual NIC to that VM, attached to a new VLAN connecting the hypervisor and the NAS, so I could use one virtual NIC for SMB only and one virtual NIC for local services, like I do with the VM on the left side?

If you want to use a NAS with MTU 9000 inside your VM, and the VM also connects to the internet (1500 for the routers on the internet), you need to use 2 NICs in your VM (generally the main NIC with MTU 1500 and the default gateway, and a second NIC with MTU 9000 on the same VLAN/subnet as your NAS).


What is important is the MTU on the interface where the IP address is defined.

You can set up MTU 9000 everywhere on your network (physical switch, Proxmox bridge, Proxmox NICs, ...) and set MTU 1500 inside your VM guest; that's fine, it'll use MTU 1500 (and it won't be fragmented, because your network accepts MTUs up to 9000).
You could also set up 2 VLANs inside your guest VM OS, with 2 different MTUs; that works too.
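Inside the guest it could look roughly like this, if the guest uses ifupdown (interface names, VLANs and addresses are just placeholders for your setup):
Code:
# main NIC: normal 1500 MTU, carries the default gateway
auto ens18
iface ens18 inet static
    address 192.168.43.X/24
    gateway 192.168.43.1
    mtu 1500

# second NIC: jumbo frames, same VLAN/subnet as the NAS, no gateway here
auto ens19
iface ens19 inet static
    address 192.168.45.X/24
    mtu 9000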
 
Hello,

We have a 10G network and our migration speed between 2 nodes is only 2.4G. I don't know why, so I started to change the MTU on the switch and router, but not on the Proxmox server, because I don't understand where :)

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

iface enp4s0f0 inet manual

iface enp4s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 89.XX.XX.XX/24
    gateway 89.XX.XX.XX
    bridge-ports enp4s0f0
    bridge-stp off
    bridge-fd 0


enp4s0f0 > 10GB Network

Code:
root@XXXXX:~# ip a s enp4s0f0
6: enp4s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether a0:1d:48:73:5c:a0 brd ff:ff:ff:ff:ff:ff


Any help please?
 
Code:
auto enp4s0f0
iface enp4s0f0 inet manual
    mtu 9000

auto vmbr0
iface vmbr0 inet static
    address 89.XX.XX.XX/24
    gateway 89.XX.XX.XX
    bridge-ports enp4s0f0
    bridge-stp off 
    bridge-fd 0
    mtu 9000
Thank you
 
Code:
auto enp4s0f0
iface enp4s0f0 inet manual
    mtu 9000

auto vmbr0
iface vmbr0 inet static
    address 89.XX.XX.XX/24
    gateway 89.XX.XX.XX
    bridge-ports enp4s0f0
    bridge-stp off 
    bridge-fd 0
    mtu 9000


After I made the changes the network speed is under 1MB/s :| I switched back to the 1G cable and it is working fine.
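Could the switch be dropping the jumbo frames? As far as I know, a ping with the don't-fragment flag set should show whether a full 9000 byte frame makes it through end to end, something like:
Code:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header), -M do forbids fragmentation
ping -M do -s 8972 <address-of-the-other-node>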
 
Switching from 1500 to 9000 MTU increased my throughput. Both tests use the same hosts connected over the same tagged VLAN connection.

10Gbit NIC with VLAN interface set to 9000 MTU:
Code:
iperf3 -c 192.168.49.X
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.14 GBytes  9.79 Gbits/sec    0   1.26 MBytes
[  5]   1.00-2.00   sec  1.15 GBytes  9.89 Gbits/sec    0   1.33 MBytes
[  5]   2.00-3.00   sec  1.15 GBytes  9.89 Gbits/sec    0   1.33 MBytes
[  5]   3.00-4.00   sec  1.15 GBytes  9.85 Gbits/sec    0   1.36 MBytes
[  5]   4.00-5.00   sec  1.15 GBytes  9.89 Gbits/sec    0   1.36 MBytes
[  5]   5.00-6.00   sec  1.15 GBytes  9.88 Gbits/sec    0   1.36 MBytes
[  5]   6.00-7.00   sec  1.15 GBytes  9.89 Gbits/sec    0   1.36 MBytes
[  5]   7.00-8.00   sec  1.15 GBytes  9.89 Gbits/sec    0   1.36 MBytes
[  5]   8.00-9.00   sec  1.15 GBytes  9.89 Gbits/sec    0   1.36 MBytes
[  5]   9.00-10.00  sec  1.15 GBytes  9.89 Gbits/sec    0   1.36 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.5 GBytes  9.88 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  11.5 GBytes  9.87 Gbits/sec                  receiver

10Gbit NIC with VLAN interface set to 1500 MTU:
Code:
iperf3 -c 192.168.43.X
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   857 MBytes  7.19 Gbits/sec  778   1.29 MBytes
[  5]   1.00-2.00   sec   905 MBytes  7.59 Gbits/sec  774   1.22 MBytes
[  5]   2.00-3.00   sec   892 MBytes  7.49 Gbits/sec  774   1.13 MBytes
[  5]   3.00-4.00   sec   899 MBytes  7.54 Gbits/sec  778   1.07 MBytes
[  5]   4.00-5.00   sec   901 MBytes  7.56 Gbits/sec  784   1.00 MBytes
[  5]   5.00-6.00   sec   902 MBytes  7.57 Gbits/sec  791    892 KBytes
[  5]   6.00-7.00   sec   900 MBytes  7.55 Gbits/sec  780    870 KBytes
[  5]   7.00-8.00   sec   879 MBytes  7.37 Gbits/sec    0   1.31 MBytes
[  5]   8.00-9.00   sec   900 MBytes  7.55 Gbits/sec  1560    872 KBytes
[  5]   9.00-10.00  sec   909 MBytes  7.62 Gbits/sec    0   1.35 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  8.74 GBytes  7.50 Gbits/sec  7019             sender
[  5]   0.00-10.00  sec  8.73 GBytes  7.50 Gbits/sec                  receiver

So 9000 MTU uses the full 10Gbit while 1500 MTU is roughly 2.4 Gbit/s slower.
Does someone know why so many packets got lost in the second test?

I did that test again and now there are no lost packets:
Code:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   856 MBytes  7.18 Gbits/sec    0   1.05 MBytes
[  5]   1.00-2.00   sec   895 MBytes  7.51 Gbits/sec    0   1.05 MBytes
[  5]   2.00-3.00   sec   890 MBytes  7.47 Gbits/sec    0   1.10 MBytes
[  5]   3.00-4.00   sec   896 MBytes  7.51 Gbits/sec    0   1.10 MBytes
[  5]   4.00-5.00   sec   895 MBytes  7.51 Gbits/sec    0   1.10 MBytes
[  5]   5.00-6.00   sec   879 MBytes  7.37 Gbits/sec    0   1.10 MBytes
[  5]   6.00-7.00   sec   892 MBytes  7.49 Gbits/sec    0   1.10 MBytes
[  5]   7.00-8.00   sec   895 MBytes  7.51 Gbits/sec    0   1.10 MBytes
[  5]   8.00-9.00   sec   896 MBytes  7.52 Gbits/sec    0   1.10 MBytes
[  5]   9.00-10.00  sec   898 MBytes  7.53 Gbits/sec    0   1.10 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  8.68 GBytes  7.46 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  8.68 GBytes  7.46 Gbits/sec                  receiver

UDP performance is way lower.

10G UDP with 1500 MTU:
Code:
iperf3 -c 192.168.43.X -u -b 10G
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   208 MBytes  1.75 Gbits/sec  150654
[  5]   1.00-2.00   sec   242 MBytes  2.03 Gbits/sec  175327
[  5]   2.00-3.00   sec   244 MBytes  2.04 Gbits/sec  176407
[  5]   3.00-4.00   sec   248 MBytes  2.08 Gbits/sec  179714
[  5]   4.00-5.00   sec   246 MBytes  2.07 Gbits/sec  178355
[  5]   5.00-6.00   sec   248 MBytes  2.08 Gbits/sec  179808
[  5]   6.00-7.00   sec   248 MBytes  2.08 Gbits/sec  179421
[  5]   7.00-8.00   sec   251 MBytes  2.10 Gbits/sec  181611
[  5]   8.00-9.00   sec   244 MBytes  2.05 Gbits/sec  176668
[  5]   9.00-10.00  sec   251 MBytes  2.10 Gbits/sec  181669
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  2.37 GBytes  2.04 Gbits/sec  0.000 ms  0/1759634 (0%)  sender
[  5]   0.00-10.00  sec  2.37 GBytes  2.04 Gbits/sec  0.003 ms  649/1759634 (0.037%)  receiver

10G UDP with 9000 MTU:
Code:
iperf3 -c 192.168.49.X -u -b 10G
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   596 MBytes  5.00 Gbits/sec  69788
[  5]   1.00-2.00   sec   570 MBytes  4.78 Gbits/sec  66790
[  5]   2.00-3.00   sec   597 MBytes  5.01 Gbits/sec  69938
[  5]   3.00-4.00   sec   606 MBytes  5.09 Gbits/sec  71045
[  5]   4.00-5.00   sec   594 MBytes  4.98 Gbits/sec  69552
[  5]   5.00-6.00   sec   601 MBytes  5.04 Gbits/sec  70370
[  5]   6.00-7.00   sec   571 MBytes  4.79 Gbits/sec  66943
[  5]   7.00-8.00   sec   808 MBytes  6.78 Gbits/sec  94666
[  5]   8.00-9.00   sec   819 MBytes  6.87 Gbits/sec  95990
[  5]   9.00-10.00  sec   646 MBytes  5.42 Gbits/sec  75691
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  6.26 GBytes  5.37 Gbits/sec  0.000 ms  0/750773 (0%)  sender
[  5]   0.00-10.00  sec  6.26 GBytes  5.37 Gbits/sec  0.005 ms  33/750772 (0.0044%)  receiver

So with UDP, 9000 MTU is way faster.

Edit:
Maybe the remote FreeNAS host is the limiting factor. It's the same NIC, but if I run iperf3 with 1500 MTU one CPU core goes up to 100% and I get these messages: kernel: Limiting open port RST response from 943 to 200 packets/sec
If I run iperf3 with 9000 MTU no core goes higher than 40% usage.
 
Hello, good evening. Webmin was installed on the host and the user removed the MTU from the default configuration. In short, the network was lost. How can I revert this? Is there a way to restore this configuration in Proxmox?
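A sketch of the usual recovery path, assuming you can still reach the host over a local console: put the missing settings back into /etc/network/interfaces and reload the network configuration:
Code:
# fix the interfaces file from the local console (re-add or remove the mtu lines)
nano /etc/network/interfaces
# apply it without a reboot (ifupdown2, the default on current Proxmox VE)
ifreload -a
# or, with classic ifupdown:
systemctl restart networking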
 
