MTU Settings for NAS storage

Hi @Eric Thornton , welcome to the forum.

MTU size is not tied to network speed. You can use non-standard MTU values on 1 Gbit just as well as on 25 Gbit or higher links.

The key point is consistency: all devices participating in the same network path must use the same MTU. This includes all servers, switch ports, NAS systems, and any other devices operating on that network segment.
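As a quick sanity check on each host (a sketch, assuming a Linux box with iproute2's `ip`), you can list every interface together with its MTU in one pass and spot mismatches at a glance:

```shell
# Print "interface: MTU" for every link. iproute2's -o flag prints one
# interface per line, and awk picks out the field after the "mtu" keyword.
ip -o link show | awk '{for (i = 1; i <= NF; i++) if ($i == "mtu") print $2, $(i+1)}'
```

Run it on each server, NAS, and client, and every interface on the jumbo-frame path should report the same value.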



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi Eric,

just reiterating what @bbgeek17 said.

An MTU of 9000 for your NAS traffic is beneficial if, and only if, you set your NAS storage interfaces to 9000, the switch ports they are connected to to 9000, and all the clients to an MTU of 9000 on the interfaces they consume the storage over as well.

You could configure a dedicated VLAN for the NAS traffic and set all the interfaces connected to it to an MTU of 9000. The underlying physical NIC would then need its MTU at 9000, but your normal management or internet VLAN could stay at 1500:

Below is an example /etc/network/interfaces. The single physical interface is eno1; both it and the switch port it is plugged into have an MTU of 9000.

You can then see the different VLANs set with different MTU values depending on their purpose. They all use the underlying physical eno1 interface. This is not ideal :-) in practice you would use separate NICs, e.g. eno1, eno2, etc., but it keeps the example simple.
Bash:
auto lo
iface lo inet loopback

iface eno1 inet manual
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address 192.168.132.9/24
        gateway 192.168.132.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

auto vmbr21
iface vmbr21 inet static
        address 10.10.21.9/24
        bridge-ports eno1.21
        bridge-stp off
        bridge-fd 0
        mtu 1600
#VXLAN

auto vmbr20
iface vmbr20 inet static
        address 10.10.20.9/24
        bridge-ports eno1.20
        bridge-stp off
        bridge-fd 0
        mtu 9000
#Storage

The following will show the MTU of the interface you are testing on:
Bash:
 ip link show vmbr20
7: vmbr20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c0:3f:d5:67:a6:1a brd ff:ff:ff:ff:ff:ff

You can then test your configuration with the following ping command (substitute your own IPs) :)
Bash:
ping -M do -s 8972 10.10.20.7
PING 10.10.20.7 (10.10.20.7) 8972(9000) bytes of data.
8980 bytes from 10.10.20.7: icmp_seq=1 ttl=64 time=0.548 ms
8980 bytes from 10.10.20.7: icmp_seq=2 ttl=64 time=0.550 ms
^C
--- 10.10.20.7 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1050ms
rtt min/avg/max/mdev = 0.548/0.549/0.550/0.001 ms
root@prox3:~# ping -M do -s 8973 10.10.20.7
PING 10.10.20.7 (10.10.20.7) 8973(9001) bytes of data.
ping: sendmsg: Message too long


The -M do flag sets the "don't fragment" bit, so the packet will fail rather than be silently fragmented. The -s 8972 flag sets the payload size: you use 8972 rather than 9000 because ping adds 28 bytes of overhead (a 20-byte IP header plus an 8-byte ICMP header), bringing the total packet size to 9000 bytes. Note the (9000) in brackets in the output above.

If jumbo frames are working correctly you'll see normal replies. If not, you'll get the message "Message too long".
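The payload arithmetic generalises to any MTU; a minimal sketch:

```shell
# Largest ICMP echo payload that fits in a given MTU:
# MTU minus the 20-byte IP header minus the 8-byte ICMP header.
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"    # prints 8972, the -s value used above
```

For a standard 1500-byte MTU the same sum gives 1472, which is handy when testing the non-jumbo VLANs too.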
 