Hi,
Right now I'm rebuilding my network because I switched the servers from Gbit to 10G, but I'm not sure how to optimize the MTU.
1.) Is it useful to switch from an MTU of 1500 to 9000-byte jumbo frames? I've heard that this reduces the number of packets and therefore increases throughput, but that it can also hurt latency and that larger frames are more likely to be corrupted.
My new switch and the 10G NICs both support jumbo frames. Most of the data on my LAN is transferred between the Proxmox hypervisor (now with 10G over a tagged VLAN), the FreeNAS server (now with 10G over a tagged VLAN), my main PC (now with 10G over an untagged VLAN) and the second FreeNAS server for backups (no 10G, because it is only connected to the LAN over wifi).
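To make the question more concrete: once jumbo frames are enabled end to end, I would verify the path with something like this (the NAS address 10.0.43.10 is just a placeholder for my setup):

# 8972 bytes ICMP payload + 8 bytes ICMP header + 20 bytes IPv4 header
# = a 9000-byte IP packet; adjust the size to whatever MTU ends up configured.
ping -M do -s 8972 10.0.43.10

# Baseline with a normal 1500-byte packet (1472 + 8 + 20 = 1500).
ping -M do -s 1472 10.0.43.10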
The network of the hypervisor looks like this:
The NICs ens5 and eno1 are both connected to my switch. The switch ports are set to allow tagged VLANs only.
VLAN 42 is my DMZ, VLAN 43 is my LAN, and VLAN 45 is a VLAN I use so that VMs in the DMZ can access the NAS directly. I don't want all hosts in the DMZ to be able to reach the NAS, and routing between the VLANs wouldn't give me full 10G, so I thought this additional VLAN would be a good idea.
2.) If I give the ens5 and eno1 NICs on the hypervisor an MTU of 8900, so there is a little headroom for the overhead of the different protocols and the 9000 MTU of the switch the server is attached to is never exceeded, what should the MTUs of the bond, the VLAN interfaces and the bridges look like? I would think bond0 should also be 8900. But what about the VLAN interfaces like bond0.42 and so on? They tag the traffic coming from the bridges before it reaches the bond. Do these VLAN interfaces need an MTU of 8896 or 8900? I would think the bridges should be 8896, because they are not VLAN aware and the tagging on the VLAN interfaces adds 4 bytes to each frame, so the 8900 MTU of the bond and the NICs won't be exceeded.
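To make the layering concrete, this is what I have in mind, written as ip commands (on Proxmox I would of course persist it in /etc/network/interfaces; the bridge name vmbr43 is just a placeholder, and the 8900/8896 values are exactly the part I'm unsure about):

# Physical NICs and the bond get the 8900 MTU with headroom for overhead.
ip link set dev ens5 mtu 8900
ip link set dev eno1 mtu 8900
ip link set dev bond0 mtu 8900

# VLAN interface on top of the bond: 8900 or 8896? That is my question.
ip link set dev bond0.43 mtu 8900

# The bridge is not VLAN aware, so my guess is 4 bytes below the bond/NIC MTU.
ip link set dev vmbr43 mtu 8896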
3.) What about the MTU of the guests? I think a jumbo frame MTU only makes sense for interfaces that use services provided by my NAS, like the NIC ens19 on the left VM, which is used for SMB only. Jumbo frames for ens18 on the left VM wouldn't make sense, because my ISP's router and DSL line only allow packets somewhere between 1400 and 1500 bytes. So ens18 should use an MTU somewhere between 1400 and 1500 bytes?
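To find the exact value for ens18, I would simply let the guest discover the path MTU towards the internet, something like this (the target address is only an example):

tracepath -n 9.9.9.9
# tracepath reports the discovered path MTU (e.g. "pmtu 1492" on PPPoE),
# and that is the value I would then set on ens18.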
4.) But what about the VM on the right? Its NIC ens18 is used for both SMB and local services. Jumbo frames would be fine for SMB but not for my local services, because most hosts in my home network only allow MTUs up to 1500. I would think increasing the MTU of that NIC to 8896 wouldn't be a good idea, because most hosts can't accept it. Would it be a good idea to add a second virtual NIC to that VM, attached to a new VLAN connecting the hypervisor and the NAS, so I could use one virtual NIC for SMB only and one for local services, like I do with the VM on the left side?
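If a second virtual NIC is the way to go, I imagine it would look something like this on the hypervisor (the VM ID 101 and the bridge vmbr46 for the new VLAN are placeholders, and I'm assuming my Proxmox version already supports the mtu property on virtio NICs):

# Add a second virtio NIC for SMB only, on a bridge for the new storage VLAN,
# with a jumbo MTU; net0 stays at 1500 for the local services.
qm set 101 --net1 virtio,bridge=vmbr46,mtu=8896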