OVS MTU help

Blue)(Fusion

New Member
Jun 20, 2018
Hello all, first time poster here.


I just got Gluster going on my network and am setting up a VLAN specific to Gluster storage traffic. Gluster is not hosting VM images, but VMs are accessing the Gluster data directly via the GlusterFS FUSE driver.


With that said, I have 3x bonded 10Gbit NICs, OVS, and VLANs, and after reading the documentation on OVS, I'm confused about what exactly I need to do to change the MTU on only a specific VLAN - if that is at all possible with OVS.

The VLAN that needs MTU 9000 is VLAN 4 on the bond.

Below is my /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

iface enp6s0 inet manual

auto eno1
iface eno1 inet static
        address  10.1.1.10
        netmask  255.255.255.0
        gateway  10.1.1.1

iface eno3 inet manual

iface eno4 inet manual

iface enp5s0 inet manual

iface eno2 inet manual

iface enp6s0d1 inet manual

allow-vmbr0 bond0
iface bond0 inet manual
        ovs_bonds enp5s0 enp6s0 enp6s0d1
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_options lacp=active bond_mode=balance-tcp

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0
 
You have to set jumbo frames on the underlying bond0. Due to some deficiencies of the Debian network setup, you have to do it with pre-up commands.

In the iface bond0 stanza:
pre-up ( ip link set mtu 9000 ... && ip link set mtu 9000 ............ )
mtu 9000

Then, of course, you also have to set the MTU inside the VMs accessing VLAN 4.
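
A minimal sketch of what that stanza could look like, assuming the MTU has to be raised on each physical bond member (enp5s0, enp6s0, enp6s0d1 from the config above) before the bond itself - adjust the interface list to whatever your bond actually uses:

Code:
allow-vmbr0 bond0
iface bond0 inet manual
        ovs_bonds enp5s0 enp6s0 enp6s0d1
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_options lacp=active bond_mode=balance-tcp
        # raise the MTU on every physical member first, then on the bond itself
        pre-up ( ip link set dev enp5s0 mtu 9000 && ip link set dev enp6s0 mtu 9000 && ip link set dev enp6s0d1 mtu 9000 )
        mtu 9000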
 

Proxmox has had a script to set up the MTU on manual interfaces for a long time now, and Debian fixed this in Stretch.

So you can simply set "mtu xxxx" on the manual interface.
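
As a sketch of that simpler approach, using the bond and bridge from the config above and assuming a Proxmox/Debian version where mtu is honoured on manual interfaces:

Code:
allow-vmbr0 bond0
iface bond0 inet manual
        ovs_bonds enp5s0 enp6s0 enp6s0d1
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_options lacp=active bond_mode=balance-tcp
        mtu 9000

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0
        mtu 9000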
 
Does the MTU also have to be defined for a Linux bridge that does not reference any NIC (IP forwarding only) and will be used for VMs, without setting it "VLAN aware"?

Code:
auto eno1.4000
iface eno1.4000 inet static
      vlan-raw-device eno1
      mtu 1400

Then create a Linux bridge and specify this VLAN interface as the uplink. Correct?

Code:
auto vmbr4000
iface vmbr4000 inet static
       address 192.168.100.1
       netmask 255.255.255.0
       bridge_ports eno1.4000
       bridge_stp off
       bridge_fd 0
       mtu 1400
#PVE-LAN1

Then connect a VM NIC to this bridge and set the MTU to 1400 in the VM's network config.

Code:
auto eno1
iface eno1 inet static
       address 192.168.100.101
       netmask 255.255.255.0
       mtu 1400
#PVE-LAN1

This way, direct communication between the host and VMs should also be possible, right?
This might also be usable to split a PVE cluster across different DCs and connect them via a vSwitch VLAN network.
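
If the MTU of the VM's virtual NIC should also be pinned from the Proxmox side, newer PVE versions accept an mtu= option on virtio net devices; a hypothetical example (VM ID 100 and net0 are assumptions, not taken from the configs above):

Code:
# set MTU 1400 on the VM's virtio NIC attached to vmbr4000 (hypothetical VM ID 100)
qm set 100 --net0 virtio,bridge=vmbr4000,mtu=1400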
 
