OVS Bridge -> OVS Bond -> 2x Ethernet set MTU

liszca

Well-Known Member
May 8, 2020
I am not sure how to calculate the MTU for each of the devices.

Let's assume I configured my network switch correctly (I have doubts, but let's focus on the OVS side).

What I did: every MTU involved in the Ceph backend is set to 3000.

What I want to achieve is to reduce the CPU load, even though the bond is only 2x 1 Gb.

Right now Ceph isn't complaining, but I'm still not certain it is set up correctly.

When choosing the bond mode, is "LACP (balance-TCP)" the same as "transmit-hash-policy=layer-3-and-4", or should I choose "transmit-hash-policy=layer-2-and-3"?

Code:
auto lo
iface lo inet loopback

iface enxf4b52021da43 inet manual

auto enx002655d16468
iface enx002655d16468 inet manual
    mtu 3000

auto enx002655d16469
iface enx002655d16469 inet manual
    mtu 3000

auto bond0
iface bond0 inet manual
    ovs_bonds enx002655d16468 enx002655d16469
    ovs_type OVSBond
    ovs_bridge vmbr1
    ovs_mtu 3000
    ovs_options lacp=active bond_mode=balance-tcp

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.10/24
    gateway 192.168.0.1
    bridge-ports enxf4b52021da43
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 10.10.0.10/24
    ovs_type OVSBridge
    ovs_ports bond0
    ovs_mtu 3000
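To verify that the MTU actually landed on each device, you can read it back from sysfs. A minimal sketch, assuming the interface names from the config above (they are specific to this host; adjust them for yours):

```shell
#!/bin/sh
# Print the effective MTU of each device in the Ceph path.
# Interface names are taken from the config above; adjust for your host.
for dev in enx002655d16468 enx002655d16469 bond0 vmbr1; do
    if [ -r "/sys/class/net/$dev/mtu" ]; then
        printf '%s mtu %s\n' "$dev" "$(cat "/sys/class/net/$dev/mtu")"
    else
        printf '%s not present on this host\n' "$dev"
    fi
done
```

For the bond itself, `ovs-appctl bond/show bond0` additionally shows the negotiated LACP state and the hash type actually in use.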
 
For a 2x1G bond you don't need to change MTU at all and the possible CPU benefits will be negligible.

You can check whether the MTU works with ping -M do -s 2972, which should succeed between the Ceph IPs of your hosts. Run it from every host, since the MTU is applied on the sending side.
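The 2972 is not arbitrary: it is the MTU minus the IPv4 and ICMP headers, so the ping exactly fills a 3000-byte frame without fragmenting. A quick sketch of the arithmetic (assuming IPv4 without header options):

```shell
# Largest ICMP payload that fits in a 3000-byte MTU without fragmenting:
MTU=3000
IP_HDR=20    # IPv4 header, no options
ICMP_HDR=8   # ICMP echo header
echo $((MTU - IP_HDR - ICMP_HDR))   # prints 2972
```

For the default MTU of 1500 the same formula gives 1472, which is the usual payload size for this kind of test.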
 
I didn't know MTU can be checked with a simple Ping command :)