[SOLVED] lxc & ovs: weird network behavior - after container reboot I get "dropped over-mtu packet"

markusd · Renowned Member · Apr 20, 2015 · Dortmund
Hi,
My OVS-configured network, with two mtu-9000 VLANs for the Ceph network, works fine with KVM.

Now I have tested a standard Debian CT with local storage.
When I reboot the CT I get these errors in the logs:

Code:
vlan55: dropped over-mtu packet: 3014 > 1500
vlan56: dropped over-mtu packet: 2561 > 1500
(These are the Ceph VLANs with MTU 9000.)
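The log line itself is just the kernel refusing to forward a packet that is larger than the egress interface's MTU. A minimal sketch of that comparison (the function and its output format are only modeled on the log message above, not the actual kernel code):

```shell
# Illustrate the over-MTU check: a packet bigger than the interface MTU
# is dropped; otherwise it passes.
over_mtu() {
    pkt=$1; mtu=$2
    if [ "$pkt" -gt "$mtu" ]; then
        echo "dropped over-mtu packet: $pkt > $mtu"
    else
        echo "ok: $pkt <= $mtu"
    fi
}
over_mtu 3014 1500   # the vlan55 case: dropped at the default MTU of 1500
over_mtu 3014 9000   # with jumbo frames (MTU 9000) the same packet fits
```

So the question is why an interface that is configured for MTU 9000 is suddenly enforcing 1500 after a CT reboot.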

My /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

allow-vmbr0 eth0
# 1Gbps link to core switch
iface eth0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSPort
        ovs_options other_config:rstp-enable=true other_config:rstp-port-path-cost=100
        mtu 9000

allow-vmbr0 eth1
# 1Gbps link to secondary core switch
iface eth1 inet manual
        ovs_bridge vmbr0
        ovs_type OVSPort
        ovs_options other_config:rstp-enable=true other_config:rstp-port-path-cost=100
        mtu 9000

allow-vmbr0 eth4
# 10Gbps link to another proxmox/ceph node
iface eth4 inet manual
        ovs_bridge vmbr0
        ovs_type OVSPort
        ovs_options other_config:rstp-enable=true other_config:rstp-port-path-cost=2
        mtu 9000

allow-vmbr0 eth5
# 10Gbps link to another proxmox/ceph node
iface eth5 inet manual
        ovs_bridge vmbr0
        ovs_type OVSPort
        ovs_options other_config:rstp-enable=true other_config:rstp-port-path-cost=2
        mtu 9000

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eth0 eth1 eth4 eth5 vlan1 vlan20 vlan50 vlan55 vlan56 vlan60 vlan66 vlan67
        up ovs-vsctl set Bridge ${IFACE} rstp_enable=true other_config:rstp-priority=4096 other_config:rstp-forward-delay=4 other_config:rstp-max-age=6
        mtu 9000
        post-up sleep 10 # Wait for spanning-tree convergence

# Virtual interface to take advantage of originally untagged traffic
allow-vmbr0 vlan1
iface vlan1 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options vlan_mode=access
        ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
        address 192.168.1.15
        netmask 255.255.255.0
        gateway 192.168.1.253
        mtu 1500

#DMZ Webserver
allow-vmbr0 vlan20
iface vlan20 inet manual
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=20

# Proxmox cluster communication vlan
allow-vmbr0 vlan50
iface vlan50 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=50
        ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
        address 192.168.192.15
        netmask 255.255.255.0
        mtu 1500

# Ceph cluster communication vlan (jumbo frames)
allow-vmbr0 vlan55
iface vlan55 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=55
        ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
        address 192.168.190.15
        netmask 255.255.255.0
        mtu 9000

# Ceph public communication vlan (jumbo frames)
allow-vmbr0 vlan56
iface vlan56 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0 
        ovs_options tag=56
        ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
        address 192.168.0.15
        netmask 255.255.255.0
        mtu 9000

# monitoring communication vlan
allow-vmbr0 vlan60
iface vlan60 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=60
        ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
        address 10.0.0.15
        netmask 255.255.255.0
        mtu 1500

...
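A quick way to sanity-check a config like this after a reboot is to compare the MTU the kernel actually reports with what the file sets. A hedged sketch (interface names taken from the config above; guarded so it is harmless on a host where they do not exist):

```shell
# Print the kernel-reported MTU for the bridge and the jumbo-frame VLANs;
# interfaces that do not exist are reported as absent instead of erroring.
for ifc in vmbr0 vlan55 vlan56; do
    if ip -o link show "$ifc" >/dev/null 2>&1; then
        # `ip -o link` puts "mtu <value>" in fields 4 and 5
        ip -o link show "$ifc" | awk '{print $2, $4, $5}'
    else
        echo "$ifc: not present on this host"
    fi
done
```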

The .conf of one tested CT:
Code:
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: debianct01
memory: 512
net0: name=eth0,bridge=vmbr0,gw=192.168.1.253,hwaddr=BE:AE:D1:4A:2B:D1,ip=192.168.1.144/24,type=veth
ostype: debian
rootfs: local:114/vm-114-disk-1.raw,size=8G
swap: 512

Code:
pveversion -v
proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-3 (running version: 4.3-3/557191d3)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.4.8-1-pve: 4.4.8-53
pve-kernel-4.4.13-2-pve: 4.4.13-58
pve-kernel-4.4.15-1-pve: 4.4.15-60
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-91
pve-firmware: 1.1-9
libpve-common-perl: 4.0-75
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-66
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.2-2
pve-container: 1.0-78
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
zfsutils: 0.6.5.7-pve10~bpo80
openvswitch-switch: 2.5.0-1
ceph: 10.2.3-1~bpo80+1

Any idea where I have to look is greatly appreciated.

Markus
 
Good morning.
There is a parameter in /etc/default/lxc:
Code:
USE_LXC_BRIDGE="false"  # overridden in lxc-net

[ ! -f /etc/default/lxc-net ] || . /etc/default/lxc-net
Do I have to change something like this to use LXC with OVS?

Thanks

Markus
 
Hi,
I added "mtu=9000" to the CT conf files, like this:
Code:
net0: name=eth0,bridge=vmbr0,gw=192.168.1.253,hwaddr=BE:AE:D1:4A:2B:D1,ip=192.168.1.144/24,mtu=9000,type=veth
and it works as expected.
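For reference, the edit is just inserting `mtu=9000` into the existing `net0:` value; a sketch of that as plain string handling (pure shell, no Proxmox tools required):

```shell
# Take the original net0 value (without mtu) and insert mtu=9000 before
# type=veth, producing the line used in the .conf above.
NET0="name=eth0,bridge=vmbr0,gw=192.168.1.253,hwaddr=BE:AE:D1:4A:2B:D1,ip=192.168.1.144/24,type=veth"
NET0_FIXED=$(echo "$NET0" | sed 's/,type=veth/,mtu=9000,type=veth/')
echo "$NET0_FIXED"
```

On a Proxmox host the same change can presumably also be applied with `pct set <ctid> -net0 "$NET0_FIXED"` instead of editing the file by hand; check `man pct` on your version.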

Sorry for the noise.

Markus
 
Hi, Markus.

markusd said:
> I added "mtu=9000" to the CT conf files, like this:
> net0: name=eth0,bridge=vmbr0,gw=192.168.1.253,hwaddr=BE:AE:D1:4A:2B:D1,ip=192.168.1.144/24,mtu=9000,type=veth
> and it works as expected.

And what about the MTU on the container's eth0?
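One way to answer that from the host is to query the interface inside the container. A sketch assuming CT ID 114 (taken from the rootfs line above; adjust to your container), guarded so it is harmless on a machine without Proxmox:

```shell
# Show the MTU of eth0 as seen inside container 114; `pct exec` runs a
# command inside the CT. Falls back to a message where pct is missing.
if command -v pct >/dev/null 2>&1; then
    pct exec 114 -- ip -o link show eth0
else
    echo "pct not available on this host"
fi
```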
 
