Creation of veth interface drops MTU

eswood

Apr 12, 2011
Hi All,

Just upgraded to 3.1 and am ironing out the final wrinkle. We have an interface with an MTU of 9000 on our servers. It is the sole interface in a bridge, which happily picks up that MTU on boot. Everything works fine until an OpenVZ container with a veth interface on that bridge is started (that took some trial and error to isolate!). At that point the veth interface comes up with an MTU of 1500 and the bridge's MTU drops with it; the MTU on the physical card stays the same. Is there any way to avoid this and retain jumbo frames?
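For context on what seems to be happening: a Linux bridge takes the minimum MTU over its enslaved ports, so a freshly created veth at the default 1500 drags the whole bridge down. A toy shell sketch of that rule (the interface names and MTU values are just the ones from this setup, not anything the kernel exposes this way):

```shell
# Emulate the kernel's "bridge MTU = min over port MTUs" behaviour.
bridge_mtu() {
    printf '%s\n' "$@" | sort -n | head -n 1
}

# vmbr1 with only eth2 (MTU 9000) enslaved:
bridge_mtu 9000        # prints 9000 - jumbo frames work

# after the container adds veth104.1 (default MTU 1500) to vmbr1:
bridge_mtu 9000 1500   # prints 1500 - the bridge MTU drops
```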

As always, thanks in advance for any tips or possible avenues for investigation.

Regards,
Scott

That was the short question. Here's what I believe is all of the necessary supplementary material to troubleshoot:

Networking info
server:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback


iface eth0 inet manual


iface eth1 inet manual


iface eth2 inet manual


iface eth3 inet manual


auto vmbr0
iface vmbr0 inet static
    address 10.11.12.13
    netmask 255.255.255.192
    gateway 10.11.12.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0


auto vmbr1
iface vmbr1 inet static
    address 192.168.127.101
    netmask 255.255.255.0
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
    mtu 9000
    pre-up ifconfig eth2 mtu 9000

OpenVZ config file:
server:/var/log# cat /etc/pve/openvz/104.conf
ONBOOT="no"


PHYSPAGES="0:512M"
SWAPPAGES="0:512M"
KMEMSIZE="232M:256M"
DCACHESIZE="116M:128M"
LOCKEDPAGES="256M"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
OOMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"


# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="4G:4613734"
DISKINODES="800000:880000"
QUOTATIME="0"
QUOTAUGIDLIMIT="0"


# CPU fair scheduler parameter
CPUUNITS="1000"
CPUS="1"
HOSTNAME="testmtu.our.domain"
SEARCHDOMAIN="our.domain"
NAMESERVER="10.11.12.2 10.11.12.3"
IP_ADDRESS=""
VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/var/lib/vz/private/104"
OSTEMPLATE="centos-6-standard_6.3-1_amd64.tar.gz"
NETIF="ifname=eth0,bridge=vmbr0,mac=BA:DC:DB:B5:FC:57,host_ifname=veth104.0,host_mac=3A:34:B3:55:53:F1;ifname=eth1,bridge=vmbr1,mac=72:72:D3:66:64:C4,host_ifname=veth104.1,host_mac=6E:7D:D8:5D:A7:27"

ifconfig before starting the OpenVZ container:
server:~# ifconfig
eth0 Link encap:Ethernet HWaddr 00:11:00:11:00:11
inet6 addr: fe80::2e0:edff:fe1d:9932/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1229590 errors:0 dropped:0 overruns:0 frame:0
TX packets:1213481 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1445999450 (1.3 GiB) TX bytes:1448423402 (1.3 GiB)


eth2 Link encap:Ethernet HWaddr 00:22:00:22:00:22
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:60208 errors:0 dropped:0 overruns:0 frame:0
TX packets:8130 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6035592 (5.7 MiB) TX bytes:978544 (955.6 KiB)


lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:6628 errors:0 dropped:0 overruns:0 frame:0
TX packets:6628 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3285208 (3.1 MiB) TX bytes:3285208 (3.1 MiB)


venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet6 addr: fe80::1/128 Scope:Link
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)


vmbr0 Link encap:Ethernet HWaddr 00:11:00:11:00:11
inet addr:10.11.12.13 Bcast:10.11.12.63 Mask:255.255.255.192
inet6 addr: fe80::2e0:edff:fe1d:9932/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:352087 errors:0 dropped:0 overruns:0 frame:0
TX packets:389370 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1383153437 (1.2 GiB) TX bytes:1394041018 (1.2 GiB)


vmbr1 Link encap:Ethernet HWaddr 00:22:00:22:00:22
inet addr:192.168.127.101 Bcast:192.168.127.255 Mask:255.255.255.0
inet6 addr: fe80::2e0:edff:fe1d:9933/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:9688 errors:0 dropped:0 overruns:0 frame:0
TX packets:8123 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1421832 (1.3 MiB) TX bytes:978166 (955.2 KiB)

ifconfig after starting the OpenVZ container:
server:~# ifconfig
eth0 Link encap:Ethernet HWaddr 00:11:00:11:00:11
inet6 addr: fe80::2e0:edff:fe1d:9932/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1241389 errors:0 dropped:0 overruns:0 frame:0
TX packets:1223013 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1448419481 (1.3 GiB) TX bytes:1450271764 (1.3 GiB)


eth2 Link encap:Ethernet HWaddr 00:22:00:22:00:22
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:66617 errors:0 dropped:0 overruns:0 frame:0
TX packets:9181 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6685996 (6.3 MiB) TX bytes:1119726 (1.0 MiB)


lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:7496 errors:0 dropped:0 overruns:0 frame:0
TX packets:7496 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3709875 (3.5 MiB) TX bytes:3709875 (3.5 MiB)


venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet6 addr: fe80::1/128 Scope:Link
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)


veth104.0 Link encap:Ethernet HWaddr 3a:34:b3:55:53:f1
inet6 addr: fe80::3834:b3ff:fe55:53f1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:13 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)


veth104.1 Link encap:Ethernet HWaddr 6e:7d:d8:5d:a7:27
inet6 addr: fe80::6c7d:d8ff:fe5d:a727/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:21 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)


vmbr0 Link encap:Ethernet HWaddr 00:11:00:11:00:11
inet addr:10.11.12.13 Bcast:10.11.12.63 Mask:255.255.255.192
inet6 addr: fe80::2e0:edff:fe1d:9932/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:363855 errors:0 dropped:0 overruns:0 frame:0
TX packets:398888 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1385406862 (1.2 GiB) TX bytes:1395888456 (1.3 GiB)


vmbr1 Link encap:Ethernet HWaddr 00:22:00:22:00:22
inet addr:192.168.127.101 Bcast:192.168.127.255 Mask:255.255.255.0
inet6 addr: fe80::2e0:edff:fe1d:9933/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:10931 errors:0 dropped:0 overruns:0 frame:0
TX packets:9174 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1594990 (1.5 MiB) TX bytes:1119348 (1.0 MiB)
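In the meantime, one workaround sketch (untested here; needs root, and the interface names are veth104.1/vmbr1/eth1 from the dumps above) is to push the MTUs back up after the container starts. The veth only exists once the container is running, so this has to follow vzctl start, and the veth must be raised before the bridge so the bridge's minimum rises with it:

```shell
#!/bin/sh
# Hypothetical post-start fix-up for CT 104.
vzctl start 104
ip link set dev veth104.1 mtu 9000             # host side of the CT's eth1
ip link set dev vmbr1 mtu 9000                 # bridge accepts 9000 again
vzctl exec 104 ip link set dev eth1 mtu 9000   # and inside the container
```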
 
Thanks! We'll watch the changelogs for when it makes it to release.

Regards,
Scott
 
Even better... got the fix from git and applied it. All is well. Thank you very much!

Scott
 
