Proxmox 2.1 and Jumbo frames.

kwolf72

New Member
Aug 23, 2012
Hello,

We're having a problem with Proxmox 2.1 and jumbo frames. Our network uses an MTU of 9000. We're able to get PM 2.1 configured for this MTU, and the host is fine. After creating a VM, however, the MTU on the bridge interface drops back down to 1500, and the guest is no longer able to communicate reliably.

Here is our interfaces file:

Code:
auto lo
iface lo inet loopback


iface eth0 inet manual
    mtu 9000


iface eth1 inet manual
    mtu 9000


auto bond0
iface bond0 inet manual
    slaves eth0 eth1


auto vmbr0
iface vmbr0 inet static
    address 192.168.45.104
    netmask 255.255.255.0
    gateway 192.168.45.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    #mtu 9000
    pre-up ifconfig bond0 mtu 9000
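
Before any VM is started, the host-side MTUs can be double-checked with standard tools (just a minimal sanity check, nothing Proxmox-specific):

Code:
# bond0 and vmbr0 should both report mtu 9000 at this point
ip link show bond0 | grep -o 'mtu [0-9]*'
ip link show vmbr0 | grep -o 'mtu [0-9]*'
# list the ports currently attached to the bridge
brctl show vmbr0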

The guest we're trying to install, though I can't see how it would matter, is CentOS 5.8, 64-bit. These are KVM VMs, not OpenVZ.

After starting a VM, this is what our network config looks like:

Code:
 ifconfig
bond0     Link encap:Ethernet  HWaddr d4:be:d9:b3:92:62  
          inet6 addr: fe80::d6be:d9ff:feb3:9262/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:9000  Metric:1
          RX packets:3912101 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1168381 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:816034347 (778.2 MiB)  TX bytes:237400007 (226.4 MiB)


eth0      Link encap:Ethernet  HWaddr d4:be:d9:b3:92:62  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:1952418 errors:0 dropped:0 overruns:0 frame:0
          TX packets:585542 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:407415542 (388.5 MiB)  TX bytes:118722536 (113.2 MiB)
          Interrupt:36 Memory:d6000000-d6012800 


eth1      Link encap:Ethernet  HWaddr d4:be:d9:b3:92:62  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:1959683 errors:0 dropped:0 overruns:0 frame:0
          TX packets:582839 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:408618805 (389.6 MiB)  TX bytes:118677471 (113.1 MiB)
          Interrupt:48 Memory:d8000000-d8012800 


lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:294585 errors:0 dropped:0 overruns:0 frame:0
          TX packets:294585 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:32061297 (30.5 MiB)  TX bytes:32061297 (30.5 MiB)


tap100i0  Link encap:Ethernet  HWaddr 22:6d:2d:89:e5:4c  
          inet6 addr: fe80::206d:2dff:fe89:e54c/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:2279 errors:0 dropped:0 overruns:0 frame:0
          TX packets:805343 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:175269 (171.1 KiB)  TX bytes:98670226 (94.0 MiB)


venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


vmbr0     Link encap:Ethernet  HWaddr d4:be:d9:b3:92:62  
          inet addr:192.168.45.74  Bcast:192.168.45.255  Mask:255.255.255.0
          inet6 addr: fe80::d6be:d9ff:feb3:9262/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2064884 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1165110 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:460285304 (438.9 MiB)  TX bytes:232322492 (221.5 MiB)

Before the VM is created / started, the MTU on vmbr0 is 9000.

The problem appears to be the same as http://forum.proxmox.com/threads/7877-Jumbo-Frames-on-vmbr0, but that thread is a little dated. I'm hoping there's a solution. :)

Thanks,
Kevin.
 
If I remember correctly, the bridge takes the lowest MTU of the interfaces connected to it (including the VM tap interfaces).
That would explain why its MTU changes.

You can try running
# brctl show vmbr0
to see the details.
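
A quick way to see each attached port's MTU (a small sketch using the standard Linux sysfs layout, nothing Proxmox-specific):

Code:
# the bridge's own MTU
cat /sys/class/net/vmbr0/mtu
# the MTU of every port currently attached to vmbr0
for port in /sys/class/net/vmbr0/brif/*; do
    p=$(basename "$port")
    echo "$p: $(cat /sys/class/net/$p/mtu)"
done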

By the way, why do you want to use MTU 9000? For SAN storage?
If so, using a bridge is a bad idea; it's better to use the eth or bond interfaces directly.
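
For example, a dedicated (non-bridged) storage interface could go straight into /etc/network/interfaces; eth2 and the address below are only placeholders:

Code:
# storage NIC used by the host directly, no bridge in between
auto eth2
iface eth2 inet static
    address 10.10.10.104
    netmask 255.255.255.0
    mtu 9000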
 
We use jumbo frames for better Oracle performance. Our database traffic isn't isolated enough, so we're forced to use jumbo frames on the subnet the PM hosts are on.

From what I've seen, this is a pretty easy issue to reproduce in PM 2.1, and I haven't been successful in getting it to work yet.

1) Have a network set up with an MTU of 9000.
2) Install PM 2.1 and configure the interfaces to use the 9K MTU.
3) Create a VM on the new PM 2.1 host.

What I'm seeing is that the VM is unable to communicate with any packet size over 1500.
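
An easy way to confirm that from inside the guest (assuming a Linux guest; 8972 bytes of ICMP payload plus 28 bytes of IP/ICMP headers = 9000) is a don't-fragment ping against the gateway:

Code:
# succeeds on a clean 9000-MTU path, fails when something along the way is capped at 1500
ping -M do -s 8972 192.168.45.1
# a 1472-byte payload (1500 on the wire) should work either way
ping -M do -s 1472 192.168.45.1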

Also of note, this was working fine for us in PM 1.9.
 
spirit, yes, I have tried that, but it didn't make any difference. Also the pre-up does seem to configure the bond at 9000.

snowman66, thanks, I'll take a look at that this afternoon.
 
I need to do more testing on more boxes, but I think I did get this working. Here is my pve-bridge file:

Code:
#!/usr/bin/perl -w

use strict;
use PVE::QemuServer;
use PVE::Tools qw(run_command);
use PVE::Network;

my $iface = shift;

die "no interface specified\n" if !$iface;

die "got strange interface name '$iface'\n"
    if $iface !~ m/^tap(\d+)i(\d+)$/;

my $vmid = $1;
my $netid = "net$2";

my $conf = PVE::QemuServer::load_config ($vmid);

die "unable to get network config '$netid'\n"
    if !$conf->{$netid};

my $net = PVE::QemuServer::parse_net($conf->{$netid});
die "unable to parse network config '$netid'\n" if !$net;

my $bridge = $net->{bridge};
die "unable to get bridge setting\n" if !$bridge;

# read the bridge's current configuration so we can pick up its MTU
my $bridgeCfg = `/sbin/ifconfig $bridge`;
die "Unable to get config for bridge: $bridge." if !$bridgeCfg;

# extract the MTU value from the ifconfig output; only trust $1 if the match succeeded
my $bridgeMTU;
$bridgeMTU = $1 if $bridgeCfg =~ /MTU:(\d+)/;
die "Unable to get bridge MTU." if !$bridgeMTU;

# bring the tap interface up with the bridge's MTU instead of the default 1500
system ("/sbin/ifconfig $iface 0.0.0.0 promisc up mtu $bridgeMTU") == 0 ||
    die "interface activation failed\n";

if ($net->{rate}) {

    my $debug = 0;
    my $rate = int($net->{rate}*1024*1024);
    my $burst = 1024*1024;

    PVE::Network::setup_tc_rate_limit($iface, $rate, $burst, $debug);
}

my $newbridge = PVE::Network::activate_bridge_vlan($bridge, $net->{tag});
PVE::Network::copy_bridge_config($bridge, $newbridge) if $bridge ne $newbridge;

system ("/usr/sbin/brctl addif $newbridge $iface") == 0 ||
    die "can't add interface to bridge\n";

exit 0;

I've only tested this on one node with one VM, but so far it has always started everything with the correct MTU and I'm able to pass jumbo packets. If it breaks down as I create more VMs, I'll let you know.

Thanks everyone for the help.
 
In my case, the MTU value equals the VM ID.

Code:
root@pve1:~# netstat -i          # to view MTU
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0       9000 0     41050      0      0 0         44041      0      0      0 BMRU
lo        16436 0     16921      0      0 0         16921      0      0      0 LRU
tap100i0    100 0         0      0      0 0             0      0      0      0 BMPRU
venet0     1500 0         0      0      0 0             0      0      3      0 BOPRU
vmbr0       100 0     40623      0      0 0         24719      0      0      0 BMRU

Code:
root@pve1:~# ifconfig tap100i0 mtu 9000
root@pve1:~# netstat -i          # to view MTU
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0       9000 0     45525      0      0 0         49613      0      0      0 BMRU
lo        16436 0     18945      0      0 0         18945      0      0      0 LRU
tap100i0   9000 0        41      0      0 0            70      0      0      0 BMPRU
venet0     1500 0         0      0      0 0             0      0      3      0 BOPRU
vmbr0      9000 0     44977      0      0 0         26882      0      0      0 BMRU
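
As a stop-gap until the pve-bridge change is in place, the same manual fix could be looped over every tap interface on the host (a rough sketch, assuming all taps should match vmbr0's MTU of 9000):

Code:
# bump every tapXiY interface back to the jumbo MTU
for tap in /sys/class/net/tap*i*; do
    [ -e "$tap" ] || continue       # skip if no tap interfaces exist
    ifconfig "$(basename "$tap")" mtu 9000
done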
 
It seems the patch proposed by kwolf72 works for me.
Have you tested it more deeply? Is there a chance it will be included in the next releases?
 