Jumbo Frames on vmbr0?

wetwilly

Apr 4, 2010
Hello.

I've been trying to set up jumbo frames properly, but I'm running into some odd issues. This is how my interfaces look:

Code:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad

auto vlan10
iface vlan10 inet manual
        vlan_raw_device bond0
        pre-up ifconfig bond0 mtu 9000

auto vlan2
iface vlan2 inet manual
        vlan_raw_device bond0

auto vmbr0
iface vmbr0 inet static
        address  10.0.1.3
        netmask  255.255.255.0
        gateway  10.0.1.1
        bridge_ports vlan10
        bridge_stp off
        bridge_fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge_ports vlan2
        bridge_stp off
        bridge_fd 0


When the system boots, the bonded interfaces and VLANs get the correct MTU of 9000.

ifconfig bond0
bond0 Link encap:Ethernet HWaddr 90:e2:ba:00:c0:68
inet6 addr: fe80::92e2:baff:fe00:c068/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:9000 Metric:1
RX packets:49551029 errors:0 dropped:0 overruns:0 frame:0
TX packets:25607530 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:72285652635 (67.3 GiB) TX bytes:3394780364 (3.1 GiB)

ifconfig vlan10
vlan10 Link encap:Ethernet HWaddr 90:e2:ba:00:c0:68
inet6 addr: fe80::92e2:baff:fe00:c068/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:9000 Metric:1
RX packets:52921984 errors:0 dropped:0 overruns:0 frame:0
TX packets:26563297 errors:0 dropped:7 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:78008506003 (72.6 GiB) TX bytes:1809951476 (1.6 GiB)



But vmbr0 does not. I cannot set jumbo frames manually with "ifconfig vmbr0 mtu 9000", but I can set an MTU of 1500 or lower with the same command.

root@proxmox:~# ifconfig vmbr0 mtu 9000
SIOCSIFMTU: Invalid argument

but this works:
root@proxmox:~# ifconfig vmbr0 mtu 500

Is it not allowed to set an MTU > 1500 on vmbr0? Or am I missing something obvious in my config?
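For what it's worth, on Linux a bridge's MTU is limited by the smallest MTU among its attached ports, so the "SIOCSIFMTU: Invalid argument" error usually means at least one port (for instance a freshly attached tap device) is still at 1500. A quick way to check, using the interface names from the config above (my own sketch, not from the original post):

```shell
# Show the bridge's own MTU, then the MTU of every attached port.
# A bridge cannot be raised above the smallest port MTU.
cat /sys/class/net/vmbr0/mtu
for p in /sys/class/net/vmbr0/brif/*; do
    echo "$(basename "$p"): $(cat "$p/../mtu")"
done
```

If any port reports 1500 here, the bridge will refuse an MTU of 9000 until that port is raised first.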
 
Thanks for your reply, kind Spirit!

You are correct that the MTU of 9000 is intended for SAN storage.

I currently have two NICs in the Proxmox server configured as a port channel. I had planned to run both VM and SAN traffic over the port channel (on different VLANs), for both redundancy and bursting above 1 Gbit.

In my situation, would it be better to disband the port channel and use a "plain" configuration, e.g. one NIC connected to the SAN with MTU 9000 and the other NIC for VMs/management?
Or is there another solution that I haven't thought of?
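One option that keeps the bond would be to set the MTU explicitly at every layer of the chain, so nothing falls back to 1500. A minimal sketch using the interface names from the config above (the MTU commands are my addition and untested here):

```shell
# Force a 9000 MTU up the whole chain: physical slaves first,
# then the bond, the tagged VLAN interface, and finally the bridge.
for dev in eth1 eth2 bond0 vlan10 vmbr0; do
    ip link set dev "$dev" mtu 9000
done
```

The order matters: each layer's MTU is capped by the device beneath it, so the slaves must be raised before the bond, and the VLAN and bridge last.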
 
Hi, I personally never use the same NICs for SAN and VM traffic, but maybe this can work for you:

In this example there are 2 VLANs (30 & 40):
- 1 bridge vmbr30 for VMs on VLAN 30, with bonded interfaces (bond30)
- 1 bond40 with eth0.40 and eth1.40

Code:
auto eth0.30
iface eth0.30 inet manual

auto eth1.30
iface eth1.30 inet manual

auto bond30
iface bond30 inet manual
        slaves eth0.30 eth1.30
        bond_miimon 100
        bond_mode active-backup
        pre-up ifup eth0.30 eth1.30
        post-down ifdown eth0.30 eth1.30

auto vmbr30
iface vmbr30 inet manual
        bridge_ports bond30
        bridge_stp off
        bridge_fd 0

auto eth0.40
iface eth0.40 inet manual

auto eth1.40
iface eth1.40 inet manual

auto bond40
iface bond40 inet static
        address  X.X.X.X
        netmask  255.255.255.0
        mtu 9000
        slaves eth0.40 eth1.40
        bond_miimon 100
        bond_mode active-backup
        pre-up ifup eth0.40 eth1.40
        post-down ifdown eth0.40 eth1.40

I'm not sure it will work, because I use multipath without VLANs for my iSCSI SAN, so I don't know whether an MTU of 9000 works on bonded or tagged interfaces. (But it's better than putting it on a bridge.)
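On that last point: a VLAN interface's MTU cannot exceed its parent device's, and the bonding driver propagates the bond's MTU down to its slaves. So for the bond40 example above to carry 9000, the physical NICs need to be raised first. A sketch (my own, untested):

```shell
# A VLAN device (eth0.40) cannot exceed its parent's (eth0) MTU,
# so raise the physical NICs first; setting the bond's MTU then
# propagates to its slaves eth0.40 and eth1.40.
ip link set dev eth0 mtu 9000
ip link set dev eth1 mtu 9000
ip link set dev bond40 mtu 9000
```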
 
I have found the apparent root of this problem.
Specifically, the issue seems to stem from KVM itself pushing the MTU down when it creates its tap interfaces for the bridge. It creates the tapVMIDi# interfaces with an MTU of 1500, and the vmbrX MTU drops with them, so the bridge no longer stays at the higher jumbo MTU.

I was racking my brain over this for hours until I finally saw it.

And the underlying problem is actually in Proxmox itself, in this script: /var/lib/qemu-server/pve-bridge

This script sets up the tap interface, but it does not set the tap's MTU to match the bridge it is attaching it to.
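A workaround along those lines (a sketch only; the actual contents of pve-bridge vary by Proxmox version, and the variable names here are hypothetical) would be to copy the bridge's MTU onto the tap device right after it is attached:

```shell
# Hypothetical addition to a bridge hook script: make the tap
# interface's MTU match the bridge it is being attached to.
# $iface would be the tap device (e.g. tap101i0), $bridge e.g. vmbr0.
bridge_mtu=$(cat "/sys/class/net/$bridge/mtu")
ip link set dev "$iface" mtu "$bridge_mtu"
```

With that in place, attaching a new VM's tap interface would no longer drag the bridge back down to 1500.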

Eric Renfro
 
