Dell T620 onboard NIC bond0 requires manual ifenslave to work?

Hi list, I've recently bought a Dell T620 server that has 2x Intel onboard NICs and a 4-port Broadcom add-on card. I want to 802.3ad-bond the onboard NICs, eth4 and eth5. The switch is set up for LACP.

I initially set up the bond via the web interface and edited vmbr0 to add my bond0 to that bridge, as below:
Code:
# network interface settings
auto lo
iface lo inet loopback
iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual
iface eth4 inet manual
iface eth5 inet manual

auto bond0
iface bond0 inet manual
        slaves eth4,eth5
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address  192.168.0.x
        netmask  255.255.255.0
        gateway  192.168.0.x
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
However, this configuration doesn't give me any connectivity. I get error messages whenever I attempt to start the interface:
Code:
# service networking restart
Reconfiguring network interfaces...device bond0 is not a slave of vmbr0
Failed to enslave eth4,eth5 to bond0.  Is bond0 ready and a bonding interface? can't add bond0 to bridge vmbr0: invalid argument
As far as I can tell, it may be that, because I have a lot of interfaces, the network devices are not being brought up in the right order. If I run the following commands manually, the bond comes up successfully and runs fine:
Code:
# ip link set dev bond0 up
# ifenslave bond0 eth4 eth5
# ifdown vmbr0 && ifup vmbr0
# ping 192.168.x.x
PING 192.168.x.x (192.168.x.x) 56(84) bytes of data.
64 bytes from 192.168.x.x: icmp_req=1 ttl=64 time=286 ms
64 bytes from 192.168.x.x: icmp_req=2 ttl=64 time=0.243 ms
64 bytes from 192.168.x.x: icmp_req=3 ttl=64 time=0.188 ms
64 bytes from 192.168.x.x: icmp_req=4 ttl=64 time=0.223 ms
64 bytes from 192.168.x.x: icmp_req=5 ttl=64 time=0.198 ms
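For anyone wanting to verify the bond state after doing this, the kernel's bonding status file and brctl are handy (standard commands; the exact output varies with kernel and driver versions):
Code:
# look for "Bonding Mode: IEEE 802.3ad Dynamic link aggregation" and an
# "MII Status: up" entry for each of eth4 and eth5
cat /proc/net/bonding/bond0
# confirm bond0 actually ended up as a port of the bridge
brctl show vmbr0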
So I have had to edit the interfaces file manually, as follows, to get this working. With this edit in place everything seems OK.
Code:
# network interface settings
auto lo
iface lo inet loopback
iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual
iface eth4 inet manual
iface eth5 inet manual

auto bond0
iface bond0 inet manual
        slaves eth4,eth5
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address  192.168.0.x
        netmask  255.255.255.0
        gateway  192.168.0.x
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        pre-up ip link set dev bond0 up && ifenslave bond0 eth4 eth5   <======== added to get it working
What I'd like to know from the forums is: is this a reasonable fix? Is there something else I should be looking at to get the default interfaces file, as generated by Proxmox, working?

pveversion is:
Code:
# pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-71
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1
Thanks in advance.
 
The configuration looks correct. I guess everything works if you boot with that configuration?

Note: sometimes 'service networking restart' fails because of a changed config.
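If a stale bond is the issue, one sketch for forcing the new config to apply without a full reboot is to delete the bond via the kernel's bonding sysfs interface first (the sysfs path is standard, but this is untested on this particular setup):
Code:
ifdown vmbr0                                   # detach and stop the bridge
ip link set dev bond0 down                     # bond must be down before removal
echo -bond0 > /sys/class/net/bonding_masters   # delete the stale bond device
service networking restart                     # recreate bond0 from the new config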
 
The above solution did not work out for 3 of my servers, all of them Dell hardware. What actually did work is below:

Code:
auto lo
iface lo inet loopback


auto eth0
iface eth0 inet manual
    bond-master bond0


auto eth1
iface eth1 inet manual
    bond-master bond0


auto bond0
iface bond0 inet manual
    slaves none
    bond_miimon 100
    bond_mode balance-rr


auto vmbr0
iface vmbr0 inet static
    address  192.168.0.2
    netmask  255.255.255.0
    gateway  192.168.0.254
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
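Note that this stanza uses balance-rr, whereas the original post's switch is configured for LACP; if that applies to you, the same layout should presumably keep the 802.3ad mode from the first post:
Code:
auto bond0
iface bond0 inet manual
    slaves none
    bond_miimon 100
    bond_mode 802.3ad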
 
Nice shot ddrazyk ... I was stuck for 2 days with exactly the same problem on an R620; your last conf works perfectly for me as well :)
 
I had the same problem recently and was looking in many directions (drivers etc.), but it turned out to be only syntax. I had used the comma form of the slaves option on one machine:
slaves eth1, eth7 -> not working

Then I saw on the other machine, where it worked, that the list is space-separated, with no comma:
slaves eth1 eth7 -> working
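So, applied to the first post's config, the stanza should presumably read like this (space-separated, no comma, and no pre-up workaround needed):
Code:
auto bond0
iface bond0 inet manual
    slaves eth4 eth5
    bond_miimon 100
    bond_mode 802.3ad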