Unable to get port-channel up in beta3?

wetwilly

Hello.

Decided to try out the 2.0 beta 3 and ran into problems getting the port-channel (bond0 interface) up correctly with my Cisco 3560G switch.
I couldn't find any "known bugs" list for the beta, so I'm wondering if anyone has managed to get a port-channel working correctly in the beta?

I had a port-channel up and running smoothly on my previous stable install (although with a different NIC).

The issue I'm experiencing is that the physical interfaces come up, but the port-channel stays down. No MAC addresses are visible on any of the interfaces on the switch.

Gi0/1 LACP_Proxmox connected trunk a-full a-1000 10/100/1000BaseTX
Gi0/2 LACP_Proxmox connected trunk a-full a-1000 10/100/1000BaseTX

Po1 LACP_Proxmox notconnect trunk a-full a-1000


This is my config

sh run int gi0/1
interface GigabitEthernet0/1
description LACP_Proxmox
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 2,10,100
switchport mode trunk
channel-group 1 mode desirable (also tried auto)
end
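
(For comparison: bond_mode 4 is 802.3ad/LACP, while desirable/auto are, as far as I know, the PAgP negotiation modes on IOS. The LACP flavour of the same port config would look roughly like this. Just a sketch with the same VLANs, not tested on this 3560G:)

interface GigabitEthernet0/1
description LACP_Proxmox
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 2,10,100
switchport mode trunk
! active/passive are the LACP modes; desirable/auto negotiate PAgP instead
channel-group 1 mode active
end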

/etc/network/interfaces

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual

auto bond0
iface bond0 inet manual
slaves eth1 eth2
bond_miimon 100
bond_mode 4
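
(Side note: if I read the ifenslave docs right, the mode can also be spelled out by name and the LACP knobs set explicitly; something along these lines should be equivalent to the numeric form above. Untested sketch:)

auto bond0
iface bond0 inet manual
slaves eth1 eth2
bond_miimon 100
# 802.3ad is the same as mode 4; rate and hash policy match what /proc/net/bonding shows below
bond_mode 802.3ad
bond_lacp_rate slow
bond_xmit_hash_policy layer2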


pveversion -v
pve-manager: 2.0-14 (pve-manager/2.0/6a150142)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 2.0-54
pve-kernel-2.6.32-6-pve: 2.6.32-54
lvm2: 2.02.86-1pve2
clvm: 2.02.86-1pve2
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-1
libqb: 0.6.0-1
redhat-cluster-pve: 3.1.7-1
pve-cluster: 1.0-12
qemu-server: 2.0-11
pve-firmware: 1.0-13
libpve-common-perl: 1.0-10
libpve-access-control: 1.0-3
libpve-storage-perl: 2.0-9
vncterm: 1.0-2
vzctl: 3.0.29-3pve7
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-1
ksm-control-daemon: 1.1-1

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 1
Actor Key: 17
Partner Key: 1
Partner Mac Address: 00:00:00:00:00:00

Slave Interface: eth1
MII Status: up
Link Failure Count: 1
Permanent HW addr: 90:e2:ba:xx
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 1
Permanent HW addr: 90:e2:ba:xx
Aggregator ID: 2
Slave queue ID: 0
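
(The all-zero Partner Mac Address and the two slaves landing in different aggregators look to me like no LACP partner ever answered. LACP rides on the slow-protocols ethertype 0x8809, so if anyone wants to double-check whether the switch sends anything at all, something like this should show it. Just a suggestion, assuming tcpdump is installed:)

tcpdump -e -i eth1 ether proto 0x8809
tcpdump -e -i eth2 ether proto 0x8809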
 
Has no one tried a port-channel on the beta?

Right now I'm running each NIC on its own and it's working nicely, but I would like to get that port-channel up.

To anyone who got it to work: do you use channel-group mode desirable or mode auto?
 
Decided to give this another go and got the port-channel up on the Cisco, but only with a static EtherChannel:

channel-group 1 mode on

On my earlier Intel NICs I had been using mode active. I guess this is a quirk of the latest Intel igb driver, or maybe LACP à la Cisco isn't supported at all.
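
(For anyone following along, the checks I know of to confirm the bundle: on the switch the etherchannel summary, on the host the bonding proc file. With mode on there is no LACP negotiation at all, so as far as I understand the 802.3ad partner fields on the host side will stay empty either way.)

On the switch:
show etherchannel 1 summary
! a "P" flag next to Gi0/1 and Gi0/2 means the ports are bundled in Po1

On the host:
cat /proc/net/bonding/bond0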
 
