LACP 802.3ad / More than one server

timonych

Well-Known Member
Hello,

I have bought a Netis ST3310GF and successfully configured LACP between a MikroTik hEX and one Proxmox server. It works perfectly. Afterwards I bought another NIC with 2 ports and found that LACP does not work for the second server.

Server 1 NIC:
Intel Corporation 82575EB Gigabit Network Connection

Server 2 NIC:
Intel Corporation 82576 Gigabit Network Connection


After some time I turned LACP off on Server 1, and LACP then started working normally on Server 2. I noticed that the Link Aggregation Group in the partner's info has the same value on both servers - 9. If I enable LACP on both servers simultaneously, one of them gets an Aggregation status of False.
(screenshot: 2019-08-20 11_17_10-152. win8pers.png)

I have tried to find a way to force a change of this group in Proxmox, but without success.

Is it possible?

P.S.
Server 2 runs in balance-xor mode now - that's why there is no info in the partner's view.
 
I do not quite understand your question.
From quick googling, the Netis ST3310GF seems to be a small managed switch (no explicit mention of LACP, but of Link Aggregation)?
The MikroTik is a 5-port router.
* Where did you configure LACP (between router and switch, or between switch and server)?
* The screenshot indicates that aggregation works?!

* balance-xor is not LACP - it is an independent bonding mode; LACP is 802.3ad - see https://www.kernel.org/doc/Documentation/networking/bonding.txt

Please post some logs from the PVE node and explain what issue you have and what you want to achieve.
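
For collecting that, something along these lines should work on the PVE node (bond0 is an assumption - use the actual name of your bond interface):

Bash:
# LACP negotiation details as seen by the bonding driver
cat /proc/net/bonding/bond0

# kernel messages related to the bond
dmesg | grep -i bond

# the network configuration itself
cat /etc/network/interfaces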

Thanks!
 
I do not quite understand your question.
From quick googling, the Netis ST3310GF seems to be a small managed switch (no explicit mention of LACP, but of Link Aggregation)?
On the Netis ST3310GF, LACP was added in the latest firmware.

(screenshot: Снимок.PNG)

I have configured LACP between the RouterBOARD 750G r3 (MikroTik) and the Netis ST3310GF - ports 7 and 8 are in a trunk (Trunk Group 3).
I have also configured LACP between the Netis ST3310GF and my server - ports 3 and 4 are in a trunk (Trunk Group 2).

I have tried to configure LACP on Server 2, but without success, because the group ID is the same as for Server 1.

(screenshot: Снимок1.PNG)

I need to configure it on the Proxmox node, but I didn't find where I could do that.

So Server 2 currently works only in balance-xor with the Link Aggregation type (Trunk Group 1).

(screenshot: Снимок2.PNG)
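
For reference, a minimal sketch of what the bond stanza in /etc/network/interfaces would probably look like if Server 2 were switched from balance-xor to 802.3ad (the slave names below are placeholders, not the real NICs of Server 2):

Bash:
auto bond0
iface bond0 inet manual
        # placeholder slave names - replace with the actual interfaces of Server 2
        bond-slaves eno1 eno2
        bond-miimon 100
        # 802.3ad = LACP (dynamic link aggregation)
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3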
 
because the group ID is the same as for Server 1.
No experience with Netis switches here - but it's new to me that you'd need to set an explicit group ID on the server side for LACP...

Please post the errors you're seeing on the PVE node and your network configuration (/etc/network/interfaces).
You can see various bonding parameters in /sys/class/net/bondX (where bondX is the name of the bond interface).
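For example (a quick sketch assuming the bond is called bond0; the ad_* entries only carry values in 802.3ad mode):

Bash:
# list all bonding parameters exposed via sysfs
ls /sys/class/net/bond0/bonding/

# 802.3ad specifics: aggregator id, partner key and partner MAC
cat /sys/class/net/bond0/bonding/ad_aggregator
cat /sys/class/net/bond0/bonding/ad_partner_key
cat /sys/class/net/bond0/bonding/ad_partner_mac

# the same information in a more readable form
cat /proc/net/bonding/bond0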


I would also ask Netis support about the issue...

I hope this helps!
 
Bash (/etc/network/interfaces):
auto lo
iface lo inet loopback

allow-hotplug enp31s0

auto enp32s0
iface enp32s0 inet manual

auto enp30s0f0
iface enp30s0f0 inet manual

auto enp30s0f1
iface enp30s0f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves enp30s0f0 enp30s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto bond1
iface bond1 inet manual
        bond-slaves enp32s0
        bond-miimon 100
        bond-mode active-backup

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.12
        netmask  24
        gateway  192.168.1.1
        bridge-ports bond0 bond1
        bridge-stp off
        bridge-fd 0

Bash (/proc/net/bonding/bond0):
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:1b:21:31:0c:ea
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 2
        Actor Key: 9
        Partner Key: 1
        Partner Mac Address: 08:10:79:b3:ad:34

Slave Interface: enp30s0f0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:31:0c:ea
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: 00:1b:21:31:0c:ea
    port key: 9
    port priority: 255
    port number: 1
    port state: 13
details partner lacp pdu:
    system priority: 1
    system mac address: 08:10:79:b3:ad:34
    oper key: 1
    port priority: 1
    port number: 1
    port state: 5

Slave Interface: enp30s0f1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:31:0c:eb
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: 00:1b:21:31:0c:ea
    port key: 9
    port priority: 255
    port number: 2
    port state: 13
details partner lacp pdu:
    system priority: 1
    system mac address: 08:10:79:b3:ad:34
    oper key: 1
    port priority: 1
    port number: 2
    port state: 5
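
A side note on the output above: the "port state" field is a bit mask of the standard LACP actor/partner state flags. A small shell sketch to decode it (decode_lacp_state is just a hypothetical helper, not part of any tool):

Bash:
# decode an LACP port state bit mask, e.g. the "port state: 13" / "port state: 5" above
decode_lacp_state() {
    local s=$1 i out=""
    local names=(LACP_Activity LACP_Timeout Aggregation Synchronization Collecting Distributing Defaulted Expired)
    for i in "${!names[@]}"; do
        (( s & (1 << i) )) && out+="${names[$i]} "
    done
    echo "$s = $out"
}

decode_lacp_state 13   # actor:   LACP_Activity Aggregation Synchronization
decode_lacp_state 5    # partner: LACP_Activity Aggregation (no Synchronization)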

Bash (kernel log, dmesg):
[    7.561146] bonding: bond0 is being created...
[    7.934251] bond0: Enslaving enp30s0f0 as a backup interface with a down link
[    8.302225] bond0: Enslaving enp30s0f1 as a backup interface with a down link
[    8.523969] vmbr0: port 1(bond0) entered blocking state
[    8.523971] vmbr0: port 1(bond0) entered disabled state
[    8.524129] device bond0 entered promiscuous mode
[   10.288855] bond0: link status definitely up for interface enp30s0f0, 1000 Mbps full duplex
[   10.288859] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
[   10.288865] bond0: first active interface up!
[   10.289162] vmbr0: port 1(bond0) entered blocking state
[   10.289164] vmbr0: port 1(bond0) entered forwarding state
[   10.704876] bond0: link status definitely up for interface enp30s0f1, 1000 Mbps full duplex
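
Regarding the "No 802.3ad response from the link partner" warning above: one way to check whether the switch is sending LACPDUs to this server at all is to capture the Slow Protocols frames on one of the slave interfaces (tcpdump is assumed to be installed; interface name as in the config above):

Bash:
# LACPDUs use the Slow Protocols ethertype 0x8809;
# with "LACP rate: slow" expect roughly one frame every 30 seconds from the partner
tcpdump -e -nn -i enp30s0f0 ether proto 0x8809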
 

Attachments

  • 2019-08-29 12_24_03-ST3310GF.png
  • 2019-08-29 12_25_32-ST3310GF.png
