Proxmox VE Bond with 802.1q

netweber

Feb 9, 2016
I am having problems with a four-port bond using LACP. The switch is configured for LACP and I can access the Proxmox IP on vmbr10 without issue. The problem is that the VMs have no network access if more than one of the four interfaces is plugged in. The server is a Dell R710 with four onboard Broadcom Gigabit NICs.

My /etc/network/interfaces is as follows:

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2

auto vmbr10
iface vmbr10 inet static
        address 172.18.0.48
        netmask 255.255.255.0
        gateway 172.18.0.1
        bridge_ports bond0.10
        bridge_stp off
        bridge_fd 0

auto vmbr13
iface vmbr13 inet manual
        bridge_ports bond0.13
        bridge_stp off
        bridge_fd 0

auto vmbr9
iface vmbr9 inet manual
        bridge_ports bond0.9
        bridge_stp off
        bridge_fd 0

auto vmbr14
iface vmbr14 inet manual
        bridge_ports bond0.14
        bridge_stp off
        bridge_fd 0

auto vmbr100
iface vmbr100 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
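
For reference, a quick sanity check with this kind of setup (a diagnostic sketch, nothing specific to this box) is whether the 802.3ad negotiation actually completed on the Linux side:

# All four slaves should be up and report the same Aggregator ID as the bond itself;
# a slave with a different Aggregator ID is not participating in the LACP group.
cat /proc/net/bonding/bond0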


Thanks,
Julian
 
Which switch do you have? Have you already configured the switch ports for the bond with LACP mode? Change your bond_xmit_hash_policy from layer2 to layer2+3.
 
No, use layer3+4 instead for higher bandwidth. Using layer3+4 will distribute connections more evenly than the other xmit_hash policies.
Until now I have always used layer2+3 with an HP 2920 L3 switch. So is it better to switch all the PVE servers to layer3+4 to get more bandwidth as well?
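
For reference, the hash policy is a one-line change in the bond stanza (a sketch reusing the interface names from the config above; on this PVE generation the change typically needs an ifdown/ifup of the bond, or a reboot, to take effect):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4

Keep in mind that the hash policy only controls how outgoing traffic is spread across the slaves: a single flow still uses one link, and per the kernel bonding documentation layer3+4 is not strictly 802.3ad-compliant because fragmented traffic can be reordered.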
 
Thank you for the information. I will test it next week; I am curious about it.
 
Mir & Fireon, thanks for the help with this. It appears I have been barking up the wrong tree. Since this system is remote, I have reconfigured the network so that the Proxmox IP is assigned directly to the eth0 interface and the bond now uses eth1-eth3. This let me experiment with the link aggregation without losing management access to the Proxmox server. After doing this and a little experimenting, I found that the VMs work properly with the Intel E1000 virtual NIC instead of the VirtIO NIC. I seem to remember reading in the past that the TCP Offload Engine on some cards can do inappropriate things when used with bonding. I believe the VirtIO network interfaces use the accelerated TOE of the host card. If someone could shed some light on this, I would love to have a better understanding of it. A quick way to test the offload theory is sketched after the switch config below. For the benefit of others, I am including the /etc/network/interfaces that is now working with LACP. This system is connected to an Avaya ERS 5520, so I am also including the LACP portion of that switch config.

Julian


################# /etc/network/interfaces #######################
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 172.18.0.48
        netmask 255.255.255.0
        gateway 172.18.0.1

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

auto bond0
iface bond0 inet manual
        slaves eth1 eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4

auto vmbr10
iface vmbr10 inet manual
        bridge_ports bond0.10
        bridge_stp off
        bridge_fd 0

auto vmbr13
iface vmbr13 inet manual
        bridge_ports bond0.13
        bridge_stp off
        bridge_fd 0

auto vmbr9
iface vmbr9 inet manual
        bridge_ports eth0.9
        bridge_stp off
        bridge_fd 0

auto vmbr14
iface vmbr14 inet manual
        bridge_ports bond0.14
        bridge_stp off
        bridge_fd 0

auto vmbr100
iface vmbr100 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
################### End /etc/network/interfaces #################


The Proxmox server's bond0 is connected to switch ports 37-40.
################## Avaya Switch Config ##################
! *** VLAN ***
!
vlan create 9-10,13-14,100 type port
vlan name 9 "Internet"
vlan name 10 "LAN"
vlan name 13 "DMZ1"
vlan name 14 "Guest"
vlan name 100 "Other"
vlan ports 1-36 tagging unTagPvidOnly
vlan ports 37-40 tagging tagAll
vlan ports 41-48 tagging unTagPvidOnly
vlan configcontrol flexible
vlan members 1 NONE
vlan members 9-10,13-14,100 ALL
vlan ports 1-42 pvid 10
vlan ports 43-44 pvid 9
vlan ports 45-48 pvid 10
vlan configcontrol strict

! *** LACP ***
!
interface fastEthernet ALL
lacp key port 37-40 37
lacp timeout-time port 37-40 short
lacp mode port 37-40 active
lacp aggregation port 37-40 enable
exit
!
################## End Avaya Switch Config ##################
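
Regarding the TOE/offload theory above: one way to test it is to temporarily disable the hardware offloads on the bond's slave NICs and then retry the VirtIO VMs. This is only a diagnostic sketch, assuming the standard ethtool offload flags (exact flag support varies by driver):

# Show the current offload settings on one slave
ethtool -k eth1

# Disable the common offloads on every slave in the bond, then retest with VirtIO
for nic in eth1 eth2 eth3; do
        ethtool -K $nic tso off gso off gro off
done

These settings do not survive a reboot, so they are safe to experiment with; if VirtIO starts behaving afterwards, the offload/bonding interaction is the likely culprit.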
 
After doing this and a little experimenting, I found that the VMs work properly with the Intel E1000 virtual NIC instead of the VirtIO NIC. I seem to remember reading in the past that the TCP Offload Engine on some cards can do inappropriate things when used with bonding. I believe the VirtIO network interfaces use the accelerated TOE of the host card.
The above only relates to the OS inside the VM. So what OS is used inside your VMs?
 
The above only relates to the OS inside the VM. So what OS is used inside your VMs?

I have two Windows Server 2012 R2 VMs, one FreeBSD VM, and one Linux VM. All work fine using the Intel E1000 virtual NIC with three host interfaces in the LACP bond. These VMs cannot communicate with devices outside the host when using VirtIO NICs and more than one host interface is active in the LACP bond. I can get some ICMP traffic through, but TCP traffic seems to fail.
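
For anyone following along, the NIC model is a per-VM setting; a minimal sketch of switching a guest between VirtIO and E1000 from the PVE command line, using a hypothetical VMID of 100 and the vmbr10 bridge from the config above:

# Show the current network device of VM 100 (hypothetical VMID)
qm config 100 | grep ^net

# Switch net0 to the Intel E1000 model on bridge vmbr10; add macaddr=... if the
# guest should keep its existing MAC address. Stop/start the VM to apply.
qm set 100 -net0 e1000,bridge=vmbr10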
 
These VMs cannot communicate with devices outside the host when using VirtIO NICs and more than one host interface is active in the LACP bond. I can get some ICMP traffic through, but TCP traffic seems to fail.
This sounds more like a driver issue in the VMs and/or a wrong configuration than anything else.
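
Since ICMP passing while TCP fails often points at checksum or segmentation offload, the host-side offload test sketched earlier applies here as well; inside a Linux guest with a VirtIO NIC, the analogous check is to disable offloads on the guest interface and retry (a sketch assuming the guest interface is called eth0):

# Inside the Linux guest, on the VirtIO interface (eth0 assumed here)
ethtool -K eth0 tx off rx off tso off gso off

In the Windows guests, the equivalent offload options are usually found in the VirtIO network adapter's Advanced driver properties.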
 
