[SOLVED] LACP net configuration for VM network

simon_lefisch

Hello everyone,

I currently have a failover bond configured for the management interface of my Proxmox instance. Now I am trying to create an LACP interface for the VM network but am having some issues when doing so.

Any time I try to configure LACP, I lose connection to the management interface. I believe I have the configuration on my Netgear GS728tpv2 switch set correctly, so hopefully someone can help me out and point me in the right direction. Below is my Proxmox configuration.

Code:
# Part of LACP interface for VM Network
iface eth0 inet manual

# Part of LACP interface for VM Network
iface eth1 inet manual

# This is the interface that will be configured as LACP for the new VM network
auto bond1
iface bond1 inet manual
    bond-slaves eth0 eth1
    bond-miimon 100
    bond-mode 802.3ad
    bond-emit-hash-policy layer2
    mtu 900

# This will be the bridge interface for the VM network using bond1
auto vmbr1
iface vmbr1 net static
    address 192.168.1.40/24
    gateway 192.168.1.1
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vlans 2-4094

For my switch, I have added ports 13 & 14 together to make them a LAG interface and changed the type to LACP. I then added that LAG interface to VLAN 1 on the switch, along with making sure the PVID is set to 1 as well. When implementing the configuration above for the VM network in Proxmox, I lose access to the management interface.
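
For reference, once the bond comes up, the LACP negotiation state can be checked on the Proxmox side (assuming the bond is named bond1 as above) with:

Code:
# Bonding driver status, including the 802.3ad aggregator/partner details
cat /proc/net/bonding/bond1

# Link-level view of the bond and its slaves
ip -d link show bond1

If the switch-side LAG is not actually negotiating, the "Partner Mac Address" in the first output typically stays at 00:00:00:00:00:00.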

If anyone has any ideas on what I'm missing, it would be greatly appreciated. TIA.
 
I think you have a typo in the MTU line (missing 0), but that doesn't mean it's the underlying issue.
Also, it's not bond-emit-hash-policy, it's bond-xmit-hash-policy. Again, I don't know if that's the root cause.

Since you did not provide your mgmt interface configuration, there is a chance that you have a conflict between the two.
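
For reference, a corrected bond stanza along those lines might look like the sketch below (assuming the MTU was meant to be 9000; drop the mtu line entirely if standard 1500-byte frames are fine):

Code:
auto bond1
iface bond1 inet manual
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2
        mtu 9000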


 
Thanks for the reply @bbgeek17. I'll change the entry you pointed out and give it a try in a bit.

Below is my management interface configuration.

Code:
# Part of failover interface bond0 below
auto eno1
iface eno1 inet manual

# Part of failover interface bond0 below
auto eno2
iface eno2 inet manual

# Failover for Proxmox management interface
auto bond0
iface bond0 inet static
        address 192.168.1.20/24
        gateway 192.168.1.1
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno1
        bond-miimom 100
 
You have placed both interfaces on the same network, creating conflicting routing/exit points.
The easiest way is to remove IP settings, as @floh8 pointed out.
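
A minimal sketch of what that IP-less bridge could look like, reusing bond1 from the first post (no address or gateway, so it cannot conflict with the management interface):

Code:
auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094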

If you really need the IP on the VM interface (perhaps for storage access, or something else), you should use a different network address, e.g. 192.168.2.x/24.

Good luck



 
Wouldn't I need an IP for bond1 in order for traffic to flow on the VM network, though?
No, the VMs have their own interfaces/IPs that are advertised on that bridge. You don't need the underlying bridge to have an IP for the VM connectivity.

There are more complex configurations where you may need an IP, e.g. NAT. However, you are likely not using one now.
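
To make the attachment explicit: each VM's virtual NIC names the bridge in the guest configuration, so its traffic flows through vmbr1 (and therefore bond1) even though the bridge itself has no address. A hypothetical example line from /etc/pve/qemu-server/<vmid>.conf (the MAC and VLAN tag are placeholders):

Code:
# Virtual NIC attached to the LACP-backed bridge, tagged for VLAN 10
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr1,tag=10,firewall=1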


 
Then how would my VMs know to use bond1 if it's not listed as an option for their NIC?

(screenshot: the VM's network device dialog, showing the available bridge options)
 
I only commented vmbr1 out in the config (see below).

Code:
root@pve:~# cat /etc/network/interfaces

auto lo
iface lo inet loopback

# Part of failover interface bond0 below
auto eno1
iface eno1 inet manual

# Part of failover interface bond0 below
auto eno2
iface eno2 inet manual

iface eth0 inet manual

iface eth1 inet manual

# Failover for Proxmox management interface
auto bond0
iface bond0 inet static
        address 192.168.1.20/24
        gateway 192.168.1.1
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno1
        bond-miimom 100

# This is the current VM network interface
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.30/24
        bridge-ports eth1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# This is the interface that will be configured as LACP for the new VM network
auto bond1
iface bond1 inet manual
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2
        mtu 900

# This will be the bridge interface for the VM network using bond1
#auto vmbr1
#iface vmbr1 net static
#       address 192.168.1.40/24
#       gateway 192.168.1.1
#       bridge-ports bond1
#       bridge-stp off
#       bridge-fd 0
#       bridge-vlan-aware yes
#       bridge-vlans 2-4094

Code:
root@pve:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:25:90:c8:3d:1c brd ff:ff:ff:ff:ff:ff
    altname enp3s0f0
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 00:25:90:c8:3d:1d brd ff:ff:ff:ff:ff:ff
    altname enp3s0f1
    inet6 fe80::225:90ff:fec8:3d1d/64 scope link
       valid_lft forever preferred_lft forever
4: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether ac:1f:6b:45:95:a2 brd ff:ff:ff:ff:ff:ff
    altname enp9s0f0
5: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether ac:1f:6b:45:95:a2 brd ff:ff:ff:ff:ff:ff permaddr ac:1f:6b:45:95:a3
    altname enp9s0f1
76: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:45:95:a2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.20/24 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fd9f:6340:982e:6c4f:ae1f:6bff:fe45:95a2/64 scope global dynamic mngtmpaddr
       valid_lft 1707sec preferred_lft 1707sec
    inet6 fe80::ae1f:6bff:fe45:95a2/64 scope link
       valid_lft forever preferred_lft forever
77: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:90:c8:3d:1d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.30/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::225:90ff:fec8:3d1d/64 scope link
       valid_lft forever preferred_lft forever
78: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 900 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:c4:7a:55:a3:42 brd ff:ff:ff:ff:ff:ff
 
@bbgeek17 still can see vmbr1 in VM NIC option. Outputs below. I tried setting vmbr1 to static and manual, same issue.

Code:
root@-pve:~# cat /etc/network/interfaces

auto lo
iface lo inet loopback

# Part of failover interface bond0 below
auto eno1
iface eno1 inet manual

# Part of failover interface bond0 below
auto eno2
iface eno2 inet manual

# Part of VM network interface bond1
iface eth0 inet manual

# Part of VM network interface bond1
iface eth1 inet manual

# Failover for Proxmox management interface
auto bond0
iface bond0 inet static
        address 192.168.1.20/24
        gateway 192.168.1.1
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno1
        bond-miimom 100

# This is the current VM network interface
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.30/24
        bridge-ports eth1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# This is the VM network interface that is configured as LACP
auto bond1
iface bond1 inet manual
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2
        mtu 900

# This will be the bridge interface for the VM network using bond1
auto vmbr1
iface vmbr1 net manual
#       address 192.168.63.40/24
#       gateway 192.168.63.1
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vlans 2-4094

Code:
root@id3-pve:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether ac:1f:6b:45:95:a2 brd ff:ff:ff:ff:ff:ff
    altname enp9s0f0
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether ac:1f:6b:45:95:a2 brd ff:ff:ff:ff:ff:ff permaddr ac:1f:6b:45:95:a3
    altname enp9s0f1
4: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 900 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 0c:c4:7a:55:a3:42 brd ff:ff:ff:ff:ff:ff
5: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 900 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 0c:c4:7a:55:a3:42 brd ff:ff:ff:ff:ff:ff permaddr 0c:c4:7a:55:a3:43
83: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ac:1f:6b:45:95:a2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.63.20/24 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fd9f:6340:982e:6c4f:ae1f:6bff:fe45:95a2/64 scope global dynamic mngtmpaddr
       valid_lft 1733sec preferred_lft 1733sec
    inet6 fe80::ae1f:6bff:fe45:95a2/64 scope link
       valid_lft forever preferred_lft forever
84: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:90:c8:3d:1d brd ff:ff:ff:ff:ff:ff
    inet 192.168.63.30/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::225:90ff:fec8:3d1d/64 scope link
       valid_lft forever preferred_lft forever
85: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 900 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether 0c:c4:7a:55:a3:42 brd ff:ff:ff:ff:ff:ff
86: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:c4:7a:55:a3:42 brd ff:ff:ff:ff:ff:ff
    inet6 fd9f:6340:982e:6c4f:ec4:7aff:fe55:a342/64 scope global dynamic mngtmpaddr
       valid_lft 1733sec preferred_lft 1733sec
    inet6 fe80::ec4:7aff:fe55:a342/64 scope link
       valid_lft forever preferred_lft forever
 
@bbgeek17 still can see vmbr1 in VM NIC option. Outputs below. I tried setting vmbr1 to static and manual, same issue.
I think you misspoke here? Perhaps give it a reboot.
I just put a fake bridge on a downed interface and it showed up in VM config.
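
Side note: if a reboot is inconvenient, changes to /etc/network/interfaces can usually be applied live with ifupdown2 (installed by default on current Proxmox VE):

Code:
# Re-apply the whole interfaces file without rebooting
ifreload -a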


 
Correct, I meant to say I still CAN'T see vmbr1 in the VM NIC options.

It does show in the network configuration section of the node, but it says the type is unknown.

(screenshot: the node's network configuration list, showing vmbr1 with type "Unknown")

I'll reboot the node and see what happens.
 
Ok I got it working.

I removed the vmbr1 config from the shell and re-added it through the Proxmox GUI, and that got it working. I compared its configuration to what I had created in the shell and it looks the same to me, so I'm not sure why it wasn't working when I made the changes via the shell (I restarted the networking service after making them).
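
Looking back at the config posted above, though, the hand-written stanza had "iface vmbr1 net manual" and "bridge-vlans" where the GUI writes "inet" and "bridge-vids", which would likely explain the "unknown" type. With ifupdown2, one way to sanity-check how a stanza was parsed is:

Code:
# Print the parsed configuration for vmbr1 as ifupdown2 understands it
ifquery vmbr1

# Compare the running state of vmbr1 against the configuration file
ifquery --check vmbr1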

Thanks for all the help guys, it's much appreciated.
 
If you need 2 gateways, I recommend using Open vSwitch.

This is my config used in production:
Code:
root@pox43:~# cat /etc/network/interfaces

auto lo
iface lo inet loopback

auto ens18
iface ens18 inet manual
        ovs_mtu 9000

auto ens19
iface ens19 inet manual
        ovs_mtu 9000

auto ens20
iface ens20 inet manual
        ovs_mtu 9000

auto ens21
iface ens21 inet manual
        ovs_mtu 9000

auto ens22
iface ens22 inet manual
        ovs_mtu 9000

auto ens23
iface ens23 inet manual
        ovs_mtu 9000
# End Interface Config

auto bond0
iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond
        ovs_bonds ens18 ens19
        ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
        ovs_mtu 9000

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 vlan29
        ovs_mtu 9000
# Proxmox MGMT Cluster Communication vlan

auto bond1
iface bond1 inet manual
        ovs_bridge vmbr1
        ovs_type OVSBond
        ovs_bonds ens20 ens21
        ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
        ovs_mtu 9000

auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports bond1 vlan31
        ovs_mtu 9000
# Proxmox Ceph Cluster Communication vlan

auto bond2
iface bond2 inet manual
        ovs_bridge vmbr2
        ovs_type OVSBond
        ovs_bonds ens22 ens23
        ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
        ovs_mtu 9000

auto vmbr2
iface vmbr2 inet manual
        ovs_type OVSBridge
        ovs_ports bond2
        ovs_mtu 9000
# Proxmox Service on VM

auto vlan29
iface vlan29 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=29
        address 192.168.29.43
        netmask 255.255.255.0
        gateway 192.168.29.254
        ovs_mtu 9000
# Cluster vlan

auto vlan31
iface vlan31 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr1
        ovs_options tag=31
        address 192.168.31.43
        netmask 255.255.255.0
        gateway 192.168.31.254
        ovs_mtu 9000
# Ceph vlan

source /etc/network/interfaces.d/*
root@pox43:~#
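
One note on configs like the above: the OVS stanzas only work once the Open vSwitch package is installed on the Proxmox host (it is not part of the default install):

Code:
apt install openvswitch-switch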
 
Thanks for the reply @Pasit.

My setup isn't that complicated. I was able to get OVS set up in LACP mode for my VM network, which is all I need it for. My Proxmox management interface has its own physical NICs (as does my VM network), so having them bonded as they are now with failover is perfect for my needs.

I appreciate your info though! The more info I have, the better, as I am still getting used to Proxmox (I had my VMs on a CentOS 7 box previously).
 
