[SOLVED] ifupdown2 and bond

RobFantini

Hello,
On our PBS system a warning flashed about ifupdown2 (or ifdown2) being missing, so I installed it. After doing so my existing bond did not work, so I had to change /etc/network/interfaces to the new bond directives.

I have 5 PVE nodes to get the bond working on.

My question: is ifupdown2 going to be used by default in the future? If so, I'll just install it now and make the bonds in /etc/network/interfaces work with it.
 
ifupdown2 should be installed and used by default on PBS. For PVE there are no plans to make it the default in 6.X, so for now it will be an optional package.
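If you want to see which of the two a node is currently using before deciding, something along these lines should work (just a sketch, assuming standard Debian packaging):

Code:
# check which network configuration tool is installed on this node
dpkg -l ifupdown ifupdown2

# install ifupdown2 (default on PBS, optional on PVE 6.x)
apt install ifupdown2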
 
Code:
## NEW (ifupdown2)
auto bond0
iface bond0 inet static
        address 10.11.12.80/24
        bond-mode active-backup
        bond-primary enp3s0f0
        bond-slaves enp3s0f0 enp3s0f1
        mtu 9000
        

## OLD
auto bond0
iface bond0 inet static
        address 10.11.12.80/24
        slaves enp3s0f0 enp3s0f1
        bond_miimon 100
        bond_mode active-backup
        mtu 9000
Note the '_' changed to '-' for bond_mode, and bond-primary had to be set.

I just did what was needed to get the NIC working; I'm not sure those are ideal settings.
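To double-check that the bond came up in active-backup mode with the intended primary, the kernel's bonding status file can be inspected (a sketch, using the interface names from the config above):

Code:
# show current bond mode, primary and active slave
cat /proc/net/bonding/bond0

# expected output includes roughly:
#   Bonding Mode: fault-tolerance (active-backup)
#   Primary Slave: enp3s0f0 (primary_reselect always)
#   Currently Active Slave: enp3s0f0
#   MII Status: up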
 
Thanks.

It's related to "slaves" vs "bond-slaves". I recently sent a patch to update the docs,

because "slaves" has been deprecated in Debian/ifupdown1 since 2013, and ifupdown2 doesn't support it.

(If you do the config through the GUI, it should already use bond-slaves.)

(About the "_": it works in both cases, but it's better to use "-".)
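Before switching a node it may be worth grepping the existing config for the legacy spellings, roughly like this (a sketch; adjust the path if you also use /etc/network/interfaces.d/):

Code:
# find the deprecated 'slaves' keyword and underscore-style options
grep -nE '^\s*(slaves\s|bond_|bridge_)' /etc/network/interfaces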
 
So after looking at the networking expertise the authors of ifupdown2 have, I think we'll switch our PVE cluster to use ifupdown2.
 
Hello Spirit. During the switchover to ifupdown2 I noticed a few warnings, which you are probably already aware of. I assume these will not cause an issue; however, are there settings to avoid the warnings?

Code:
# ifreload -a
warning: bond0: attribute bond-min-links is set to '0'


and from dmesg:
Code:
# 1
[Sun Nov 15 06:17:30 2020] vmbr0: the hash_elasticity option has been deprecated and is always 16


#  2
# do these apparmor messages indicate that a data lookup was denied?
#
[Sun Nov 15 06:13:47 2020] bond2: (slave enp3s0f1): Enslaving as a backup interface with an up link
[Sun Nov 15 06:13:47 2020] audit: type=1400 audit(1605438829.114:192): apparmor="DENIED" operation="open" profile="/usr/sbin/sssd" name="/sys/devices/pci0000:00/0000:00:02.2/0000:03:00.0/net/enp3s0f0/type" pid=2331 comm="sssd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[Sun Nov 15 06:13:47 2020] audit: type=1400 audit(1605438829.114:193): apparmor="DENIED" operation="open" profile="/usr/sbin/sssd" name="/sys/devices/pci0000:00/0000:00:02.2/0000:03:00.1/net/enp3s0f1/type" pid=2331 comm="sssd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[Sun Nov 15 06:13:49 2020] audit: type=1400 audit(1605438830.378:194): apparmor="DENIED" operation="open" profile="/usr/sbin/sssd" name="/sys/devices/pci0000:00/0000:00:02.2/0000:03:00.0/net/enp3s0f0/type" pid=2331 comm="sssd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0

interfaces
Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.1.10.14/24
        gateway 10.1.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

Note that 'apt install ifupdown2' creates an interfaces.new file. I moved that to interfaces and ran ifreload -a.
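For the remaining nodes I'll probably follow roughly this sequence (a sketch of what worked here; review the diff before applying, since ifreload applies the config live):

Code:
apt install ifupdown2

# compare the proposed config with the running one
diff -u /etc/network/interfaces /etc/network/interfaces.new

# adjust the bond directives (bond-slaves, bond-mode, bond-primary, ...), then apply
mv /etc/network/interfaces.new /etc/network/interfaces
ifreload -a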
 
Code:
However, are there settings to avoid the warnings?

Code:
# ifreload -a
warning: bond0: attribute bond-min-links is set to '0'

I need to patch this. It's a default for physical Cumulus switches. You can ignore it.
https://github.com/CumulusNetworks/ifupdown2/issues/131

(0 = keep the bond interface up even if all physical links are down)
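If you prefer to silence the warning rather than ignore it, setting the attribute explicitly should work (an untested sketch; note it changes behaviour, since with min-links 1 the bond goes down when no slave has link):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        bond-min-links 1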

Code:
# 1
[Sun Nov 15 06:17:30 2020] vmbr0: the hash_elasticity option has been deprecated and is always 16

It's for compatibility with older kernels (the value is hardcoded in recent kernels).
https://www.spinics.net/lists/linux-ethernet-bridging/msg07636.html


Code:
#  2
# do these apparmor messages indicate that a data lookup was denied?
#
[Sun Nov 15 06:13:47 2020] bond2: (slave enp3s0f1): Enslaving as a backup interface with an up link
[Sun Nov 15 06:13:47 2020] audit: type=1400 audit(1605438829.114:192): apparmor="DENIED" operation="open" profile="/usr/sbin/sssd" name="/sys/devices/pci0000:00/0000:00:02.2/0000:03:00.0/net/enp3s0f0/type" pid=2331 comm="sssd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[Sun Nov 15 06:13:47 2020] audit: type=1400 audit(1605438829.114:193): apparmor="DENIED" operation="open" profile="/usr/sbin/sssd" name="/sys/devices/pci0000:00/0000:00:02.2/0000:03:00.1/net/enp3s0f1/type" pid=2331 comm="sssd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[Sun Nov 15 06:13:49 2020] audit: type=1400 audit(1605438830.378:194): apparmor="DENIED" operation="open" profile="/usr/sbin/sssd" name="/sys/devices/pci0000:00/0000:00:02.2/0000:03:00.0/net/enp3s0f0/type" pid=2331 comm="sssd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Mmmm, this is strange... "sssd" doesn't seem related to ifupdown2. (It's for Active Directory/Kerberos authentication, right?)
Do you have some scripts in /etc/network/if-up.d/ or /etc/network/if-pre-up.d/ with an sssd command inside?
Anyway, it shouldn't be a problem for ifupdown2.
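To check whether something in the ifupdown hook directories is pulling in sssd, a quick look like this should be enough (a sketch):

Code:
# list hook scripts and search them for sssd
ls -l /etc/network/if-pre-up.d/ /etc/network/if-up.d/
grep -rl sssd /etc/network/if-pre-up.d/ /etc/network/if-up.d/ 2>/dev/null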
 
Off topic: we are looking at upgrading our Ceph switches. We are currently using Quanta LB6M 10GbE switches, and we have 40GbE cards. We think Mellanox/Nvidia switches running Cumulus Linux are the way to go.

However, I know little on this subject. Is Cumulus a good fit in labs and clusters?
 
I'm running Mellanox s21XX and s37XX switches in production; they are running fine. But I don't use Cumulus currently (I'm using the native Onyx OS).
I have some other switches with Cumulus Linux; it's basically a Debian with ifupdown + FRR for routing. It runs fine, and upgrades are well tested.

(I don't use Cumulus on the Mellanox switches because currently I'm not doing routing, EVPN, or advanced features. I only do simple MLAG with LACP, and I didn't want to pay the extra license cost.)
 
