Migration from Linux bridge to OVS - doesn't work

listhor

Member
Nov 14, 2023
Hi,
I would like to migrate two NICs - i226-V (facing the LAN) - to OVS, mainly for the RSTP functionality.
The bridge in question is vmbr23 and is used for all VMs, including the OPNsense VM, where it serves as the LAN interface. After applying the changes I use this as a delayed fallback:
Code:
ifreload -a; (sleep 120; cp /etc/network/interfaces.bak /etc/network/interfaces && ifreload -a)& (sleep 240; reboot now)&

I lose connectivity to this server (no OOB access) and the only way back is to restore the Linux bridge config. What also bothers me is that the command above doesn't work reliably every time I use it: sometimes it doesn't copy the good config back and just restarts the host, and then I need to ask for local assistance (I work on it remotely). Is there anything better to use for this purpose?
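
The best alternative I can think of so far (an untested sketch; it assumes /etc/network/interfaces.bak already holds the known-good config and that I confirm the new config by creating a flag file while the host is still reachable) would be a cancellable fallback instead of the chained one-liner:
Code:
# Apply the new config, then roll back after 120 s unless /tmp/net-ok exists
ifreload -a
(
    sleep 120
    if [ ! -e /tmp/net-ok ]; then
        cp /etc/network/interfaces.bak /etc/network/interfaces
        ifreload -a
    fi
) &
# if the host is still reachable after the change: touch /tmp/net-ok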

So, what's wrong or what am I missing?

Current linux bridge config:
Code:
auto lo
iface lo inet loopback

iface enp2s0 inet manual
#lan

iface enp1s0 inet manual
#wan

iface enp3s0 inet manual
#iot

iface enp4s0 inet manual
#extra

auto vmbr23
iface vmbr23 inet manual
    bridge-ports enp2s0 enp3s0.12
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
#LAN trunk and #3 IoT

auto vmbr4
iface vmbr4 inet static
    address 10.10.0.2/26
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0
#extra

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
#WAN

auto vmbr23.1
iface vmbr23.1 inet static
    address 172.16.0.11/24
    gateway 172.16.0.1
#Main access

iface vmbr23.1 inet6 static
    address 2001:XXXX
    gateway 2001:XXXX

auto vmbr23.11
iface vmbr23.11 inet static
    address 172.16.1.11/26
#storage

I changed bridge vmbr23 (and its interfaces) to OVS without any RSTP options yet:
Code:
auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr23
    ovs_options tag=1 vlan_mode=native-untagged
#lan

iface enp1s0 inet manual
#wan

iface enp4s0 inet manual
#extra

auto enp3s0
iface enp3s0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr23
    ovs_options tag=12
#iot

auto vlan1
iface vlan1 inet static
    address 172.16.0.11/24
    gateway 172.16.0.1
    ovs_type OVSIntPort
    ovs_bridge vmbr23
    ovs_options tag=1
#main

auto vlan11
iface vlan11 inet static
    address 172.16.1.11/26
    ovs_type OVSIntPort
    ovs_bridge vmbr23
    ovs_options tag=11
#storage

auto vmbr4
iface vmbr4 inet static
    address 10.10.0.2/26
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0
#extra

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
#WAN

auto vmbr23
iface vmbr23 inet manual
    ovs_type OVSBridge
    ovs_ports enp2s0 enp3s0 vlan1 vlan11
#LAN trunk #3 IoT

Logs:
Code:
Apr 02 23:04:27 pvett ovs-vsctl[294882]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- --may-exist add-br vmbr23
Apr 02 23:04:27 pvett kernel: ovs-system: entered promiscuous mode
Apr 02 23:04:27 pvett kernel: No such timeout policy "ovs_test_tp"
Apr 02 23:04:27 pvett kernel: Failed to associated timeout policy `ovs_test_tp'
Apr 02 23:04:27 pvett ovs-vsctl[294928]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- --may-exist add-port vmbr23 enp2s0 -- --if-exists clear port enp2s0 bond_active_slave bond_mode cvlans external_ids lacp mac other_config qos tag trunks vlan_mode -- --if-exists clear interface enp2s0 mtu_request external-ids other_config options -- set Port enp2s0 tag=1 vlan_mode=native-untagged
Apr 02 23:04:27 pvett ovs-vsctl[294968]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- --may-exist add-port vmbr23 enp3s0 -- --if-exists clear port enp3s0 bond_active_slave bond_mode cvlans external_ids lacp mac other_config qos tag trunks vlan_mode -- --if-exists clear interface enp3s0 mtu_request external-ids other_config options -- set Port enp3s0 tag=12
Apr 02 23:04:27 pvett ovs-vsctl[295007]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- --may-exist add-port vmbr23 vlan1 -- --if-exists clear port vlan1 bond_active_slave bond_mode cvlans external_ids lacp mac other_config qos tag trunks vlan_mode -- --if-exists clear interface vlan1 mtu_request external-ids other_config options -- set Port vlan1 tag=1 -- set Interface vlan1 type=internal
Apr 02 23:04:27 pvett ovs-vsctl[295052]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- --may-exist add-port vmbr23 vlan11 -- --if-exists clear port vlan11 bond_active_slave bond_mode cvlans external_ids lacp mac other_config qos tag trunks vlan_mode -- --if-exists clear interface vlan11 mtu_request external-ids other_config options -- set Port vlan11 tag=11 -- set Interface vlan11 type=internal
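
For completeness, once local console access is available, the VLAN state OVS actually applied can be checked with the standard openvswitch-switch tools (nothing Proxmox-specific):
Code:
ovs-vsctl show                  # bridges, ports and their tag/vlan_mode
ovs-vsctl list port enp2s0      # effective tag, trunks and vlan_mode of the LAN uplink
ovs-appctl fdb/show vmbr23      # which MACs were learned, on which port and VLAN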

EDIT:
Could this be related to the settings of the LAN NIC and its native VLAN tagging? This interface is used by OPNsense, which doesn't tag that particular network segment. But then again, none of the other VLANs are reachable either...
Code:
auto enp2s0
iface enp2s0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr23
    ovs_options tag=1 vlan_mode=native-untagged
#lan
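
One way to answer this without guessing (assuming tcpdump is installed on the host) would be to look at whether the LAN frames actually arrive untagged on the wire:
Code:
# -e prints the link-level header, so the presence or absence of 802.1Q tags is visible
tcpdump -eni enp2s0 -c 20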

EDIT 2:
Adding a rollback script, maybe it will be useful for somebody:
Code:
#!/bin/bash

# Define backup file location, new interfaces file, and log file
BACKUP_FILE="/etc/network/interfaces.backup"
NEW_INTERFACES_FILE="/etc/network/interfaces.new"
LOG_FILE="/var/log/ifreload.log"

# Backup the current interfaces file
cp /etc/network/interfaces $BACKUP_FILE

# Copy the new interfaces file over the existing one
cp $NEW_INTERFACES_FILE /etc/network/interfaces

# Apply the network changes and write output to the log file
ifreload -a &>> $LOG_FILE

# Set a timeout for user confirmation (in seconds)
TIMEOUT=30

# Function to rollback changes and schedule a cancelable reboot
rollback() {
    echo "Rolling back network changes..." &>> $LOG_FILE
    cp $BACKUP_FILE /etc/network/interfaces
    ifreload -a &>> $LOG_FILE
    echo "Network changes have been reverted." &>> $LOG_FILE
    # Schedule a reboot in 2 minutes
    echo "Scheduling a reboot in 2 minutes. Run 'shutdown -c' to cancel." &>> $LOG_FILE
    shutdown -r +2 &
    # Wait for user input to cancel the reboot
    read -t 60 -p "Press Enter within 1 minute to cancel the reboot: " cancel_reboot
    if [ $? -eq 0 ]; then
        # Cancel the scheduled reboot
        shutdown -c
        echo "Reboot canceled." &>> $LOG_FILE
    else
        echo "Reboot will proceed." &>> $LOG_FILE
    fi
}

# Ask for user confirmation to keep the changes
read -t $TIMEOUT -p "Press Enter within $TIMEOUT seconds to confirm the changes: " confirmation
# Check if the user has provided confirmation
if [ $? -ne 0 ]; then
    # No confirmation received, rollback and schedule a reboot
    rollback
else
    echo "Changes confirmed. No rollback needed." &>> $LOG_FILE
fi
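
One caveat about running it (an assumption about the setup): the confirmation relies on read from an interactive terminal, so it is best started inside a screen/tmux session that survives a dropped SSH connection; the session keeps running, the read simply times out and the rollback kicks in. For example:
Code:
# /root/apply-net.sh is just a placeholder name for the script above
tmux new -s netmig
bash /root/apply-net.sh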
 
Small update (if somebody reads it ;)).
When I test the following config:
Code:
auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr23
    ovs_options other_config:rstp-path-cost=20000 other_config:rstp-port-admin-edge=false other_config:rstp-port-auto-edge=false other_config:rstp-enable=true other_config:rstp-port-mcheck=true vlan_mode=native-tagged tag=1
#lan

iface enp1s0 inet manual
#wan

iface enp4s0 inet manual
#extra

auto enp3s0
iface enp3s0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr23
    ovs_options tag=12
#iot

auto vlan1
iface vlan1 inet static
    address 172.16.0.11/24
    gateway 172.16.0.1
    ovs_type OVSIntPort
    ovs_bridge vmbr23
    ovs_options tag=1
#main

auto vlan11
iface vlan11 inet static
    address 172.16.1.11/26
    ovs_type OVSIntPort
    ovs_bridge vmbr23
    ovs_options tag=11
#storage

auto vmbr4
iface vmbr4 inet static
    address 10.10.0.2/26
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0
#extra

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
#WAN

auto vmbr23
iface vmbr23 inet manual
    ovs_type OVSBridge
    ovs_ports enp2s0 enp3s0 vlan1 vlan11
    ovs_options other_config:rstp_enable=true other_config:rstp-priority=0 other_config:rstp-forward-delay=4 other_config:rstp-max-age=6
#LAN trunk and #3 for IoT
using
Code:
ifreload -a -n -d
I receive the following errors four times, for the vlan1 and vlan11 internal ports:
Code:
info: DRY-RUN: writing "" to file /sys/class/net/vlan1/ifalias
info: DRY-RUN: executing /sbin/sysctl net.mpls.conf.vlan1.input=0
info: DRY-RUN: executing /bin/ip -force -batch - [link set dev vlan1 down
link set dev vlan1 addrgenmode eui64
link set dev vlan1 up]
debug:   File "/usr/sbin/ifreload", line 135, in <module>
    sys.exit(main())
   File "/usr/sbin/ifreload", line 123, in main
    return stand_alone()
   File "/usr/sbin/ifreload", line 103, in stand_alone
    status = ifupdown2.main()
   File "/usr/share/ifupdown2/ifupdown/main.py", line 77, in main
    self.handlers.get(self.op)(self.args)
   File "/usr/share/ifupdown2/ifupdown/main.py", line 284, in run_reload
    ifupdown_handle.reload(['pre-up', 'up', 'post-up'],
   File "/usr/share/ifupdown2/ifupdown/ifupdownmain.py", line 2447, in reload
    self._reload_default(*args, **kargs)
   File "/usr/share/ifupdown2/ifupdown/ifupdownmain.py", line 2425, in _reload_default
    ret = self._sched_ifaces(new_filtered_ifacenames, upops,
   File "/usr/share/ifupdown2/ifupdown/ifupdownmain.py", line 1566, in _sched_ifaces
    ifaceScheduler.sched_ifaces(self, ifacenames, ops,
   File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 595, in sched_ifaces
    cls.run_iface_list(ifupdownobj, run_queue, ops,
   File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 325, in run_iface_list
    cls.run_iface_graph(ifupdownobj, ifacename, ops, parent,
   File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 315, in run_iface_graph
    cls.run_iface_list_ops(ifupdownobj, ifaceobjs, ops)
   File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 188, in run_iface_list_ops
    cls.run_iface_op(ifupdownobj, ifaceobj, op,
   File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 106, in run_iface_op
    m.run(ifaceobj, op,
   File "/usr/share/ifupdown2/addons/address.py", line 1606, in run
    op_handler(self, ifaceobj,
   File "/usr/share/ifupdown2/addons/address.py", line 1163, in _pre_up
    self.process_addresses(ifaceobj, ifaceobj_getfunc, force_reapply)
   File "/usr/share/ifupdown2/addons/address.py", line 683, in process_addresses
    self.__add_ip_addresses_with_attributes(ifaceobj, ifname, user_config_ip_addrs_list)
   File "/usr/share/ifupdown2/addons/address.py", line 592, in __add_ip_addresses_with_attributes
    self.log_error(str(e), ifaceobj, raise_error=False)
   File "/usr/share/ifupdown2/ifupdownaddons/modulebase.py", line 121, in log_error
    stack = traceback.format_stack()
debug: Traceback (most recent call last):
  File "/usr/share/ifupdown2/addons/address.py", line 590, in __add_ip_addresses_with_attributes
    self.netlink.addr_add(ifname, ip, nodad=nodad)
  File "/usr/share/ifupdown2/lib/dry_run.py", line 53, in __call__
    self.f(*(self.c(),) + arg, **kwargs)
TypeError: NetlinkListenerWithCache.addr_add_dry_run() got an unexpected keyword argument 'nodad'
error: NetlinkListenerWithCache.addr_add_dry_run() got an unexpected keyword argument 'nodad'
debug: vlan1: up : running module dhcp
debug: vlan1: up : running module address
info: DRY-RUN: executing /bin/ip route replace default via 172.16.0.1 proto kernel dev vlan1 onlink
debug: vlan1: up : running module addressvirtual
error: ifname vlan1 not present in cache
Code:
info: DRY-RUN: writing "" to file /sys/class/net/vlan11/ifalias
info: DRY-RUN: executing /sbin/sysctl net.mpls.conf.vlan11.input=0
info: DRY-RUN: executing /bin/ip -force -batch - [link set dev vlan11 down
link set dev vlan11 addrgenmode eui64
link set dev vlan11 up]
debug:   File "/usr/sbin/ifreload", line 135, in <module>
    sys.exit(main())
   File "/usr/sbin/ifreload", line 123, in main
    return stand_alone()
   File "/usr/sbin/ifreload", line 103, in stand_alone
    status = ifupdown2.main()
   File "/usr/share/ifupdown2/ifupdown/main.py", line 77, in main
    self.handlers.get(self.op)(self.args)
   File "/usr/share/ifupdown2/ifupdown/main.py", line 284, in run_reload
    ifupdown_handle.reload(['pre-up', 'up', 'post-up'],
   File "/usr/share/ifupdown2/ifupdown/ifupdownmain.py", line 2447, in reload
    self._reload_default(*args, **kargs)
   File "/usr/share/ifupdown2/ifupdown/ifupdownmain.py", line 2425, in _reload_default
    ret = self._sched_ifaces(new_filtered_ifacenames, upops,
   File "/usr/share/ifupdown2/ifupdown/ifupdownmain.py", line 1566, in _sched_ifaces
    ifaceScheduler.sched_ifaces(self, ifacenames, ops,
   File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 595, in sched_ifaces
    cls.run_iface_list(ifupdownobj, run_queue, ops,
   File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 325, in run_iface_list
    cls.run_iface_graph(ifupdownobj, ifacename, ops, parent,
   File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 315, in run_iface_graph
    cls.run_iface_list_ops(ifupdownobj, ifaceobjs, ops)
   File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 188, in run_iface_list_ops
    cls.run_iface_op(ifupdownobj, ifaceobj, op,
   File "/usr/share/ifupdown2/ifupdown/scheduler.py", line 106, in run_iface_op
    m.run(ifaceobj, op,
   File "/usr/share/ifupdown2/addons/address.py", line 1606, in run
    op_handler(self, ifaceobj,
   File "/usr/share/ifupdown2/addons/address.py", line 1163, in _pre_up
    self.process_addresses(ifaceobj, ifaceobj_getfunc, force_reapply)
   File "/usr/share/ifupdown2/addons/address.py", line 683, in process_addresses
    self.__add_ip_addresses_with_attributes(ifaceobj, ifname, user_config_ip_addrs_list)
   File "/usr/share/ifupdown2/addons/address.py", line 592, in __add_ip_addresses_with_attributes
    self.log_error(str(e), ifaceobj, raise_error=False)
   File "/usr/share/ifupdown2/ifupdownaddons/modulebase.py", line 121, in log_error
    stack = traceback.format_stack()
debug: Traceback (most recent call last):
  File "/usr/share/ifupdown2/addons/address.py", line 590, in __add_ip_addresses_with_attributes
    self.netlink.addr_add(ifname, ip, nodad=nodad)
  File "/usr/share/ifupdown2/lib/dry_run.py", line 53, in __call__
    self.f(*(self.c(),) + arg, **kwargs)
TypeError: NetlinkListenerWithCache.addr_add_dry_run() got an unexpected keyword argument 'nodad'
error: NetlinkListenerWithCache.addr_add_dry_run() got an unexpected keyword argument 'nodad'
debug: vlan11: up : running module dhcp
debug: vlan11: up : running module address
debug: vlan11: up : running module addressvirtual
error: ifname vlan11 not present in cache
(The remaining two repetitions are identical to the first vlan1 block above.)
What is this "unexpected keyword argument 'nodad'" about?
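
My guess from the traceback (an assumption, not a confirmed bug): the failure happens inside the dry-run wrapper (lib/dry_run.py), whose addr_add_dry_run() doesn't accept the nodad keyword that the real addr_add() is called with, so it may only show up with -n and say nothing about the config itself. For reference, the parsed config and the installed versions can be checked with:
Code:
ifquery -a                                        # print the interfaces file as ifupdown2 parses it
dpkg -l ifupdown2 openvswitch-switch | grep ^ii   # installed versions, for reference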
 
I would like to migrate two NICs - i226-V (facing the LAN) - to OVS, mainly for the RSTP functionality.
The bridge in question is vmbr23 and is used for all VMs, including the OPNsense VM, where it serves as the LAN interface.
Why do you need RSTP? If the other side is connected to an STP-capable switch, the switch will handle STP, so you don't need to run STP on the Proxmox side (as long as both interfaces are in the same bridge, the switch detects the loop and blocks one of the two interfaces).
If you want to use all the interfaces at the same time, create a bond interface (e.g. with balance-rr), then add that bond interface to the bridge and the VLANs.

I recommend handling VLANs inside the VM's interface, not on the Proxmox host's "inline bridge".

Moreover, when using bridges, just add the interfaces or subinterfaces to the bridge (do not put VLAN tags on the bridge itself - it is possible, but it will break everything and lead to an unstable network).

Code:
------- Example1-------
# Trunk interface
bond0
 > eth0
 > eth1

vmbr0
 > bond0 (bridge port: vmbr0)
-----------------------------

------- Example2 -------
# VLAN interface tag: 110
vmbr110
 > bond0.110 (bridge port: vmbr110)
-----------------------------

------- Example3 -------
# Trunk interface
vmbr0
 > eth0 (bridge port: vmbr0)
 > eth1 (bridge port: vmbr0)
-----------------------------

------- Example4 -------
# VLAN interface tag: 110
vmbr110
 > eth0.110 (bridge port: vmbr110)
 > eth1.110 (bridge port: vmbr110)
-----------------------------

------- Example5 -------
vmbr0.110
 > eth0
 > eth1
-----------------------------
In Example3, if you want to reach VLAN 110, you need to add a VLAN interface inside the VM.
In Example4, if you want to reach VLAN 110, you don't need a VLAN interface inside the VM (it is directly connected to the VLAN).
Example5 is totally wrong (never use it).
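
As a concrete illustration, Example2 written out in /etc/network/interfaces style would look roughly like this (a sketch; names are placeholders and it assumes bond0 already exists):
Code:
auto bond0.110
iface bond0.110 inet manual

auto vmbr110
iface vmbr110 inet manual
    bridge-ports bond0.110
    bridge-stp off
    bridge-fd 0
#VLAN 110 - VMs on this bridge attach untagged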

The other info is confusing; I don't understand what you want to achieve.

Original questions:
> What exactly do you want? (interface aggregation?)
> What VLANs do you need?
 
The other info is confusing; I don't understand what you want to achieve.
So, my LAN looks as follows:
(attachment: LAN topology diagram)
with the addition of switch #2 between the Synology (plus other devices) and the main switch. The diagram above doesn't show the real bridge numbers.
The reasons I want to try out OVS are:
  • After migrating from ESXi to PVE, I started to see in/out errors on the OPNsense interfaces
  • the switches I use are UniFi switches, and their app doesn't show any hosts/VMs running on PVE2
  • and to learn OVS
Here is the PVE host (not PVE2 - that's where I have the issue) after migrating it to OVS:
Code:
auto lo
iface lo inet loopback

auto eno2
iface eno2 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0
    ovs_options other_config:rstp-port-admin-edge=false other_config:rstp-port-mcheck=true other_config:rstp-path-cost=20000 tag=1 vlan_mode=native-tagged other_config:rstp-port-auto-edge=false other_config:rstp-enable=true
#Trunk - lan

auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
#extra

auto eno3
iface eno3 inet manual
#Lagg1 - Truenas

auto eno4
iface eno4 inet manual
#Lagg2 - Truenas

auto vlan1
iface vlan1 inet static
    address 172.16.0.8/24
    gateway 172.16.0.1
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=1
#LAN

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno2 vlan1
    ovs_options other_config:rstp_enable=true other_config:rstp-priority=8192 other_config:rstp-forward-delay=4 other_config:rstp-max-age=6
#Trunk

auto vmbr1
iface vmbr1 inet static
    address 10.10.0.1/24
    ovs_type OVSBridge
    ovs_ports eno1
#direct access

auto vmbr10
iface vmbr10 inet static
    address 10.10.10.1/24
    ovs_type OVSBridge
#extra

auto vmbr2
iface vmbr2 inet static
    address 10.55.0.1/16
    ovs_type OVSBridge
    ovs_mtu 9000
#Storage Net
The error count has dropped significantly, and the traffic shown in the OPNsense graphs is also greatly reduced.

The PVE2 host's main task is to run the OPNsense VM plus the UniFi controller in an LXC.
I would like to set OVS on PVE2 with priority "0"; the downstream-connected UniFi main switch has "4096" and the second switch has "8192", the same as OVS on PVE (#1).

I must be making some mistake (maybe RSTP blocks a port?), but I'm currently away and don't want to lock myself out trying various options. Once I'm back, I'll connect directly to PVE2 and see what blocks communication between the LAN and PVE2 while in "OVS mode"...
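
For reference, once local access is available the elected RSTP roles and port states can be read directly from OVS, which should show whether a port is being put into the discarding state (standard OVS commands):
Code:
ovs-appctl rstp/show vmbr23                              # root bridge, port roles and states
ovs-vsctl get bridge vmbr23 rstp_enable other_config     # what is actually configured on the bridge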
 
I must be making some mistake (maybe RSTP blocks a port?)... Once I'm back, I'll connect directly to PVE2 and see what blocks communication between the LAN and PVE2 while in "OVS mode"...

With the current topology you are facing the "flood" problem: connecting access and trunk links to the same host causes the switch to drop packets on the trunk ports. You can fix it on the switch by filtering out all VLANs on the trunk link except the ones you are actually using (for the VMs).

The "standard" solution: use one logical link to the servers and use bridges toward the VMs
(just like when connecting switches to switches - you don't link every VLAN with a separate cable, you use one logical trunk link).

EXAMPLE:
(attachment: example topology diagram)


Code:
PVE-1 / PVE-2

$> apt-get install ifenslave


/etc/network/interfaces
########################################################################

auto eno1
iface eno1 inet manual
    bond-master bond0
    bond-mode balance-rr

auto eno2
iface eno2 inet manual
    bond-master bond0
    bond-mode balance-rr

auto eno3
iface eno3 inet manual
    bond-master bond0
    bond-mode balance-rr

auto eno4
iface eno4 inet manual
    bond-master bond0
    bond-mode balance-rr

########################################################################

auto bond0
iface bond0 inet manual
    bond-primary eno1
    bond-slaves eno1 eno2 eno3 eno4
    bond-mode balance-rr
    bond-miimon 100

auto bond0.10
iface bond0.10 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr3

auto bond0.99
iface bond0.99 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto bond0.100
iface bond0.100 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

########################################################################

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0.99
 
auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports bond0.100
 
auto vmbr2
iface vmbr2 inet manual
    ovs_type OVSBridge

auto vmbr3
iface vmbr3 inet manual
    ovs_type OVSBridge
    ovs_ports bond0.10
 
########################################################################
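
After bringing such a bond up, its state can be verified from the standard Linux bonding status file (this is a kernel bond, not an OVS bond):
Code:
cat /proc/net/bonding/bond0    # bonding mode, MII status and the list of active slaves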
 
The "standard" solution: use one logical link to the servers and use bridges toward the VMs...

Thanks very much! I'm away and I'll be back in a week - I'll let you know how it works...
 
I have an issue with bonding and OVS. Once I set up bonding, everything works fine except the PVE management interfaces (vlan1 and vlan11). None of their addresses, whether in the default VLAN (1) or outside of it, are accessible, while the VMs are accessible in all VLANs. Only the management interface in the separate bridge vmbr1 works - and it is the one without bonding... Both the VMs and the host use the same bridge, vmbr0. Here is my config:
Code:
auto lo
iface lo inet loopback

auto enbak1
iface enbak1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
#backup access

auto enlag3
iface enlag3 inet manual
#Lagg1 - PVE

auto enlag4
iface enlag4 inet manual
#Lagg2 - PVE

auto enlan2
iface enlan2 inet manual
#Lagg3 - PVE

auto vlan1
iface vlan1 inet static
    address 172.16.0.8/24
    gateway 172.16.0.1
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=1
#LAN

auto vlan11
iface vlan11 inet static
    address 172.16.1.8/26
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=11
#storage

auto bond0
iface bond0 inet manual
    ovs_bonds enlan2 enlag3 enlag4
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options bond_mode=balance-tcp other_config:rstp-path-cost=10000 tag=1 vlan_mode=native-tagged other_config:rstp-port-auto-edge=false lacp=active other_config:lacp-time=fast other_config:rstp-port-mcheck=true other_config:rstp-enable=true other_config:rstp-port-admin-edge=false
#LACP - PVE

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports vlan1 bond0 vlan11
    up ovs-vsctl set Bridge ${IFACE} rstp_enable=true other_config:rstp-priority=8192 other_config:rstp-forward-delay=4 other_config:rstp-max-age=6
#Trunk - all networks

auto vmbr1
iface vmbr1 inet static
    address 10.10.0.1/24
    ovs_type OVSBridge
    ovs_ports enbak1
#backup access

auto vmbr10
iface vmbr10 inet static
    address 10.10.10.1/24
    ovs_type OVSBridge
#Extra

auto vmbr2
iface vmbr2 inet static
    address 10.55.0.1/16
    ovs_type OVSBridge
    ovs_mtu 9000
#Storage Net

If I remove all RSTP options, the result is still the same.
For now I have gone back to Linux bridges with bonding and everything is fine. So why doesn't OVS work well with a bond interface?

EDIT:
The VMs' interfaces are tagged by PVE.
I tried tagging and untagging the PVE interfaces (with both vlan_mode settings) and the bond link, but it doesn't help...
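
For the next attempt, the OVS side of the bond can be inspected while the problem is happening (standard ovs-appctl commands - just a suggestion for narrowing it down, not a diagnosis):
Code:
ovs-appctl bond/show bond0     # bond mode, LACP status and active members as OVS sees them
ovs-appctl lacp/show bond0     # negotiated LACP state per member
ovs-appctl fdb/show vmbr0      # whether the host's vlan1/vlan11 MACs are learned and in which VLAN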
 
