PVE 7.0-11: OVS Bond/Bridge/Ports disabled/failing after server reboot

formatter

Hello!

After a server/host reboot, the OVS bond, bridge, and ports are in a disabled state and can't be enabled.
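
For context, the OVS and kernel view of those devices can be checked after such a reboot with something like this (a diagnostic sketch, commands only; ovs-vsctl ships with openvswitch-switch):

Code:
# Is the OVS daemon running at all after the reboot?
systemctl status openvswitch-switch
# Which bridges/bonds/ports does OVS itself know about?
ovs-vsctl show
# Kernel view of all links, brief format
ip -br link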

Here is some info:

[Screenshot: PVE.png, showing the OVS bridge, bond, and ports as disabled in the GUI]


Code:
root@pve:~# apt install openvswitch-switch
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
openvswitch-switch is already the newest version (2.15.0+ds1-2).
The following packages were automatically installed and are no longer required:
  bsdmainutils libatomic1 libdouble-conversion1 libllvm7
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@pve:~# dpkg -l openvswitch-*
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                       Version      Architecture Description
+++-==========================-============-============-===================================
ii  openvswitch-common         2.15.0+ds1-2 amd64        Open vSwitch common components
ii  openvswitch-switch         2.15.0+ds1-2 amd64        Open vSwitch switch implementations
un  openvswitch-test           <none>       <none>       (no description available)
un  openvswitch-testcontroller <none>       <none>       (no description available)
un  openvswitch-vtep           <none>       <none>       (no description available)
root@pve:~#


Code:
root@pve:/etc/network# more interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto enp1s0f0
iface enp1s0f0 inet manual

auto enp1s0f1
iface enp1s0f1 inet manual

allow-vmbr1 vlan010_internal
iface vlan010_internal inet static
        address 192.168.10.0/24
        ovs_type OVSIntPort
        ovs_bridge vmbr1
        ovs_options tag=10

allow-vmbr1 vlan050_home
iface vlan050_home inet static
        address 192.168.50.0/24
        ovs_type OVSIntPort
        ovs_bridge vmbr1
        ovs_options tag=50

allow-vmbr1 vlan060_development
iface vlan060_development inet static
        address 192.168.60.0/28
        ovs_type OVSIntPort
        ovs_bridge vmbr1
        ovs_options tag=60

allow-vmbr1 bond0
iface bond0 inet manual
        ovs_bonds enp1s0f0 enp1s0f1
        ovs_type OVSBond
        ovs_bridge vmbr1
        ovs_options lacp=active bond_mode=balance-tcp other_config:lacp-time=fast
       
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.50/24
        gateway 192.168.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

allow-ovs vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 vlan010_internal vlan050_home vlan060_development

root@pve:/etc/network#


Can anyone help?

Many thanks in advance!
 
Some additional input (as requested in another network issue thread):

Code:
root@pve:~# ip a && ip r
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:1b:21:31:13:70 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21b:21ff:fe31:1370/64 scope link
       valid_lft forever preferred_lft forever
3: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:1b:21:31:13:71 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21b:21ff:fe31:1371/64 scope link
       valid_lft forever preferred_lft forever
4: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether a8:a1:59:16:98:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:e3:f1:ed:ed:c2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.50/24 brd 192.168.0.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::ce3:f1ff:feed:edc2/64 scope link
       valid_lft forever preferred_lft forever
9: veth10000i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr10000i0 state UP group default qlen 1000
    link/ether fe:10:8d:19:ac:76 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: fwbr10000i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9e:2b:28:c6:e0:1d brd ff:ff:ff:ff:ff:ff
11: fwpr10000p0@fwln10000i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether a6:68:72:67:dc:1a brd ff:ff:ff:ff:ff:ff
12: fwln10000i0@fwpr10000p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr10000i0 state UP group default qlen 1000
    link/ether b2:1f:9a:2c:fd:0f brd ff:ff:ff:ff:ff:ff
default via 192.168.0.1 dev vmbr0 onlink
192.168.0.0/24 dev vmbr0 proto kernel scope link src 192.168.0.50

Code:
root@pve:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-3-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.4: 6.4-5
pve-kernel-5.11.22-3-pve: 5.11.22-6
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-10
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
openvswitch-switch: 2.15.0+ds1-2
proxmox-backup-client: 2.0.8-1
proxmox-backup-file-restore: 2.0.8-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
root@pve:~#


BTW, the containers using the bond don't come up either (as a result of the above issue, I assume):

Code:
run_buffer: 316 Script exited with status 2
lxc_create_network_priv: 3418 No such device - Failed to create network device
lxc_spawn: 1844 Failed to create the network
__lxc_start: 2073 Failed to spawn container "10500"
TASK ERROR: startup for container '10500' failed
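
The "No such device" line suggests the container's bridge (vmbr1) simply doesn't exist at the time the container starts; that can be double-checked with, for example:

Code:
# Does the kernel know about the bridge?
ip link show vmbr1
# Is the bridge present in the OVS database?
ovs-vsctl list-br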
 
Have you tried with ifupdown2? (It's the default with a fresh Proxmox 7 install.)

The syntax of /etc/network/interfaces is a little different (no more allow-ovs), but the ifupdown2 install should convert it out of the box.
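
For example, the stanzas from the config above would change roughly like this (only the allow-* prefixes differ; the iface bodies stay the same):

Code:
# ifupdown + openvswitch-switch syntax:
allow-ovs vmbr1
allow-vmbr1 bond0
allow-vmbr1 vlan010_internal

# ifupdown2 syntax:
auto vmbr1
auto bond0
auto vlan010_internal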
 
Hi Spirit,

Proxmox 7 wasn't installed from scratch; I upgraded from PVE 6.4. I actually observed the issue above already with PVE 6.4, while it wasn't present with earlier PVE 6.x versions. As some other threads were suggesting to use the latest openvswitch packages from Proxmox, not the native Debian ones, I wanted to combine this with the migration to PVE 7, in order to troubleshoot only on the newest available version.

The node was initially installed under PVE 6.1, for which OVS was recommended. With my previous hardware I was running PVE 5.x and ifupdown with no problems.

Can I easily change to ifupdown2 on the existing NICs, etc. and assign this to the containers without further tweaks?

THX
 
Yes, simply "apt install ifupdown2".

It should create a new /etc/network/interfaces.new config file (just verify that everything is ok; "allow-ovs ..." should be converted to "auto ..."), then reboot or use "reload network configuration" in the GUI.
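
In other words, something like this (the diff step is only there to sanity-check the generated file before applying it):

Code:
apt install ifupdown2
diff -u /etc/network/interfaces /etc/network/interfaces.new
# if the changes look good, apply them (roughly what the GUI button does):
mv /etc/network/interfaces.new /etc/network/interfaces
ifreload -a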
 
I performed the install, but got the following error:

Code:
root@pve:~# apt-get install ifupdown2
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ifupdown2 is already the newest version (3.1.0-1+pmx3).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] Y
Setting up ifupdown2 (3.1.0-1+pmx3) ...

network config changes have been detected for ifupdown2 compatibility.
Saved in /etc/network/interfaces.new for hot-apply or next reboot.

Reloading network config on first install
warning: /etc/network/interfaces: line25: vlan010_internal: interface name too long
warning: /etc/network/interfaces: line39: vlan060_development: interface name too long
dpkg: error processing package ifupdown2 (--configure):
 installed ifupdown2 package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 ifupdown2
E: Sub-process /usr/bin/dpkg returned an error code (1)

Anyhow, the interfaces.new was created with the amendments anticipated above. Unfortunately, I lost the remote connection when rebooting the server and need to get a screen/keyboard on the console first... I will follow up once done.
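
Presumably the warnings come from the kernel's 15-character limit on interface names: vlan010_internal (16 characters) and vlan060_development (19) exceed it, while vlan050_home (12) does not and triggers no warning. Renaming the OVSIntPorts to something shorter, e.g. with hypothetical names like this one, should avoid them:

Code:
auto vlan10int
iface vlan10int inet static
        address 192.168.10.0/24
        ovs_type OVSIntPort
        ovs_bridge vmbr1
        ovs_options tag=10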

Cheers
 

Did you get your issue resolved? I am wondering because we are having some issues getting OVS to work on a fresh PVE 7 install. Perhaps your solution would assist us.
 
Hi Weehooey,

unfortunately I didn't get it resolved... I tried to simplify the /etc/network/interfaces file down to only the onboard/management port, but even that didn't come up after migrating to ifupdown2. Running "ip a" on the console showed that the interface didn't pick up an IP address.
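
A minimal interfaces file of that kind, reconstructed from the management-port stanzas of the original config above, would look roughly like this:

Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.50/24
        gateway 192.168.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0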

Since I want to optimize my server anyway, I decided to install Proxmox from scratch and am currently awaiting some hardware components. I hope a fresh install that sets up default ifupdown2 networking will solve the problem.

Cheers
 