Warning - openvswitch internal ports are gone after the update to OVS 2.5.0

udo
Hi,
this morning all of our OVSIntPort interfaces were lost, due to the openvswitch update from '2.3.2-3' to '2.5.0-1'.

We provision openvswitch with Puppet:
Code:
package { 'openvswitch-switch':
  ensure => latest,
}
and due to the update last night all internal ports are down:
Code:
May 20 03:01:38 prox-02 systemd-timesyncd[1350]: interval/delta/delay/jitter/drift 2048s/-0.000s/0.021s/0.001s/+44ppm
May 20 03:01:41 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:3] Unknown lvalue 'f' in section 'Unit'
May 20 03:01:41 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:4] Unknown lvalue 'e.keyring,if' in section 'Unit'
May 20 03:01:42 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:3] Unknown lvalue 'f' in section 'Unit'
May 20 03:01:42 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:4] Unknown lvalue 'e.keyring,if' in section 'Unit'
May 20 03:01:42 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:3] Unknown lvalue 'f' in section 'Unit'
May 20 03:01:42 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:4] Unknown lvalue 'e.keyring,if' in section 'Unit'
May 20 03:01:42 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:3] Unknown lvalue 'f' in section 'Unit'
May 20 03:01:42 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:4] Unknown lvalue 'e.keyring,if' in section 'Unit'
May 20 03:01:42 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:3] Unknown lvalue 'f' in section 'Unit'
May 20 03:01:42 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:4] Unknown lvalue 'e.keyring,if' in section 'Unit'
May 20 03:01:42 prox-02 systemd[1]: Failed to reset devices.list on /system.slice: Invalid argument
May 20 03:01:42 prox-02 ovs-ctl[19289]: Exiting ovs-vswitchd (1468).
May 20 03:01:42 prox-02 kernel: [396020.392851] device vmbr0 left promiscuous mode
May 20 03:01:42 prox-02 kernel: [396020.430481] device eth1 left promiscuous mode
May 20 03:01:42 prox-02 kernel: [396020.430822] device vlan23 left promiscuous mode
May 20 03:01:43 prox-02 ovs-ctl[19289]: Exiting ovsdb-server (1457).
May 20 03:01:43 prox-02 ovs-ctl[19325]: Backing up database to /etc/openvswitch/conf.db.backup7.6.2-3478940432.
May 20 03:01:43 prox-02 ovs-ctl[19325]: Compacting database.
May 20 03:01:43 prox-02 ovs-ctl[19325]: Converting database schema.
May 20 03:01:43 prox-02 ovs-ctl[19325]: Starting ovsdb-server.
May 20 03:01:43 prox-02 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.12.1
May 20 03:01:43 prox-02 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.5.0 "external-ids:system-id=\"a6e8c250-a978-4bd2-a65d-8247ad4009a3\"" "system-type=\"Debian\"" "system-version=\"8.4-jessie\""
May 20 03:01:43 prox-02 ovs-ctl[19325]: Configuring Open vSwitch system IDs.
May 20 03:01:43 prox-02 kernel: [396020.691145] device vlan23 entered promiscuous mode
May 20 03:01:43 prox-02 kernel: [396020.691342] device eth1 entered promiscuous mode
May 20 03:01:43 prox-02 kernel: [396020.715556] device vmbr0 entered promiscuous mode
May 20 03:01:43 prox-02 ovs-ctl[19325]: Starting ovs-vswitchd.
May 20 03:01:43 prox-02 ovs-ctl[19325]: Enabling remote OVSDB managers.
May 20 03:01:43 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:3] Unknown lvalue 'f' in section 'Unit'
May 20 03:01:43 prox-02 systemd[1]: [/run/systemd/system/410.scope.d/50-Description.conf:4] Unknown lvalue 'e.keyring,if' in section 'Unit'
May 20 03:01:43 prox-02 puppet-agent[18040]: (/Stage[main]/Proxmox/Package[openvswitch-switch]/ensure) ensure changed '2.3.2-3' to '2.5.0-1'
A manual "ifconfig vlan23 172.20.xx.xx netmask 255.255.255.0 up" works fine...
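For reference, instead of a raw ifconfig the lost port can usually be recreated through the normal ifupdown/OVS path. A rough sketch, assuming the vlan23 stanza is still present in /etc/network/interfaces (not verified against 2.5.0):
Code:
# re-run the ifupdown hooks for the OVS internal port
ifdown vlan23 2>/dev/null || true
ifup --allow=vmbr0 vlan23       # OVSIntPorts use "allow-vmbr0", not "auto"

# ...or recreate it directly in OVS and configure the address by hand
# (172.20.xx.xx is the placeholder from above, not a real address)
ovs-vsctl --may-exist add-port vmbr0 vlan23 tag=23 \
    -- set Interface vlan23 type=internal
ip addr add 172.20.xx.xx/24 dev vlan23
ip link set vlan23 up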

Be sure to have console access (or some alternative way in) before you update openvswitch!
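One way to avoid being surprised by an unattended Puppet/apt run is to hold the package until the new version has been tested on one host. A hedged sketch; the Puppet-side equivalent would be pinning ensure to an explicit version instead of latest:
Code:
# keep apt-get upgrade / dist-upgrade from pulling in openvswitch 2.5.0
apt-mark hold openvswitch-switch openvswitch-common
apt-mark showhold                       # verify the hold is in place

# note: "ensure => latest" may still push the upgrade through apt-get
# install; pinning the version in the manifest (e.g. ensure => '2.3.2-3')
# is the more reliable route.

# once the new version has been tested:
# apt-mark unhold openvswitch-switch openvswitch-common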

Udo
 
This explains why my Proxmox VE 4 node lost contact with the network after the upgrade. After finishing the upgrade on the console
I rebooted the server, but the openvswitch ports were still gone.
 
Yes, they are. After further research I found out that the network service somehow didn't start after the reboot.
So the update had disabled the network service. After I started it manually the server had contact with the internet
again. I also had to start several other services, like rpcbind. I wonder if this update error has been reported to the Proxmox team?
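If the units really were left disabled, checking and re-enabling them is straightforward. A sketch, assuming a systemd-based jessie host as in this thread:
Code:
systemctl status networking.service rpcbind.service
systemctl is-enabled networking.service rpcbind.service
systemctl enable networking.service rpcbind.service    # start again at boot
systemctl start networking.service rpcbind.service     # start right now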
 
So even after a reboot any "ovs_type OVSIntPort" entries in /etc/network/interfaces won't come back online? I can handle a temporary outage during an upgrade, not a permanent outage. Any workarounds for starting at boot properly?
 
I don't know. This is what I did, step by step:

1. I disabled the openvswitch settings in the /etc/network/interfaces file and reverted to the default Proxmox setup (a minimal example of such a setup follows after this list).
2. Then I restarted the server. The server still had no contact with the network; only "lo" showed up in the ifconfig output.
3. Then I found out that the networking, rpcbind and other services didn't start after the reboot. I manually started these services and
got the network connection back. Since it was late and the server is used for nightly backups, I haven't investigated the error further.
I haven't got back to re-enabling openvswitch on the server.
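For reference, "default Proxmox setup" here means a plain Linux bridge instead of the OVS bridge. A minimal sketch; the NIC name and addresses are placeholders, not taken from this thread:
Code:
# /etc/network/interfaces - non-OVS fallback
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
  address 192.0.2.10
  netmask 255.255.255.0
  gateway 192.0.2.1
  bridge_ports eth0
  bridge_stp off
  bridge_fd 0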
 

can you post your /etc/network/interfaces ?

I'll try to test on my side

what is the output of :

#systemctl status openvswitch.service
 
Sorry for the confusion, my reply was a question, not a statement. I was planning on testing the upgrade to OVS 2.5 soon and ran across this thread, so I was trying to determine how severe the issue is before proceeding, in order to save myself some stress. I haven't personally hit this yet since I haven't tried :)

If other people have had success with OVS 2.5 and OVSIntPorts still worked for them after the upgrade (or at least after a reboot after the upgrade), then I should be good to go.
 
Hi Spirit,
Halgeir is right - the network doesn't come back after a reboot.

Yesterday I rebooted one of the ovs-2.5 servers (after adding multipath for FC) and it started without network - only lo was enabled.

After some tests and research I found
a) that something is wrong with the ovs db (I removed it), and
b) that networking must be started again after boot to bring the interfaces up again... but this can only be a workaround.
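Purely as a stop-gap (and, as said above, only a workaround), the late start of networking can be automated until the ordering problem itself is fixed. A hedged sketch using /etc/rc.local, which jessie's rc-local.service still runs late in boot:
Code:
#!/bin/sh -e
# /etc/rc.local - temporary workaround only: start networking again late in
# boot, in case systemd dropped networking.service from the boot transaction
# (see the "ordering cycle" messages in the log below).
# "start" is idempotent, so this is harmless if networking already ran.
systemctl start networking.service || true
exit 0
The file has to be executable (chmod +x /etc/rc.local) for rc-local.service to pick it up.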

systemd-analyze critical-chain:
Code:
graphical.target @2min 45.770s
└─multi-user.target @2min 45.770s
  └─pve-manager.service @1min 45.263s +1min 506ms
  └─spiceproxy.service @1min 44.791s +458ms
  └─pveproxy.service @1min 44.187s +570ms
  └─ceph.service @1min 45.504s +1.045s
  └─pve-cluster.service @40.735s +1.358s
  └─rrdcached.service @39.102s +1.592s
  └─basic.target @38.798s
  └─timers.target @38.793s
  └─systemd-tmpfiles-clean.timer @38.793s
  └─sysinit.target @38.772s
  └─networking.service @1min 22.067s +1.032s
  └─openvswitch-switch.service @39.067s +1.073s
  └─basic.target @38.798s
  └─...
Part of the logfiles:
Code:
Jun 02 08:05:06 prox-01 systemd-sysv-generator[493]: Ignoring creation of an alias umountiscsi.service for itself
Jun 02 08:05:06 prox-01 systemd[1]: Cannot add dependency job for unit display-manager.service, ignoring: Unit display-manager.service failed to load: No such file or directory.
Jun 02 08:05:06 prox-01 systemd[1]: Found ordering cycle on basic.target/start
Jun 02 08:05:06 prox-01 systemd[1]: Found dependency on sysinit.target/start
Jun 02 08:05:06 prox-01 systemd[1]: Found dependency on networking.service/start
Jun 02 08:05:06 prox-01 systemd[1]: Found dependency on openvswitch-switch.service/start
Jun 02 08:05:06 prox-01 systemd[1]: Found dependency on basic.target/start
Jun 02 08:05:06 prox-01 systemd[1]: Breaking ordering cycle by deleting job networking.service/start
Jun 02 08:05:06 prox-01 systemd[1]: Job networking.service/start deleted to break ordering cycle starting with basic.target/start

Jun 02 08:05:44 prox-01 openvswitch-switch[6465]: Inserting openvswitch module.
Jun 02 08:05:44 prox-01 kernel: openvswitch: Open vSwitch switching datapath
Jun 02 08:05:44 prox-01 ovs-ctl[6230]: Creating empty database /etc/openvswitch/conf.db.
Jun 02 08:05:44 prox-01 ovsdb-server[6960]: ovs|00002|daemon_unix|EMER|/var/run/openvswitch/ovsdb-server.pid.tmp: fcntl(F_SETLK) failed (Resource temporarily unavailable)
Jun 02 08:05:44 prox-01 ovs-ctl[6230]: ovsdb-server: /var/run/openvswitch/ovsdb-server.pid.tmp: fcntl(F_SETLK) failed (Resource temporarily unavailable)
Jun 02 08:05:44 prox-01 kernel: Process accounting resumed
Jun 02 08:05:44 prox-01 ovs-ctl[6230]: Starting ovsdb-server ... failed!
Jun 02 08:05:44 prox-01 openvswitch-switch[6465]: Starting ovsdb-server.
Jun 02 08:05:44 prox-01 ovs-vsctl[6964]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.12.1
Jun 02 08:05:44 prox-01 ovs-vsctl[6963]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.12.1

Jun 02 08:05:45 prox-01 ovs-vsctl[7008]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.5.0 "external-ids:system-id=\"92e58718-30d9-4579-8ae0-db0325a2d041\"" "system
Jun 02 08:05:45 prox-01 ovs-vsctl[7009]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.5.0 "external-ids:system-id=\"92e58718-30d9-4579-8ae0-db0325a2d041\"" "system
Jun 02 08:05:45 prox-01 ovs-ctl[6230]: Configuring Open vSwitch system IDs.
Jun 02 08:05:45 prox-01 openvswitch-switch[6465]: Configuring Open vSwitch system IDs.
Jun 02 08:05:45 prox-01 ovs-vswitchd[7019]: ovs|00002|daemon_unix|EMER|/var/run/openvswitch/ovs-vswitchd.pid.tmp: fcntl(F_SETLK) failed (Resource temporarily unavailable)
Jun 02 08:05:45 prox-01 openvswitch-switch[6465]: ovs-vswitchd: /var/run/openvswitch/ovs-vswitchd.pid.tmp: fcntl(F_SETLK) failed (Resource temporarily unavailable)
Jun 02 08:05:45 prox-01 openvswitch-switch[6465]: Starting ovs-vswitchd ... failed!
Jun 02 08:05:45 prox-01 openvswitch-switch[6465]: Enabling remote OVSDB managers.

Jun 02 08:05:45 prox-01 ovs-ctl[6230]: Starting ovs-vswitchd.
Jun 02 08:05:45 prox-01 ovs-ctl[6230]: Enabling remote OVSDB managers.

Jun 02 08:05:47 prox-01 corosync[7200]: [TOTEM ] The network interface is down.
Jun 02 08:05:47 prox-01 corosync[7200]: [SERV  ] Service engine loaded: corosync configuration map access [0]

########## login and do a "service networking start" ######
Jun  2 08:05:58 prox-01 login[6997]: pam_unix(login:session): session opened for user root by LOGIN(uid=0)
Jun  2 08:05:58 prox-01 login[9228]: ROOT LOGIN  on '/dev/tty1'
############################################################

Jun 02 08:06:27 prox-01 kernel: ixgbe 0000:04:00.0 eth0: detected SFP+: 3
Jun 02 08:06:27 prox-01 kernel: igb 0000:06:00.0 eth2: igb: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Jun 02 08:06:27 prox-01 ovs-vsctl[9756]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --may-exist add-br vmbr0 --
Jun 02 08:06:27 prox-01 kernel: device ovs-system entered promiscuous mode
Jun 02 08:06:27 prox-01 kernel: device vmbr0 entered promiscuous mode
Jun 02 08:06:27 prox-01 ovs-vsctl[9794]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --may-exist add-port vmbr0 eth1 --
Jun 02 08:06:27 prox-01 kernel: device eth1 entered promiscuous mode
Jun 02 08:06:27 prox-01 kernel: ixgbe 0000:04:00.0 eth0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
Jun 02 08:06:27 prox-01 kernel: ixgbe 0000:04:00.1: registered PHC device on eth1
Jun 02 08:06:27 prox-01 kernel: IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
Jun 02 08:06:27 prox-01 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jun 02 08:06:27 prox-01 sshd[6447]: Received SIGHUP; restarting.
Jun 02 08:06:27 prox-01 sshd[6447]: Server listening on 0.0.0.0 port 22.
Jun 02 08:06:27 prox-01 sshd[6447]: Server listening on :: port 22.
Jun 02 08:06:27 prox-01 kernel: ixgbe 0000:04:00.1 eth1: detected SFP+: 4
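The important lines above are the "Found ordering cycle" / "Job networking.service/start deleted" ones: systemd resolved a dependency loop between basic.target, networking.service and openvswitch-switch.service by dropping networking.service from the boot transaction, which is why everything except lo stays down. A sketch of standard commands to inspect the ordering on an affected host:
Code:
journalctl -b | grep -i "ordering cycle"      # did it happen this boot?
systemctl show -p After -p Before -p Requires networking.service
systemctl show -p After -p Before -p Requires openvswitch-switch.service
systemctl list-dependencies --reverse openvswitch-switch.service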
version:
Code:
root@prox-01:~# pveversion -v
proxmox-ve: 4.2-51 (running kernel: 4.4.8-1-pve)
pve-manager: 4.2-5 (running version: 4.2-5/7cf09667)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.4.8-1-pve: 4.4.8-51
pve-kernel-4.2.8-1-pve: 4.2.8-41
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-75
pve-firmware: 1.1-8
libpve-common-perl: 4.0-62
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-17
pve-container: 1.0-64
pve-firewall: 2.0-27
pve-ha-manager: 1.0-31
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
openvswitch-switch: 2.5.0-1
ceph: 0.94.7-1jessie
Any hints?

Udo
 
this is strange, conf.db should be deleted and regenerated by /etc/network/interfaces at each boot.

what is your /etc/network/interfaces content ?
Hi,
to me this doesn't look interface-file related, because I changed some things to make sure that this isn't the problem.

Code:
root@prox-01:~# cat /etc/network/interfaces
# Attention - distributed via Puppet!!
#
#
# network interface settings;
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
  address  172.20.2.61
  netmask  255.255.255.0
  mtu  9000

allow-vmbr0 eth1
iface eth1 inet manual
  ovs_type OVSPort
  ovs_bridge vmbr0

#iface eth2 inet manual
auto eth2
iface eth2 inet static
  address  172.20.3.61
  netmask  255.255.255.0
  gateway  172.20.3.254

iface eth3 inet manual

auto vmbr0
iface vmbr0 inet manual
  ovs_type OVSBridge
  #ovs_ports eth1 vlan23
  ovs_ports eth1

#allow-vmbr0 vlan23
#iface vlan23 inet static
#  address  172.20.3.61
#  netmask  255.255.255.0
#  gateway  172.20.3.254
#  ovs_type OVSIntPort
#  ovs_bridge vmbr0
#  ovs_options tag=23
Udo
 
Hi,
about conf.db: openvswitch says that the lock file already exists (which is correct):
Code:
root@prox-01:/etc/openvswitch# ls -lsa
total 96
 4 drwxr-xr-x  2 root root  4096 Jun  1 19:30 .
 4 drwxr-xr-x 105 root root  4096 Jun  1 19:27 ..
16 -rw-r--r--  1 root root 13917 Jun  1 19:30 conf.db
16 -rw-r--r--  1 root root 12469 Mar  8 17:19 conf.db.backup7.6.0-1731605290
52 -rw-r--r--  1 root root 51320 May 20 05:20 conf.db.backup7.6.2-3478940432
 0 lrwxrwxrwx  1 root root  36 Mar  8 15:19 .conf.db.~lock~ -> /var/lib/openvswitch/.conf.db.~lock~
 0 -rw-------  1 root root  0 Mar  8 17:19 .conf.db.tmp.~lock~
 4 -rw-r--r--  1 root root  37 Mar  8 15:19 system-id.conf

ls -lsa /var/lib/openvswitch/
total 24
 4 drwxr-xr-x  2 root root  4096 Mar  8 15:19 .
 4 drwxr-xr-x 48 root root  4096 Jun  1 19:20 ..
16 -rw-r--r--  1 root root 12469 Mar  8 15:19 conf.db
 0 -rw-------  1 root root  0 Mar  8 15:19 .conf.db.~lock~
This was the reason why I moved both openvswitch dirs and recreated them.
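Roughly, that cleanup amounts to the following - a reconstruction of the steps described above, not copied from the thread; keep the backups in case the old conf.db is still needed:
Code:
systemctl stop openvswitch-switch
mv /etc/openvswitch     /etc/openvswitch.bak
mv /var/lib/openvswitch /var/lib/openvswitch.bak
mkdir /etc/openvswitch /var/lib/openvswitch
systemctl start openvswitch-switch    # recreates an empty conf.db + system-id
systemctl restart networking          # re-adds bridges/ports from /etc/network/interfaces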

Udo
 
Sorry, I couldn't help you further before. Looks like you have found the problem without my help.
When I look in the openvswitch dirs on the affected server I find files similar to those Udo listed above.

But is the issue solved now?

I have a production server which has not been updated for a while. It's running Proxmox VE 4.1,
and it also uses openvswitch.
Will an upgrade affect this server too? (I don't dare to upgrade it while this issue is not solved.)
 
Hi,
no, it's not solved yet... It must be something to do with systemd, but I haven't found the issue yet.

I have 3 very comparable hosts (same software, mostly the same hardware). On two of them the issue occurs, on the third it doesn't.
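When comparing an affected host with an unaffected one, the boot-ordering data is probably the place to look. A sketch of commands to run on both and diff:
Code:
systemd-analyze critical-chain networking.service
systemd-analyze blame | egrep 'openvswitch|networking'
find /etc/rc* \( -name '*openvswitch*' -o -name '*networking*' \) -ls
journalctl -b | grep -i "ordering cycle"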

Udo
 
Hi,
for me this issue occurs on one of two IBM x3650 M4 systems.
The only difference should be the network card, but both have the same chip: Intel 10-Gigabit X540-AT2.
Latest packages from pvetest.
My interfaces:

Code:
auto lo
iface lo inet loopback

allow-vmbr0 eth0
# 1Gbps link to core switch
iface eth0 inet manual
  ovs_bridge vmbr0
  ovs_type OVSPort
  ovs_options other_config:rstp-enable=true other_config:rstp-port-path-cost=100
  mtu 9000

allow-vmbr0 eth1
# 1Gbps link to secondary core switch
iface eth1 inet manual
  ovs_bridge vmbr0
  ovs_type OVSPort
  ovs_options other_config:rstp-enable=true other_config:rstp-port-path-cost=100
  mtu 9000

allow-vmbr0 eth4
# 10Gbps link to another proxmox/ceph node
iface eth4 inet manual
  ovs_bridge vmbr0
  ovs_type OVSPort
  ovs_options other_config:rstp-enable=true other_config:rstp-port-path-cost=10
  mtu 9000

allow-vmbr0 eth5
# 10Gbps link to another proxmox/ceph node
iface eth5 inet manual
  ovs_bridge vmbr0
  ovs_type OVSPort
  ovs_options other_config:rstp-enable=true other_config:rstp-port-path-cost=10
  mtu 9000

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
  ovs_type OVSBridge
  ovs_ports eth0 eth1 eth4 eth5 vlan1 vlan50 vlan55 vlan56 vlan60 vlan66
  up ovs-vsctl set Bridge ${IFACE} rstp_enable=true other_config:rstp-priority=32768 other_config:rstp-forward-delay=4 other_config:rstp-max-age=6
  mtu 9000
  # Wait for spanning-tree convergence
  post-up sleep 10

# Virtual interface to take advantage of originally untagged traffic
allow-vmbr0 vlan1
iface vlan1 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options vlan_mode=access
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 192.168.1.16
  netmask 255.255.255.0
  gateway 192.168.1.253
  mtu 1500

# Proxmox cluster communication vlan
allow-vmbr0 vlan50
iface vlan50 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=50
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 192.168.192.16
  netmask 255.255.255.0
  mtu 1500

# Ceph cluster communication vlan (jumbo frames)
allow-vmbr0 vlan55
iface vlan55 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=55
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 192.168.190.16
  netmask 255.255.255.0
  mtu 9000

# Ceph public communication vlan (jumbo frames)
allow-vmbr0 vlan56
iface vlan56 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0  
  ovs_options tag=56
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 192.168.0.16
  netmask 255.255.255.0
  mtu 9000

# monitoring communication vlan
allow-vmbr0 vlan60
iface vlan60 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=60
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 10.0.0.16
  netmask 255.255.255.0
  mtu 1500

# wlan vlan
allow-vmbr0 vlan66
iface vlan66 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=66
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 10.59.0.16
  netmask 255.255.255.0
  mtu 1500

Which info could be helpful?

Greetings

Markus
 
Hi all,
I fixed that for one node (the second will follow shortly).

openvswitch shows a very short startup time, because it starts too early:
Code:
systemd-analyze blame | grep openvswitch
  5.212s openvswitch-nonetwork.service
  1.523s openvswitch-switch.service
  788us openvswitch.service
Due to my ceph installation (I installed a too-new version and went back to 0.94.7) and the openvswitch trouble, I had to change some rc.d links
("systemctl --reverse | grep ceph" showed multiple mon services):
Code:
root@prox-01:~# rm /etc/rc0.d/K02ceph
root@prox-01:~# rm /etc/rc1.d/K10openvswitch-switch
root@prox-01:~# rm /etc/rc2.d/S03ceph
root@prox-01:~# rm /etc/rc3.d/S01openvswitch-switch
root@prox-01:~# rm /etc/rc3.d/S03ceph
root@prox-01:~# rm /etc/rc4.d/S01openvswitch-switch
root@prox-01:~# rm /etc/rc4.d/S03ceph
root@prox-01:~# rm /etc/rc5.d/S01openvswitch-switch
root@prox-01:~# rm /etc/rc6.d/K02ceph
root@prox-01:~# ln -s ../init.d/openvswitch-switch /etc/rcS.d/S13openvswitch-switch # after mountall-bootclean.sh!! Perhaps S12...
Result:
Code:
root@prox-01:~# find /etc/rc* -ls | egrep openvswitch\|ceph
263216  0 lrwxrwxrwx  1 root  root  28 Jun  1 19:27 /etc/rc0.d/K10openvswitch-switch -> ../init.d/openvswitch-switch
263222  0 lrwxrwxrwx  1 root  root  14 Jun  1 18:55 /etc/rc1.d/K02ceph -> ../init.d/ceph
264065  0 lrwxrwxrwx  1 root  root  28 May 20 05:20 /etc/rc2.d/S01openvswitch-switch -> ../init.d/openvswitch-switch
263439  0 lrwxrwxrwx  1 root  root  14 Jun  1 18:55 /etc/rc5.d/S03ceph -> ../init.d/ceph
263440  0 lrwxrwxrwx  1 root  root  28 Jun  1 19:27 /etc/rc6.d/K10openvswitch-switch -> ../init.d/openvswitch-switch
263218  0 lrwxrwxrwx  1 root  root  28 Jun  6 13:53 /etc/rcS.d/S13openvswitch-switch -> ../init.d/openvswitch-switch
And the boot works as expected!

Before:
Code:
# systemd-analyze critical-chain
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

graphical.target @2min 24.675s
└─multi-user.target @2min 24.675s
  └─pve-manager.service @1min 24.152s +1min 523ms
  └─spiceproxy.service @1min 23.700s +442ms
  └─pveproxy.service @1min 21.968s +1.722s
  └─pvedaemon.service @1min 20.316s +1.640s
  └─basic.target @13.604s
  └─timers.target @13.604s
  └─systemd-tmpfiles-clean.timer @13.604s
  └─sysinit.target @13.604s
  └─apparmor.service @12.424s +1.179s
  └─remote-fs.target @12.414s
  └─remote-fs-pre.target @12.414s
  └─open-iscsi.service @11.409s +1.004s
  └─local-fs.target @11.384s
  └─run-cgmanager-fs.mount @14.978s
  └─local-fs-pre.target @10.920s
  └─systemd-remount-fs.service @10.898s +20ms
  └─zfs-mount.service @10.872s +25ms
  └─zfs-import-scan.service @8.428s +2.428s
  └─cryptsetup.target @8.417s
Now:
Code:
systemd-analyze critical-chain
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

graphical.target @17.061s
└─multi-user.target @17.061s
  └─pve-manager.service @16.566s +495ms
  └─spiceproxy.service @16.045s +493ms
  └─pveproxy.service @14.310s +1.678s
  └─pvedaemon.service @11.865s +2.425s
  └─corosync.service @10.998s +848ms
  └─pve-cluster.service @9.657s +1.317s
  └─rrdcached.service @8.943s +604ms
  └─basic.target @8.928s
  └─sockets.target @8.928s
  └─dbus.socket @8.928s
  └─sysinit.target @8.928s
  └─nfs-common.service @8.397s +530ms
  └─rpcbind.target @8.381s
  └─rpcbind.service @8.138s +243ms
  └─network-online.target @8.111s
  └─network.target @8.111s
  └─networking.service @7.093s +1.017s
  └─openvswitch-switch.service @5.766s +1.311s
  └─local-fs.target @5.722s
  └─etc-pve.mount @9.955s
  └─local-fs-pre.target @5.469s
  └─systemd-remount-fs.service @5.448s +20ms
Udo
 
Doing a system upgrade (and therefore upgrading to openvswitch 2.5) still breaks the network setup - is there an ETA on when this will be addressed?
 
