Proxmox 6 - network won't start

sidereus

Member
Jul 25, 2019
I have a problem with the network during boot.
Code:
pve-asrock-02 log # grep systemd syslog
...
Jul 25 19:38:16 pve-asrock-02 systemd[1]: ifupdown-pre.service: Main process exited, code=exited, status=1/FAILURE
Jul 25 19:38:16 pve-asrock-02 systemd[1]: ifupdown-pre.service: Failed with result 'exit-code'.
Jul 25 19:38:16 pve-asrock-02 systemd[1]: Failed to start Helper to synchronize boot up for ifupdown.
Jul 25 19:38:16 pve-asrock-02 systemd[1]: Dependency failed for Raise network interfaces.
Jul 25 19:38:16 pve-asrock-02 systemd[1]: networking.service: Job networking.service/start failed with result 'dependency'.
...
As a workaround, I created /etc/rc.local and start the network from there:
Code:
#!/bin/sh -e
#
# rc.local
#
/usr/sbin/ifup vmbr0
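For /etc/rc.local to actually run at boot on a systemd-based install, the file has to be executable so the rc-local compatibility unit picks it up; a quick sanity check (sketch, assuming the stock Debian rc-local.service):
Code:
# rc.local is only executed if it is marked executable
chmod +x /etc/rc.local
# verify the compatibility unit ran after a reboot
systemctl status rc-local.service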
My network setup is:
Code:
allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bonds eth0 eth2
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options lacp=active bond_mode=balance-slb

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

auto vmbr0
iface vmbr0 inet static
    address  172.16.104.161
    netmask  24
    gateway  172.16.104.1
    ovs_type OVSBridge
    ovs_ports bond0
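For reference, once the interfaces are up, the bond and bridge state can be checked with the standard Open vSwitch tools (just the commands, as a sketch):
Code:
# bridge/port layout as seen by OVS
ovs-vsctl show
# LACP / bond member status for the bond defined above
ovs-appctl bond/show bond0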
 
Here is what ip addr reports:
Code:
pve-asrock-02 ~ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 0c:c4:7a:80:01:14 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ec4:7aff:fe80:114/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c4:7a:80:01:15 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 0c:c4:7a:80:01:16 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ec4:7aff:fe80:116/64 scope link
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c4:7a:80:01:17 brd ff:ff:ff:ff:ff:ff
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 8a:99:3c:17:c7:a8 brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 0c:c4:7a:80:01:14 brd ff:ff:ff:ff:ff:ff
    inet 172.16.104.161/24 brd 172.16.104.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:fe80:114/64 scope link
       valid_lft forever preferred_lft forever
8: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 2a:40:ab:2f:4f:8b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::2840:abff:fe2f:4f8b/64 scope link
       valid_lft forever preferred_lft forever
9: tap1002i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
    link/ether 86:0e:c4:d0:e7:07 brd ff:ff:ff:ff:ff:ff
11: tap1001i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
    link/ether c2:01:fe:e9:ad:e2 brd ff:ff:ff:ff:ff:ff

It's a fresh installation from the Proxmox 6 installation ISO image, not an upgrade from 5.
 
I did an upgrade from 5.4 to 6 on a testing cluster and got the same problem.

Is it possible that there is an openvswitch package problem? I can't find it in pve-no-subscription for buster.
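A quick way to see which repository actually provides the package (plain apt tooling, as a sketch):
Code:
apt update
apt policy openvswitch-switch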
 
I believe that I have a similar problem to yours.
Does your network "start" when you remove the bridge and reboot your system?
 
Does your network "start" when you remove the bridge and reboot your system?
No, it doesn't.
I changed the network configuration to the following and rebooted Proxmox:
Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address  172.16.104.161
    netmask  24
    gateway  172.16.104.1

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual
I got the same errors:
Code:
pve-asrock-02 log # grep systemd syslog
...
Jul 31 12:58:05 pve-asrock-02 systemd[1]: ifupdown-pre.service: Main process exited, code=exited, status=1/FAILURE
Jul 31 12:58:05 pve-asrock-02 systemd[1]: ifupdown-pre.service: Failed with result 'exit-code'.
Jul 31 12:58:05 pve-asrock-02 systemd[1]: Failed to start Helper to synchronize boot up for ifupdown.
Jul 31 12:58:05 pve-asrock-02 systemd[1]: Dependency failed for Raise network interfaces.
Jul 31 12:58:05 pve-asrock-02 systemd[1]: networking.service: Job networking.service/start failed with result 'dependency'.
...
Then I started the network manually:
Code:
pve-asrock-02 log # ifup eth0
After that the network went up:
Code:
pve-asrock-02 log # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:80:01:14 brd ff:ff:ff:ff:ff:ff
    inet 172.16.104.161/24 brd 172.16.104.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:fe80:114/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c4:7a:80:01:15 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c4:7a:80:01:16 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c4:7a:80:01:17 brd ff:ff:ff:ff:ff:ff
So the problem isn't related to openvswitch.
 
PVE 5.4:
debian openvswitch-switch 2.6.2~pre+git...
pve openvswitch-switch 2.7.0-3

PVE 6:
debian openvswitch-switch 2.10.0+2018.08.28+git...
pve openvswitch-switch missing

So the newer Debian openvswitch is there, but it doesn't work? That's crazy; I'm going to test too.
 
Hello everyone,

I can confirm the issue. I just did a fresh install of PVE6.0-1 on a host which previously ran PVE5.4 just fine.
Now there's no network.

If I run journalctl -xe, there is this error:
The unit ifupdown-pre.service has entered the 'failed' state with result 'exit-code'.
[Timestamp] [hostname] systemd[1]: Failed to start Helper to synchronize boot up for ifupdown.

Maybe that's a hint.

kind regards
Mary-Jane
 
Regarding this error:
Code:
Jul 31 12:58:05 pve-asrock-02 systemd[1]: ifupdown-pre.service: Main process exited, code=exited, status=1/FAILURE
Jul 31 12:58:05 pve-asrock-02 systemd[1]: ifupdown-pre.service: Failed with result 'exit-code'.

it seems that the buster ifupdown package has new systemd services (which did not exist in stretch).

The content of /lib/systemd/system/ifupdown-pre.service is:

Code:
[Unit]
Description=Helper to synchronize boot up for ifupdown
DefaultDependencies=no
Wants=systemd-udevd.service
After=systemd-udev-trigger.service
Before=network.target
[Service]
Type=oneshot
TimeoutSec=180
RemainAfterExit=yes
EnvironmentFile=-/etc/default/networking
ExecStart=/bin/sh -c 'if [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && [ -x /bin/udevadm ]; then udevadm settle; fi'

It could be interesting to see the result of that command when run manually.
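Roughly the same check as the unit's ExecStart, split into its two parts so you can see which one fails (sketch):
Code:
# interfaces that ifupdown wants to configure at boot
ifquery --read-environment --list --exclude=lo
# the step that waits for udev and can time out
udevadm settle; echo "exit code: $?"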

As a workaround, you could also try removing /lib/systemd/system/ifupdown-pre.service and see if the host boots fine.
 
Hi All,

I've done a fresh install of PVE6.0-1 and got the following:


root@pve1:~# systemctl status ifupdown-pre.service
● ifupdown-pre.service - Helper to synchronize boot up for ifupdown
   Loaded: loaded (/lib/systemd/system/ifupdown-pre.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2019-11-14 09:03:05 GMT; 1h 13min ago
  Process: 1419 ExecStart=/bin/sh -c if [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && [ -x /bin/udevadm ]; then udevadm sett
 Main PID: 1419 (code=exited, status=1/FAILURE)

Nov 14 09:01:05 pve1 systemd[1]: Starting Helper to synchronize boot up for ifupdown...
Nov 14 09:03:05 pve1 systemd[1]: ifupdown-pre.service: Main process exited, code=exited, status=1/FAILURE
Nov 14 09:03:05 pve1 systemd[1]: ifupdown-pre.service: Failed with result 'exit-code'.
Nov 14 09:03:05 pve1 systemd[1]: Failed to start Helper to synchronize boot up for ifupdown.
root@pve1:~#


Did anyone get this working? At the moment I'm having to run "ifup -a" at every reboot.

Any help appreciated, thanks.
 
Did anyone get this working? At the moment I'm having to run "ifup -a" at every reboot.
Can you post your network interfaces config file?
 
Hi spirit,
Here is my config file:

root@pve1:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
auto lo
iface lo inet loopback

iface enp66s0f1 inet manual

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

iface enp66s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.11
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge-ports enp66s0f1
    bridge-stp off
    bridge-fd 0

root@pve1:~#
 
Maybe you can try to add "auto enp66s0f1" before "iface enp66s0f1 inet manual"?
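Something like this, as a sketch based on the config you posted:
Code:
auto enp66s0f1
iface enp66s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.11
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge-ports enp66s0f1
    bridge-stp off
    bridge-fd 0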

You can also still try removing "/lib/systemd/system/ifupdown-pre.service". I don't have it on a Proxmox 5->6 upgrade, and it's still working fine.

Alternatively, you could try the 'ifupdown2' package (apt install ifupdown2).
 
Thanks spirit, masking the service worked a treat. I used:

Code:
systemctl mask ifupdown-pre.service

then rebooted, and all interfaces came up.
thanks again
 
Thanks for the help. With the command:

systemctl mask ifupdown-pre.service

it works well.

But I don't understand how, several months later and on the latest version of Proxmox, the same thing still happens ... I imagine they have fixed it in the subscription repositories.
 
But I don't understand how, several months later and on the latest version of Proxmox, the same thing still happens ... I imagine they have fixed it in the subscription repositories.
Generally, this is because of a hardware bug. ifupdown-pre.service launches "udevadm settle" to make sure that all devices are present before starting the network. If a bad device (or bad driver) hangs, the service times out.
 
With PVE 5.x it works well... My hardware is OK; the problem appears on all my machines.

The problem is that the systemd-udev-settle service times out:


2min 245ms systemd-udev-settle.service
1.964s pveproxy.service
1.798s systemd-modules-load.service
1.626s pve-guests.service
1.537s pvedaemon.service
1.059s pve-cluster.service
1.029s pve-ha-crm.service
1.025s pve-ha-lrm.service
966ms pvestatd.service
948ms pve-firewall.service
855ms pvesr.service
680ms lvm2-monitor.service
596ms dev-mapper-pve\x2droot.device
572ms spiceproxy.service
516ms postfix@-.service
448ms pvebanner.service
272ms lvm2-pvscan@8:3.service
243ms networking.service
195ms systemd-udev-trigger.service
194ms dev-pve-swap.swap
128ms systemd-journald.service
85ms rrdcached.service
78ms user@0.service
65ms zfs-share.service
62ms systemd-timesyncd.service
60ms systemd-logind.service
56ms ssh.service
44ms apparmor.service
42ms lxc.service
36ms rsyslog.service
35ms qmeventd.service
31ms smartmontools.service
30ms pve-lxc-syscalld.service
30ms systemd-udevd.service
30ms ksmtuned.service
29ms dev-mqueue.mount
28ms run-rpc_pipefs.mount
27ms systemd-tmpfiles-setup.service
27ms kmod-static-nodes.service
26ms keyboard-setup.service
24ms sys-kernel-debug.mount
24ms pvefw-logger.service
22ms systemd-remount-fs.service
20ms pvenetcommit.service
20ms systemd-sysusers.service
20ms rpcbind.service
18ms open-iscsi.service
18ms zfs-volume-wait.service
18ms systemd-journal-flush.service
17ms systemd-tmpfiles-setup-dev.service
16ms lxc-net.service
16ms zfs-mount.service
16ms iscsid.service
14ms systemd-random-seed.service
14ms dev-hugepages.mount
13ms systemd-update-utmp.service
12ms user-runtime-dir@0.service
12ms systemd-update-utmp-runlevel.service
11ms blk-availability.service
10ms systemd-user-sessions.service
9ms systemd-sysctl.service
7ms console-setup.service
6ms rbdmap.service
5ms nfs-config.service
4ms sys-kernel-config.mount
3ms sys-fs-fuse-connections.mount
2ms postfix.service



● systemd-udev-settle.service - udev Wait for Complete Device Initialization
Loaded: loaded (/lib/systemd/system/systemd-udev-settle.service; static; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-02-14 11:13:58 CET; 7min ago
Docs: man:udev(7)
man:systemd-udevd.service(8)
Process: 568 ExecStart=/bin/udevadm settle (code=exited, status=1/FAILURE)
Main PID: 568 (code=exited, status=1/FAILURE)

Feb 14 11:11:58 hp2-blade6-pve systemd[1]: Starting udev Wait for Complete Device Initialization...
Feb 14 11:13:58 hp2-blade6-pve systemd[1]: systemd-udev-settle.service: Main process exited, code=exited, status=1/FAILURE
Feb 14 11:13:58 hp2-blade6-pve systemd[1]: systemd-udev-settle.service: Failed with result 'exit-code'.
Feb 14 11:13:58 hp2-blade6-pve systemd[1]: Failed to start udev Wait for Complete Device Initialization.
 
The udev-settle service is used to wait for all devices to be initialized before the network starts.
If it hangs, it's because you have a buggy device initialization (could be hardware/drivers/...), including devices not related to the network.

You can simply do a "systemctl mask ifupdown-pre.service" to avoid launching udev-settle.
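If you want to narrow down which device is hanging before masking anything, a generic approach with standard udev/systemd tooling (sketch):
Code:
# run the settle step by hand and check how long it takes and its exit code
time udevadm settle --timeout=30; echo "exit: $?"
# look for drivers or firmware that error out or keep probing
dmesg | grep -iE 'fail|error|timeout' | tail -n 30
# units that failed during boot
systemctl --failed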
 
