Proxmox VE 8.0 (beta) released!

I am also in the same boat - I have the same problem.

PVE hangs during boot while waiting for networking.service.

A workaround for this is:
  1. boot via Grub into Recovery Mode

  2. Code:
    systemctl disable networking

  3. reboot into the normal kernel

  4. Code:
    systemctl start networking

  5. start all virtual machines and containers manually (they are all shut down, because the interface was not present during/after boot).
This way, "networking" stays permanently disabled across reboots and only needs to be started manually after each reboot.
As soon as the issue is resolved, you can set the service back to "enabled".
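For reference, here is the whole workaround as shell commands - a minimal sketch; the VM/CT IDs 100 and 101 are placeholders for your own guests:
Bash:
# run once from Recovery Mode, then reboot into the normal kernel
systemctl disable networking
# after each reboot, bring the network up manually
systemctl start networking
# start your guests by their actual IDs
qm start 100
pct start 101
# once the underlying bug is fixed, re-enable the service
systemctl enable networking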

I also got the following from the log:
Code:
ifupdown2: main.py:85:main(): error: main exception: name 'traceback' is not defined

However, this may just be a coincidence and have nothing to do with the issue. I hope this will be fixed soon, as I have a headless unit running.
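In case it helps with debugging, the reason networking.service failed on a given boot can usually be pulled from the journal (plain systemd tooling, nothing Proxmox-specific):
Bash:
journalctl -b -u networking.service     # current boot
journalctl -b -1 -u networking.service  # previous boot, if the current one came up clean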
Hi, I have found a bug in ifupdown2 related to traceback. I'm not sure it's 100% the same error as yours, but can you try to edit:

/usr/share/ifupdown2/ifupdown/scheduler.py

and add "import traceback" just after "import sys" like this:

Code:
import os
import sys
import traceback

Also, do you have any error/debug logs in /var/log/ifupdown2/?
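If anyone prefers applying that one-line edit from the shell instead of an editor, something like the following should do it - a sketch assuming GNU sed as shipped with Debian; keep a backup of the file:
Bash:
cp /usr/share/ifupdown2/ifupdown/scheduler.py /usr/share/ifupdown2/ifupdown/scheduler.py.bak
sed -i '/^import sys$/a import traceback' /usr/share/ifupdown2/ifupdown/scheduler.py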
 
@spirit
Thanks! I already did that.

Here are my debug-logs
 

Attachments

  • log1.txt (8.4 KB)
  • log2.txt (22.6 KB)
  • log3.txt (8.3 KB)
After my upgrade to 8, all but one container work fine. The failing CT is my big Docker CT, which I use a lot, and now it won't start.
The error I get is:
run_buffer: 322 Script exited with status 255
lxc_init: 844 Failed to run lxc.hook.pre-start for container "102"
__lxc_start: 2027 Failed to initialize container "102"
TASK ERROR: startup for container '102' failed
-------------------
How can I fix this? Thanks for an excellent hypervisor, btw.
 
Hi,
Please post the output of pct start 102 --debug and the contents of /run/pve/ct-102.stderr. Note that it's highly recommended to run Docker within a VM instead of within a container.
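For convenience, those two requests as commands (exactly as mentioned above, using container ID 102 from your post):
Bash:
pct start 102 --debug
cat /run/pve/ct-102.stderr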
 
Here are my debug-logs
Hmm, I can't find anything useful there. I think it's crashing without logging here.
(Can you try adding the "import traceback" in /usr/share/ifupdown2/ifupdown/scheduler.py?)
FYI, ifupdown2 version 3.2.0-1+pmx3, which contains the fixes you sent for that, is already available on no-subscription, so upgrading should address this too - at least if it isn't another issue.
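A minimal sketch of pulling that package in, assuming the no-subscription repository is already configured on the node:
Bash:
apt update
apt install ifupdown2      # should bring in 3.2.0-1+pmx3 or newer
apt policy ifupdown2       # verify the installed version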
 
Any idea why my network won't start after updating from 7.4?
I can disable networking and bring it up manually just fine:
Code:
root@epyc:~# ifconfig
enp6s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 40:a6:b7:55:f9:54  txqueuelen 1000  (Ethernet)
        RX packets 14607  bytes 1940108 (1.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7171  bytes 1934710 (1.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 160  bytes 51781 (50.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 160  bytes 51781 (50.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.111.111  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 40:a6:b7:55:f9:54  txqueuelen 1000  (Ethernet)
        RX packets 14607  bytes 1735610 (1.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7006  bytes 1923856 (1.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
Logs from when it hangs at boot:
Code:
root@epyc:~# cat /var/log/ifupdown2/network_config_ifupdown2_16_Jul-01-2023_12\:15\:58.809373/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto enp6s0f1
iface enp6s0f1 inet manual

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto enp6s0f0
iface enp6s0f0 inet manual

auto vlan1
iface vlan1 inet static
    address 192.168.111.111/24
    gateway 192.168.111.1
    vlan-raw-device enp6s0f0
Code:
root@epyc:~# cat /var/log/ifupdown2/network_config_ifupdown2_16_Jul-01-2023_12\:15\:58.809373/ifupdown2.debug.log
2023-07-01 12:15:58,809: MainThread: ifupdown2: log.py:196:__init_debug_logging(): debug: persistent debugging is initialized
2023-07-01 12:15:58,906: MainThread: ifupdown2: main.py:229:run_query(): debug: args = Namespace(all=False, iflist=[], verbose=False, debug=False, quiet=False, CLASS=['mgmt'], withdepends=False, perfmode=False, nocache=False, excludepats=None, interfacesfile=None, interfacesfileformat='native', type=None, list=True, running=False, checkcurr=False, raw=False, printsavedstate=False, format='native', printdependency=None, syntaxhelp=False, withdefaults=False, version=None, nldebug=False)
2023-07-01 12:15:58,907: MainThread: ifupdown2: main.py:253:run_query(): debug: creating ifupdown object ..
2023-07-01 12:15:58,907: MainThread: ifupdown2.NetlinkListenerWithCache: nlcache.py:2423:get_all_links_wait_netlinkq(): info: requesting link dump
2023-07-01 12:15:58,910: MainThread: ifupdown2.NetlinkListenerWithCache: nlcache.py:2435:get_all_addresses_wait_netlinkq(): info: requesting address dump
2023-07-01 12:15:58,910: MainThread: ifupdown2.NetlinkListenerWithCache: nlcache.py:2445:get_all_netconf_wait_netlinkq(): info: requesting netconf dump
2023-07-01 12:15:58,912: MainThread: ifupdown2.NetlinkListenerWithCache: nlcache.py:2246:reset_errorq(): debug: nlcache: reset errorq
2023-07-01 12:15:58,912: MainThread: ifupdown: ifupdownmain.py:329:__init__(): debug: {'enable_persistent_debug_logging': 'yes', 'use_daemon': 'no', 'template_enable': '1', 'template_engine': 'mako', 'template_lookuppath': '/etc/network/ifupdown2/templates', 'default_interfaces_configfile': '/etc/network/interfaces', 'disable_cli_interfacesfile': '0', 'addon_syntax_check': '0', 'addon_scripts_support': '1', 'addon_python_modules_support': '1', 'multiple_vlan_aware_bridge_support': '1', 'ifquery_check_success_str': 'pass', 'ifquery_check_error_str': 'fail', 'ifquery_check_unknown_str': '', 'ifquery_ifacename_expand_range': '0', 'link_master_slave': '0', 'delay_admin_state_change': '0', 'ifreload_down_changed': '0', 'addr_config_squash': '0', 'ifaceobj_squash': '0', 'adjust_logical_dev_mtu': '1', 'state_dir': '/run/network/'}
2023-07-01 12:15:58,912: MainThread: ifupdown: ifupdownmain.py:1424:load_addon_modules(): info: loading builtin modules from ['/usr/share/ifupdown2/addons']
2023-07-01 12:15:58,917: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module openvswitch not loaded (module init failed: no /usr/bin/ovs-vsctl found)
2023-07-01 12:15:58,918: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module openvswitch_port not loaded (module init failed: no /usr/bin/ovs-vsctl found)
2023-07-01 12:15:58,929: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module ppp not loaded (module init failed: no /usr/bin/pon found)
2023-07-01 12:15:58,930: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing modprobe -q bonding
2023-07-01 12:15:58,940: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module batman_adv not loaded (module init failed: no /usr/sbin/batctl found)
2023-07-01 12:15:58,956: MainThread: ifupdown.bridge: bridge.py:644:__init__(): debug: bridge: using reserved vlan range (0, 0)
2023-07-01 12:15:58,956: MainThread: ifupdown.bridge: bridge.py:664:__init__(): debug: bridge: init: warn_on_untagged_bridge_absence=False
2023-07-01 12:15:58,956: MainThread: ifupdown.bridge: bridge.py:671:__init__(): debug: bridge: init: vxlan_bridge_default_igmp_snooping=None
2023-07-01 12:15:58,956: MainThread: ifupdown.bridge: bridge.py:680:__init__(): debug: bridge: init: arp_nd_suppress_only_on_vxlan=False
2023-07-01 12:15:58,956: MainThread: ifupdown.bridge: bridge.py:686:__init__(): debug: bridge: init: bridge_always_up_dummy_brport=None
2023-07-01 12:15:58,956: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /sbin/sysctl net.bridge.bridge-allow-multiple-vlans
2023-07-01 12:15:58,957: MainThread: ifupdown.bridge: bridge.py:696:__init__(): debug: bridge: init: multiple vlans allowed True
2023-07-01 12:15:58,960: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module mstpctl not loaded (module init failed: no /sbin/mstpctl found)
2023-07-01 12:15:58,964: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /bin/ip rule show
2023-07-01 12:15:58,968: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /bin/ip -6 rule show
2023-07-01 12:15:59,119: MainThread: ifupdown.address: address.py:298:__policy_get_default_mtu(): info: address: using default mtu 1500
2023-07-01 12:15:59,119: MainThread: ifupdown.address: address.py:312:__policy_get_max_mtu(): info: address: max_mtu undefined
2023-07-01 12:15:59,119: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /sbin/sysctl net.ipv6.conf.all.accept_ra
2023-07-01 12:15:59,120: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /sbin/sysctl net.ipv6.conf.all.autoconf
2023-07-01 12:15:59,122: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /usr/sbin/ip vrf id
2023-07-01 12:15:59,124: MainThread: ifupdown.dhcp: dhcp.py:55:__init__(): info: mgmt vrf_context = False
2023-07-01 12:15:59,124: MainThread: ifupdown.dhcp: dhcp.py:70:__init__(): debug: dhclient: dhclient_retry_on_failure set to 0
2023-07-01 12:15:59,125: MainThread: ifupdown.addressvirtual: addressvirtual.py:99:__init__(): info: executing /bin/ip addr help
2023-07-01 12:15:59,126: MainThread: ifupdown.addressvirtual: addressvirtual.py:104:__init__(): info: address metric support: OK
2023-07-01 12:15:59,128: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module ppp not loaded (module init failed: no /usr/bin/pon found)
2023-07-01 12:15:59,129: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module mstpctl not loaded (module init failed: no /sbin/mstpctl found)
2023-07-01 12:15:59,130: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module batman_adv not loaded (module init failed: no /usr/sbin/batctl found)
2023-07-01 12:15:59,130: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module openvswitch_port not loaded (module init failed: no /usr/bin/ovs-vsctl found)
2023-07-01 12:15:59,131: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module openvswitch not loaded (module init failed: no /usr/bin/ovs-vsctl found)
2023-07-01 12:15:59,131: MainThread: ifupdown: ifupdownmain.py:1536:load_scripts(): info: looking for user scripts under /etc/network
2023-07-01 12:15:59,131: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-pre-up.d ...
2023-07-01 12:15:59,131: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-up.d ...
2023-07-01 12:15:59,131: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-post-up.d ...
2023-07-01 12:15:59,131: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-pre-down.d ...
2023-07-01 12:15:59,131: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-down.d ...
2023-07-01 12:15:59,132: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-post-down.d ...
2023-07-01 12:15:59,132: MainThread: ifupdown: ifupdownmain.py:396:__init__(): info: using mgmt iface default prefix eth
2023-07-01 12:15:59,132: MainThread: ifupdown: ifupdownmain.py:2004:query(): debug: setting flag ALL
2023-07-01 12:15:59,132: MainThread: ifupdown.networkInterfaces: networkinterfaces.py:506:read_file(): info: processing interfaces file /etc/network/interfaces
2023-07-01 12:15:59,132: MainThread: ifupdown.networkInterfaces: networkinterfaces.py:164:process_source(): debug: processing sourced line ..'source /etc/network/interfaces.d/*'
2023-07-01 12:15:59,133: MainThread: ifupdown2: main.py:85:main(): error: main exception: no ifaces found matching given allow lists
2023-07-01 12:15:59,213: MainThread: ifupdown2: log.py:373:write(): info: exit status 1
 
FYI, ifupdown2 version 3.2.0-1+pmx3, which contains the fixes you sent for that, is already available on no-subscription, so upgrading should address this too - at least if it isn't another issue.
Unfortunately, this did not solve my problem.

I still have to boot with networking.service disabled and then start the service via a cron @reboot job to keep my system running headless.

As you can see from the last posts in this thread, I am not alone with this problem.
@spirit @davemcl @athurdent
++
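For anyone copying this stop-gap, the corresponding root crontab entry would look roughly like this (a sketch using cron's @reboot special string, added via crontab -e as root):
Code:
@reboot systemctl start networking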
 

My issue actually got resolved with PVE 8.0.3 - the beta didn't work for me.
I now have 8 running on the entire cluster.
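If it helps anyone comparing, the exact version a node is running can be checked with the standard PVE tooling:
Bash:
pveversion        # short summary, e.g. pve-manager/8.0.3/... plus the running kernel
pveversion -v     # full list of relevant package versions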
 
Try this: https://forum.proxmox.com/threads/proxmox-ve-8-0-released.129320/page-6#post-567264
It solved it for me; I obviously also did not read the release notes thoroughly. Sorry for the noise.
 
@t.lamprecht
Shouldn't the installation image also be patched in some way so that we can use this without a workaround?
As mentioned in the reply to the linked post (and some follow-up posts), we have installed chrony as the default NTP daemon for years, so installations from the ISO are not affected - or what do you mean?

Also, the upgrade guide explicitly mentions this as a known issue w.r.t. ntpsec and some installations, so anybody following it now is notified in advance.
 
The problem is that if you use vmbr0 + tag 202, Proxmox will create new "vmbr0v202" and "ensX.202" interfaces in the background, so they will conflict with your vlan202 device (this is done at the bridge level).

The best/easiest way is to enable the vlan-aware option on the bridge; this way the VM tag is set on the bridge port instead of on the physical interface.
OK, it was actually the 'bridge-vlan-aware yes' option. However, as I'm running Mellanox ConnectX-3 NICs (which only support 128 VLANs), the default 'bridge-vids 2-4094' caused the vmbrs using the Mellanox NIC to completely ignore 802.1q tags and broke networking. I had to explicitly specify the 802.1q tags I want to bridge through (see the sketch below).

It is working, but it is not very flexible when it comes to adding new VLANs to the cluster.
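For reference, a rough /etc/network/interfaces sketch of that setup - the interface name, bridge name and VLAN IDs are examples taken from this thread, not a drop-in config:
Code:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp6s0f0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    # list only the tags you actually need instead of the 2-4094 default,
    # e.g. because the NIC only handles a limited number of VLAN filters
    bridge-vids 1 202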
 

Try:
Bash:
systemctl disable ntpsec-systemd-netif.path
systemctl disable ntpsec
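
To confirm the units are really out of the boot path afterwards (plain systemctl, nothing Proxmox-specific):
Bash:
systemctl is-enabled ntpsec ntpsec-systemd-netif.path   # both should report "disabled"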

Best regards
 
It's a pity that ntpsec-ntpdate stops boot from working. I can't test this now because I spent the day migrating all of my infrastructure to chrony (and losing the functionality I was relying on the ntp reference implementation for), but this should have a more prominent warning somewhere. I finally found the hint in https://pve.proxmox.com/wiki/Upgrade_from_7_to_8#Network_Fails_on_Boot_Due_to_NTPsec_Hook but I was following https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm which does not contain the warning. In the meantime until this issue is fixed, can proxmox-ve come with a conflicts: ntpsec-ntpdate?
 
and losing the functionality I was relying on the ntp reference implementation for
What specific features did you use that are missing from chrony?
In the meantime until this issue is fixed, can proxmox-ve come with a conflicts: ntpsec-ntpdate?
As not all users are affected by this issue, I'd like to avoid a blanket ban for the use of ntpsec-ntpdate for all.

But I updated the article for installing Proxmox VE on top of Debian 12 to suggest installing chrony when installing proxmox-ve, which alone should cover most people, and I also added the ntpsec hint to the troubleshooting section there too – thanks for your pointer.
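For anyone doing the install on top of Debian 12, explicitly pulling in chrony alongside proxmox-ve is roughly what the updated article suggests - a minimal sketch, leaving out whatever other packages you may also want:
Bash:
apt update
apt install proxmox-ve chrony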
 
