Proxmox VE 8.0 released!

Ran 'pve7to8 --full' on a 3-node Ceph Quincy cluster; no issues were found.

Both PVE and Ceph were upgraded, and 'pve7to8 --full' noted that a reboot was required.

After the reboot I got a "Ceph got timeout (500)" error.

"ceph -s" shows nothing.

No monitors, no managers, no mds.

Any suggestions on resolving this issue?

Corosync and Ceph are using a full-mesh broadcast network.
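A first thing to check for symptoms like this would be whether the monitor daemons are running at all; a generic sketch (not specific to this setup):

Code:
# is the mon daemon even running on this node?
# (the unit id is usually the node's short hostname)
systemctl status ceph-mon@$(hostname).service
journalctl -b -u ceph-mon@$(hostname).service
# are the mon ports listening? (3300 = msgr2, 6789 = msgr1)
ss -tlnp | grep ceph-mon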
 
I have a very weird networking issue that I cannot solve at the moment. Everything was working fine with this network config in PVE 6 and 7, but it fails in 8.

The weird thing is that the network works fine after booting up and running 'ifreload -a' manually (with some messages).
There are some warning messages, but they should not cause this kind of problem.

This machine is a Hetzner AX41-NVMe.
My current workaround is a low timeout for networking.service and a cron job that runs 'ifreload -a' after boot.
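For reference, that workaround would look roughly like this (a sketch; the exact timeout and delay values here are arbitrary choices of mine):

Code:
# /etc/systemd/system/networking.service.d/override.conf
# lower the start timeout so a hanging ifupdown2 does not block the boot
[Service]
TimeoutStartSec=30

# root crontab (crontab -e): bring the interfaces up once more after boot
@reboot sleep 15 && /usr/sbin/ifreload -a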

Below is the interfaces config that already triggers the issue. So what is wrong here?

Code:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface lo inet6 loopback

auto enp7s0
iface enp7s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 1.2.3.4/26
        gateway 1.2.3.193
        bridge-ports enp7s0
        bridge-stp off
        bridge-fd 0
        hwaddress rr:ii:uu:bb:aa:xx
        up ip route add 1.2.3.1/26 via 1.2.3.193 dev vmbr0
        up ip route add 1.2.2.1/32 dev vmbr0
#VMs

iface vmbr0 inet6 static
        address 2a01:yyy:xxx:fa00::2/64
        gateway fe80::1
        up ip -6 route add 2a01:yyy:xxx:fa00::/56 via 2a01:yyy:xxx:fa00::3

Error messages appear during boot and again while running 'ifreload -a' (see the attached screenshots).

Thanks!
Is your /etc/network/interfaces config complete? I don't see vmbr4000, vmbr4001, vmbr4002, ...

It could be interesting to see the output of "ifreload -a -d" (debug mode).
 
Hello,

I am trying to install Proxmox on a Topton N305 mini-PC / virtualization / router type appliance. - https://www.aliexpress.us/item/3256805194779007.html?gatewayAdapt=glo2usa4itemAdapt

Before the release of 8.0, I could not get 7.4 to boot after installation in order to update the kernel to 6.2. Once I managed to get past some VGA errors, Proxmox installed but would boot-loop AFTER install.

Now I've flashed 8.0-2 to USB; since it ships with kernel 6.2, I thought this would resolve the situation. However, when installing Proxmox I get to about 50 percent and it just reboots. I have tried several times now - ZFS RAID 0 with 1 SSD, ZFS RAID 1 with 2 SSDs, GUI or console installer - all with the same result. Since ZFS seemed to be a common thread here, I also tried ext4; for some reason it gets further, around 70 percent or so, and then reboots as well.
A time or two I have gotten to the end of the install (or so it appears - the last step being making it bootable) without automatic reboot checked, but it reboots anyway and is not bootable from NVMe afterwards.

Not sure where to go from here.

Thanks in advance for any help anyone can offer!
 
My monitor keeps flickering and does not proceed after selecting Graphical install for proxmox-ve_8.0-2. I tried console mode; it's stuck on a blank screen. The Proxmox 7 ISO works, so it seems something broke in 8.0? My system is an HP Z840.

Update: Selecting nomodeset in the advanced installation screen, and also adding it to GRUB after installation, fixed this. (The login screen after install was not showing either; luckily SSH was working, so I used that to update /etc/default/grub.)
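For reference, the permanent GRUB change boils down to something like this (a sketch; keep whatever other options your GRUB_CMDLINE_LINUX_DEFAULT already has):

Code:
# /etc/default/grub - add nomodeset to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"

# then regenerate the grub config:
update-grub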
 
Is your /etc/network/interfaces config complete? I don't see vmbr4000, vmbr4001, vmbr4002, ...

It could be interesting to see the output of "ifreload -a -d" (debug mode).
I removed everything that isn't relevant for the moment, since the network service seems to have issues with the host config itself.
I did some more tests: when I remove the bridges, the system boots fine (IP assigned to the NIC directly - see the sketch below).

Here is the debug log, though ifupdown2 itself seems to work fine. It's just that the system doesn't bring up the interfaces at boot time.
I am wondering if the ntpsec script has anything to do with it.
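For clarity, the bridge-less test config is simply the address moved onto the NIC (a sketch using the anonymized addresses from above):

Code:
auto enp7s0
iface enp7s0 inet static
        address 1.2.3.4/26
        gateway 1.2.3.193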

Code:
# Interfaces BEFORE ifreload -a -d


1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp7s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether rr:ii:uu:bb:aa:xx brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether rr:ii:uu:bb:aa:xx brd ff:ff:ff:ff:ff:ff
 
 
# Output of ifreload -a -d

debug: args = Namespace(all=True, currentlyup=False, CLASS=None, iflist=[], noact=False, verbose=False, debug=True, withdepends=False, perfmode=False, nocache=False, excludepats=None, usecurrentconfig=False, syslog=False, systemd=False, force=False, syntaxcheck=False, version=None, nldebug=False)
debug: creating ifupdown object ..
info: requesting link dump
info: requesting address dump
info: requesting netconf dump
debug: nlcache: reset errorq
debug: {'enable_persistent_debug_logging': 'yes', 'use_daemon': 'no', 'template_enable': '1', 'template_engine': 'mako', 'template_lookuppath': '/etc/network/ifupdown2/templates', 'default_interfaces_configfile': '/etc/network/interfaces', 'disable_cli_interfacesfile': '0', 'addon_syntax_check': '0', 'addon_scripts_support': '1', 'addon_python_modules_support': '1', 'multiple_vlan_aware_bridge_support': '1', 'ifquery_check_success_str': 'pass', 'ifquery_check_error_str': 'fail', 'ifquery_check_unknown_str': '', 'ifquery_ifacename_expand_range': '0', 'link_master_slave': '1', 'delay_admin_state_change': '0', 'ifreload_down_changed': '0', 'addr_config_squash': '0', 'ifaceobj_squash': '0', 'adjust_logical_dev_mtu': '1', 'state_dir': '/run/network/'}
info: loading builtin modules from ['/usr/share/ifupdown2/addons']
info: module openvswitch not loaded (module init failed: no /usr/bin/ovs-vsctl found)
info: module openvswitch_port not loaded (module init failed: no /usr/bin/ovs-vsctl found)
info: module ppp not loaded (module init failed: no /usr/bin/pon found)
info: module batman_adv not loaded (module init failed: no /usr/sbin/batctl found)
debug: bridge: using reserved vlan range (0, 0)
debug: bridge: init: warn_on_untagged_bridge_absence=False
debug: bridge: init: vxlan_bridge_default_igmp_snooping=None
debug: bridge: init: arp_nd_suppress_only_on_vxlan=False
debug: bridge: init: bridge_always_up_dummy_brport=None
info: executing /sbin/sysctl net.bridge.bridge-allow-multiple-vlans
debug: bridge: init: multiple vlans allowed True
info: module mstpctl not loaded (module init failed: no /sbin/mstpctl found)
info: executing /bin/ip rule show
info: executing /bin/ip -6 rule show
info: address: using default mtu 1500
info: address: max_mtu undefined
info: executing /sbin/sysctl net.ipv6.conf.all.accept_ra
info: executing /sbin/sysctl net.ipv6.conf.all.autoconf
info: executing /usr/sbin/ip vrf id
info: mgmt vrf_context = False
debug: dhclient: dhclient_retry_on_failure set to 0
info: executing /bin/ip addr help
info: address metric support: OK
info: module ppp not loaded (module init failed: no /usr/bin/pon found)
info: module mstpctl not loaded (module init failed: no /sbin/mstpctl found)
info: module batman_adv not loaded (module init failed: no /usr/sbin/batctl found)
info: module openvswitch_port not loaded (module init failed: no /usr/bin/ovs-vsctl found)
info: module openvswitch not loaded (module init failed: no /usr/bin/ovs-vsctl found)
info: looking for user scripts under /etc/network
info: loading scripts under /etc/network/if-pre-up.d ...
info: loading scripts under /etc/network/if-up.d ...
info: loading scripts under /etc/network/if-post-up.d ...
info: loading scripts under /etc/network/if-pre-down.d ...
info: loading scripts under /etc/network/if-down.d ...
info: loading scripts under /etc/network/if-post-down.d ...
info: 'link_master_slave' is set. slave admin state changes will be delayed till the masters admin state change.
info: using mgmt iface default prefix eth
debug: reloading interface config ..
info: processing interfaces file /etc/network/interfaces
debug: processing sourced line ..'source /etc/network/interfaces.d/*'
debug: vmbr0: evaluating port expr '['enp7s0']'
info: reload: scheduling up on interfaces: ['lo', 'enp7s0', 'vmbr0']
debug: scheduling '['pre-up', 'up', 'post-up']' for ['lo', 'enp7s0', 'vmbr0']
debug: dependency graph {
        lo : []
        enp7s0 : []
        vmbr0 : ['enp7s0']
}
debug: graph roots (interfaces that dont have dependents): ['lo', 'vmbr0']
info: lo: running ops ...
debug: lo: pre-up : running module xfrm
debug: lo: pre-up : running module link
debug: lo: pre-up : running module bond
debug: lo: pre-up : running module vlan
debug: lo: pre-up : running module vxlan
debug: lo: pre-up : running module usercmds
debug: lo: pre-up : running module bridge
debug: lo: pre-up : running module bridgevlan
debug: lo: pre-up : running module tunnel
debug: lo: pre-up : running module vrf
debug: lo: pre-up : running module ethtool
debug: lo: pre-up : running module auto
debug: lo: pre-up : running module address
info: executing /sbin/sysctl net.mpls.conf.lo.input=0
info: executing /sbin/sysctl net.ipv6.conf.lo.accept_ra=0
info: executing /sbin/sysctl net.ipv6.conf.lo.autoconf=0
debug: lo: up : running module dhcp
debug: lo: up : running module address
debug: lo: up : running module addressvirtual
debug: lo: up : running module usercmds
debug: lo: up : running script /etc/network/if-up.d/postfix
info: executing /etc/network/if-up.d/postfix
debug: lo: up : running script /etc/network/if-up.d/ntpsec-ntpdate
info: executing /etc/network/if-up.d/ntpsec-ntpdate
debug: lo: post-up : running module usercmds
debug: lo: statemanager sync state pre-up
debug: vmbr0: found dependents ['enp7s0']
info: enp7s0: running ops ...
debug: enp7s0: pre-up : running module xfrm
debug: enp7s0: pre-up : running module link
debug: enp7s0: pre-up : running module bond
debug: enp7s0: pre-up : running module vlan
debug: enp7s0: pre-up : running module vxlan
debug: enp7s0: pre-up : running module usercmds
debug: enp7s0: pre-up : running module bridge
info: enp7s0: not enslaved to bridge vmbr0: ignored for now
debug: enp7s0: pre-up : running module bridgevlan
debug: enp7s0: pre-up : running module tunnel
debug: enp7s0: pre-up : running module vrf
debug: enp7s0: pre-up : running module ethtool
debug: enp7s0: pre-up : running module auto
debug: enp7s0: pre-up : running module address
info: executing /sbin/sysctl net.mpls.conf.enp7s0.input=0
debug: enp7s0: up : running module dhcp
debug: enp7s0: up : running module address
debug: enp7s0: up : running module addressvirtual
debug: enp7s0: up : running module usercmds
debug: enp7s0: up : running script /etc/network/if-up.d/postfix
info: executing /etc/network/if-up.d/postfix
debug: enp7s0: up : running script /etc/network/if-up.d/ntpsec-ntpdate
info: executing /etc/network/if-up.d/ntpsec-ntpdate
debug: enp7s0: post-up : running module usercmds
debug: enp7s0: statemanager sync state pre-up
info: vmbr0: running ops ...
debug: vmbr0: pre-up : running module xfrm
debug: vmbr0: pre-up : running module link
debug: vmbr0: pre-up : running module bond
debug: vmbr0: pre-up : running module vlan
debug: vmbr0: pre-up : running module vxlan
debug: vmbr0: pre-up : running module usercmds
debug: vmbr0: pre-up : running module bridge
info: vmbr0: bridge already exists
info: vmbr0: applying bridge settings
info: vmbr0: set bridge-fd 0 (cache 1500)
info: vmbr0: reset bridge-hashel to default: 4
info: vmbr0: reset bridge-hashmax to default: 512
info: reading '/sys/class/net/vmbr0/bridge/stp_state'
info: vmbr0: netlink: ip link set dev vmbr0 type bridge (with attributes)
debug: attributes: {1: 0, 26: 4, 27: 512}
debug: vmbr0: evaluating port expr '['enp7s0']'
info: writing '1' to file /proc/sys/net/ipv6/conf/enp7s0/disable_ipv6
info: executing /bin/ip -force -batch - [link set dev enp7s0 master vmbr0]
info: vmbr0: applying bridge port configuration: ['enp7s0']
info: vmbr0: applying bridge configuration specific to ports
info: vmbr0: processing bridge config for port enp7s0
info: executing /bin/ip -force -batch - [link set dev enp7s0 up]
debug: vmbr0: pre-up : running module bridgevlan
debug: vmbr0: pre-up : running module tunnel
debug: vmbr0: pre-up : running module vrf
debug: vmbr0: pre-up : running module ethtool
debug: vmbr0: pre-up : running module auto
debug: vmbr0: pre-up : running module address
info: executing /sbin/sysctl net.mpls.conf.vmbr0.input=0
info: vmbr0: netlink: ip link set dev vmbr0 address rr:ii:uu:bb:aa:xx
info: vmbr0: netlink: ip addr add 1.2.3.4/26 dev vmbr0
info: writing '0' to file /proc/sys/net/ipv4/conf/vmbr0/arp_accept
info: vmbr0: netlink: ip link set dev vmbr0 up
debug: vmbr0: up : running module dhcp
debug: vmbr0: up : running module address
info: executing /bin/ip route replace default via 1.2.3.193 proto kernel dev vmbr0 onlink
debug: vmbr0: up : running module addressvirtual
debug: vmbr0: up : running module usercmds
debug: vmbr0: up : running script /etc/network/if-up.d/postfix
info: executing /etc/network/if-up.d/postfix
debug: vmbr0: up : running script /etc/network/if-up.d/ntpsec-ntpdate
info: executing /etc/network/if-up.d/ntpsec-ntpdate
debug: vmbr0: post-up : running module usercmds
info: executing ip route add 1.2.3.1/26 via 1.2.3.193 dev vmbr0
info: executing ip route add 1.2.2.1/32 dev vmbr0
debug: vmbr0: statemanager sync state pre-up
debug: saving state ..
info: exit status 0


# Interfaces AFTER ifreload -a -d

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether rr:ii:uu:bb:aa:xx brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether rr:ii:uu:bb:aa:xx brd ff:ff:ff:ff:ff:ff
    inet 1.2.3.4/26 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::yyy:xxx.../64 scope link
       valid_lft forever preferred_lft forever


#######
# journalctl -u networking.service

-- Boot 472544a572114ac3bb34c6c281aab866 --
Jun 24 23:20:55 hv3 systemd[1]: Starting networking.service - Network initialization...
Jun 24 23:20:55 hv3 networking[750]: networking: Configuring network interfaces
Jun 24 23:21:05 hv3 systemd[1]: networking.service: start operation timed out. Terminating.
Jun 24 23:21:05 hv3 systemd[1]: networking.service: Main process exited, code=killed, status=15/TERM
Jun 24 23:21:35 hv3 systemd[1]: networking.service: State 'final-sigterm' timed out. Killing.
Jun 24 23:21:35 hv3 systemd[1]: networking.service: Killing process 830 (python3) with signal SIGKILL.
Jun 24 23:21:35 hv3 systemd[1]: networking.service: Failed with result 'timeout'.
Jun 24 23:21:35 hv3 systemd[1]: Failed to start networking.service - Network initialization.

Update:
Removing the script '/etc/network/if-up.d/ntpsec-ntpdate' actually fixes the interface loading issue. I read that there were changes, but what the heck does this script change so that all interfaces fail at boot time?

Update 2:
The issue is related to ntpsec. Once I purged the ntpsec package and installed chrony, booting works fine, even with the ntpsec-ntpdate script in place. I am still wondering what the actual issue is, though.
I guess this issue might be reason enough to switch completely to chrony on all systems. NTP/NTPsec had its share of issues in the past already, and this problem supports the switch.
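For anyone following along, the swap amounts to roughly this (a sketch; on Debian Bookworm the hook script ships with the ntpsec-ntpdate package):

Code:
apt purge ntpsec ntpsec-ntpdate   # removes the if-up.d hook as well
apt install chrony                # the PVE default time daemon
systemctl status chrony           # verify it is running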
 
Can I ask what ethernet daughter card or PCI card you have, and if it's in a bond?
Yes. The systems I am setting up all have Intel(R) Ethernet 10G 4P X710 SFP+ rNDC daughter cards and Intel(R) Ethernet Server Adapter X520-2 PCIe cards, for a total of 6 interfaces. All interfaces were connected by DAC cables to Cisco Nexus 5548UP switches. The switches were configured to leave eno1 (the first interface on the daughter card) as a normal VLAN access port and to bond the other 5 interfaces. The idea was to make it through the installer on eno1 as normal and then switch over to an openvswitch interfaces file implementing the bond later (see the sketch at the end of this post).

Based on your question, I removed all the bonding configuration on the Cisco for those extra 5 ports and tried the install again. Same result. I then tried with the proxmox-ve_7.4-1.iso with the same settings for the installer and it was successful. So it would seem that something changed in the 8.0 installer that causes issues with ZFS boot.
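For context, an Open vSwitch interfaces file implementing such a bond would look roughly like this - a sketch where the port names, address, and bond options are placeholders:

Code:
# requires the openvswitch-switch package
auto bond0
iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond
        ovs_bonds eno2 eno3 eno4 eno5 eno6
        ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        ovs_type OVSBridge
        ovs_ports bond0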
 
Regarding the question during the upgrade about changing the configuration in /etc/lvm/lvm.conf (yes/no): what is the right answer? I never touched that file by hand, but is it normal that it wants to replace all those lines starting with #?
 
Regarding the question during the upgrade about changing the configuration in /etc/lvm/lvm.conf (yes/no): what is the right answer? I never touched that file by hand, but is it normal that it wants to replace all those lines starting with #?

/etc/lvm/lvm.conf -> Changes relevant for Proxmox VE will be updated, and a newer config version might be useful.
If you did not make extra changes yourself and are unsure, it's suggested to choose "Yes" (install the package maintainer's version) here.
https://pve.proxmox.com/wiki/Upgrad..._system_to_Debian_Bookworm_and_Proxmox_VE_8.0
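For reference, the prompt in question is the standard dpkg conffile dialog; "D" shows the diff before you answer:

Code:
Configuration file '/etc/lvm/lvm.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation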
 
That hasn't been necessary so far, since I only put the good piece into service (with a current BIOS) a good 6 months ago.
BIOS updates are always a bit of a gamble with this kind of hardware. I usually go by the motto: no problems and/or hardware changes and no serious security vulnerabilities - then everything stays as it is ;)
 
I upgraded two machines in my advanced homelab setup without any issues. These are:

1) A server machine with a 2-socket Xeon 2967v2, 256 GiB RAM, 1x 1 TiB RAID1 and 1x 5 TiB RAID5 on an Adaptec 5085 controller, Asus Z9PE-D16 mainboard, Ivy Bridge architecture. I have 14 VMs on this.

2) A Minisforum JB95 with an Intel Celeron N5095 (4C) and 32 GiB RAM, 2x local SATA3 SSDs, booting Win11 and a Debian 12 XFCE desktop with Proxmox 8 added in a multiboot setup. I have 4 LXC containers and 3 VMs on this box.

Everything just works like a charm. I am very happy with this setup and with Proxmox, of course.

 
Update:
Removing the script '/etc/network/if-up.d/ntpsec-ntpdate' actually fixes the interface loading issue. I read that there were changes, but what the heck does this script change so that all interfaces fail at boot time?

Update 2:
The issue is related to ntpsec. Once I purged the ntpsec package and installed chrony, booting works fine, even with the ntpsec-ntpdate script in place. I am still wondering what the actual issue is, though.
I guess this issue might be reason enough to switch completely to chrony on all systems. NTP/NTPsec had its share of issues in the past already, and this problem supports the switch.


Thanks for this hint.
My old ProLiant DL360p Gen8 lab server did not come up after the upgrade to Proxmox 8. It just got stuck during the boot process. Booting in recovery mode worked, but there was no network.

Removing the ntpsec package as you described did the trick. I don't know why ntpsec was installed. Chrony is doing a better job, as far as I can tell from my experience. Maybe a Proxmox dev has more insight and can explain it. I would propose adding the findings from @DerDanilo to the Proxmox wiki.
 
Removing the ntpsec package as you described did the trick. I don't know why ntpsec was installed. Chrony is doing a better job, as far as I can tell from my experience. Maybe a Proxmox dev has more insight and can explain it.
We recommend, and also install, chrony by default since Proxmox VE 7.0; before that it was systemd-timesyncd for a while, which was deemed unsuitable for a server environment.

The issue with NTPsec, which is a relatively modern rewrite of ntp, is a bit weird though, as their script - while a bit dated - does the most relevant thing in the background (line 25 to the end is wrapped in ()&, i.e., a subshell executed in the background). Also, it did not change at all in the last 4 years. So I don't think NTPsec is directly to blame; more likely something changed that makes (probably) the udev+wait step fail. chrony has a similar (in intention) script, but with no udev hook/waiting.
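To illustrate the pattern described above (an illustration only, not the actual ntpsec script): the construct is a backgrounded subshell, so the if-up.d hook returns immediately while the slow work continues:

Code:
#!/bin/sh
# illustration (NOT the real ntpsec-ntpdate script): the slow work runs
# in a backgrounded subshell so the hook itself exits right away
(
    udevadm settle --timeout=30 || true   # wait for pending udev events
    ntpdate -b -s pool.ntp.org            # then step the clock
) &
exit 0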

even with the ntpsec-ntpdate script in place. I am still wondering what the actual issue is, though.

Well, the script checks quite early whether ntpsec is even still installed and exits early otherwise, so that's expected, as it is basically a no-op then.

For now, we have indeed added a pointer about this to the upgrade guide - many thanks for reporting this to the whole community!
 
You can upgrade normally; you don't need to worry.
After the upgrade you can simply install the systemd-boot package.

Got the same error message. How do you install systemd-boot? Can I just install it with 'apt install systemd-boot'? Do I need to configure anything after that?

Just want to make sure I do it correctly. BTW, all my VMs are not able to get IP addresses from the DHCP server anymore; not sure if it's related to this or not.
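For what it's worth, the steps from the quote amount to roughly this (a sketch; the status check is an extra sanity check of mine, not something mentioned in the quote):

Code:
apt install systemd-boot
proxmox-boot-tool status   # shows which bootloader(s) the ESPs are set up for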
 
Ran 'pve7to8 --full' on a 3-node Ceph Quincy cluster; no issues were found.

Both PVE and Ceph were upgraded, and 'pve7to8 --full' noted that a reboot was required.

After the reboot I got a "Ceph got timeout (500)" error.

"ceph -s" shows nothing.

No monitors, no managers, no mds.

Any suggestions on resolving this issue?

Corosync and Ceph are using a full-mesh broadcast network.
My next step was to re-create the monitors manually by disabling the service and removing the /var/lib/ceph/mon/<hostname> directory (see the sketch at the end of this post).

Then I ran 'pveceph mon create'. After a while it timed out. Running 'journalctl' on the failed monitor service shows the following:

Jun 25 13:29:03 pve-test-7-to-8 systemd[1]: Started ceph-mon@pve-test-7-to-8.service - Ceph cluster monitor daemon.
Jun 25 13:29:04 pve-test-7-to-8 ceph-mon[8161]: *** Caught signal (Illegal instruction) **
Jun 25 13:29:04 pve-test-7-to-8 ceph-mon[8161]: in thread 7fe8c0b1da00 thread_name:ceph-mon
Jun 25 13:29:04 pve-test-7-to-8 ceph-mon[8161]: ceph version 17.2.6 (810db68029296377607028a6c6da1ec06f5a2b27) quincy (stable)
Jun 25 13:29:04 pve-test-7-to-8 ceph-mon[8161]: 1: /lib/x86_64-linux-gnu/libc.so.6(+0x3bf90) [0x7fe8c11bdf90]
...
Jun 25 13:29:55 pve-test-7-to-8 ceph-mon[9402]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Jun 25 13:29:55 pve-test-7-to-8 systemd[1]: ceph-mon@pve-test-7-to-8.service: Main process exited, code=killed, status=4/ILL
Jun 25 13:29:55 pve-test-7-to-8 systemd[1]: ceph-mon@pve-test-7-to-8.service: Failed with result 'signal'.
Jun 25 13:30:05 pve-test-7-to-8 systemd[1]: ceph-mon@pve-test-7-to-8.service: Scheduled restart job, restart counter is at 6.
Jun 25 13:30:05 pve-test-7-to-8 systemd[1]: Stopped ceph-mon@pve-test-7-to-8.service - Ceph cluster monitor daemon.
Jun 25 13:30:05 pve-test-7-to-8 systemd[1]: ceph-mon@pve-test-7-to-8.service: Start request repeated too quickly.
Jun 25 13:30:05 pve-test-7-to-8 systemd[1]: ceph-mon@pve-test-7-to-8.service: Failed with result 'signal'.
Jun 25 13:30:05 pve-test-7-to-8 systemd[1]: Failed to start ceph-mon@pve-test-7-to-8.service - Ceph cluster monitor daemon.

This seems to point to a corrupt binary, a bad build, or something else. No idea.

Going to do a clean install of Proxmox 8 and see if I get the same error when manually creating the monitors.

I'm testing the migration on a test cluster that does have quorum, and the nodes can ping each other.

This is why you always test before pushing it to production.
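For reference, the monitor re-creation steps described above amount to roughly this sketch (note the mon data directory usually carries a 'ceph-' cluster-name prefix, e.g. /var/lib/ceph/mon/ceph-<hostname>):

Code:
# stop and disable the broken monitor on this node
systemctl disable --now ceph-mon@$(hostname).service
# move the mon data store out of the way instead of deleting it outright
mv /var/lib/ceph/mon/ceph-$(hostname) /root/ceph-mon-backup
# re-create the monitor through the PVE tooling
pveceph mon create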
 
The update went absolutely smoothly on my 12th-gen NUC (i7).
No problems so far; everything runs as it should.
I just updated my NUC (12 Pro i3) including the current BIOS (https://www.intel.com/content/www/us/en/download/739909/bios-update-wsadl357.html). The VMs (Debian 12 and OPNsense 23.1) now also use x86-64-v2-AES.
Everything worked great. It runs stably and without errors. Thanks to the community and the Proxmox team - you do fantastic work!
 
My monitor keeps flickering and does not proceed after selecting Graphical install for proxmox-ve_8.0-2. I tried console mode; it's stuck on a blank screen. The Proxmox 7 ISO works, so it seems something broke in 8.0? My system is an HP Z840.

Update: Selecting nomodeset in the advanced installation screen, and also adding it to GRUB after installation, fixed this. (The login screen after install was not showing either; luckily SSH was working, so I used that to update /etc/default/grub.)
I have a customer with the same problem on old HP G5 / G2 machines.
https://bugzilla.proxmox.com/show_bug.cgi?id=4794

nomodeset fixes it too. (But thanks for the note about setting it in GRUB - I think the installer should indeed be updated to add it to GRUB automatically.)
 
