Proxmox VE 8.0 released!

Great stuff guys!

Just tried to install on an ancient X9DRi-LN4+/X9DR3-LN4+ Supermicro board; I could get neither the text installer nor the GUI installer to work without using the debug options and nomodeset. The monitors complained about the resolution.
 
Is it possible to install one v8 node and bring it into a cluster that is all 7.3? This way, I could migrate VMs away from the 7.3 hosts to upgrade those as well.
 
Is it possible to install one v8 node and bring it into a cluster that is all 7.3? This way, I could migrate VMs away from the 7.3 hosts to upgrade those as well.
I'm no expert, but I would guess so.
I was able to temporarily run 1-2 v8 nodes in my v7 cluster whilst I was upgrading each.
 
@martin @tom @fiona @t.lamprecht
What about two-step upgrade process recommended by Debian?
Shouldn't Proxmox also follow Debian's recommendation or is a one step upgrade required/recommended?

4.4.5. Minimal system upgrade

In some cases, doing the full upgrade (as described below) directly might remove large numbers of packages that you will want to keep. We therefore recommend a two-part upgrade process: first a minimal upgrade to overcome these conflicts, then a full upgrade as described in Section 4.4.6, “Upgrading the system”.
To do this, first run:
# apt upgrade --without-new-pkgs

This has the effect of upgrading those packages which can be upgraded without requiring any other packages to be removed or installed.
The minimal system upgrade can also be useful when the system is tight on space and a full upgrade cannot be run due to space constraints.
If the apt-listchanges package is installed, it will (in its default configuration) show important information about upgraded packages in a pager after downloading the packages. Press q after reading to exit the pager and continue the upgrade.

4.4.6. Upgrading the system

Once you have taken the previous steps, you are now ready to continue with the main part of the upgrade. Execute:
# apt full-upgrade
https://www.debian.org/releases/stable/amd64/release-notes/ch-upgrading.html
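As a shell sketch, the two-step sequence from the Debian release notes looks like this (standard apt commands, run as root; adapt to your own repositories):

```shell
# Refresh package lists for the new release first
apt update

# Step 1: minimal upgrade - only packages that can be upgraded
# without installing or removing anything else
apt upgrade --without-new-pkgs

# Optional: preview what the full upgrade would do (simulation only)
apt -s full-upgrade

# Step 2: full upgrade - may install/remove packages as needed
apt full-upgrade
```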
 
I notice the latest release is 8.0.2. Does this mean that if I upgrade now, there will be many more updates to apply later, which would be very disruptive?

It seems like waiting would be more prudent?
 
Upgraded my 3 nodes one at a time with no issues, just followed the guide!

I do however have a Warning in Ceph.
Code:
Module 'restful' has failed dependency: PyO3 modules may only be initialized once per interpreter process

I'm new to Ceph and have only recently installed it and set up a pool. I currently have nothing actually installed on or using the Ceph storage, so I'm not sure whether this is a result of the upgrade or of my original install.
 
pve7to8 needs to check for installed drivers; I had amdgpu installed, which blocked the system from booting after the upgrade.

Also, the network interface names changed from enp97s0f0 and enp97s0f1 to enp98s0f0 and enp97s0f1. No idea why, but I had to go into rescue mode and manually change the bridge interface settings.

P.S. Not 100% certain, but that also seems to have resulted in a corrupt AMD GPU state: when I rebooted the system that had failed to boot the 6.12 kernel due to the amdgpu drivers, it failed to POST due to a bad PCIe state?...
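One way to guard against such renames (a sketch, not something from the post; the MAC address, file name, and interface name below are placeholders) is a systemd .link file that pins the NIC name to its MAC address:

```shell
# Example /etc/systemd/network/10-lan0.link - pins the NIC with the
# given MAC to the fixed name "lan0", independent of PCI enumeration
cat <<'EOF' > /etc/systemd/network/10-lan0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0
EOF

# Rebuild the initramfs so the naming also applies in early boot,
# then reference "lan0" as bridge-port in /etc/network/interfaces
update-initramfs -u -k all
```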
 
Upgraded my 3 nodes one at a time with no issues, just followed the guide!

I do however have a Warning in Ceph.
Code:
Module 'restful' has failed dependency: PyO3 modules may only be initialized once per interpreter process

I'm new to Ceph and have only recently installed it and set up a pool. I currently have nothing actually installed on or using the Ceph storage, so I'm not sure whether this is a result of the upgrade or of my original install.
Do you have the ceph-dashboard enabled? (It seems to be incompatible with the new Python lib pyo3 0.17 on Debian 12.)
 
Do you have the ceph-dashboard enabled? (It seems to be incompatible with the new Python lib pyo3 0.17 on Debian 12.)
Ah, yes, I do.
I did try to disable it, but that didn't clear the warning. Will removing it clear the warning?
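If disabling it in the GUI didn't help, the same can be tried on the CLI (standard Ceph mgr commands; whether this clears the warning in this PyO3 case is a guess):

```shell
# Disable the mgr modules that pull in the Python/PyO3 stack
ceph mgr module disable dashboard
ceph mgr module disable restful

# Restart the manager daemons so the failed-dependency state is dropped
systemctl restart ceph-mgr.target

# Check whether the health warning is gone
ceph health detail
ceph mgr module ls
```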
 
This warning says that your system uses proxmox-boot-tool for booting (which is the case for systems with '/' on ZFS installed by the PVE installer).
In the UEFI case the system uses systemd-boot for booting - see [0]. Until Bullseye, systemd-boot was part of the main systemd package; with Bookworm it became a package of its own (systemd-boot). Since the new package installs hooks and automatically installs systemd-boot on a mounted ESP (which is the case for systems installed with LVM), we did not pull it in unconditionally upon upgrade.
With proxmox-boot-tool, the systemd-boot package itself is only needed when you initialize new ESPs (e.g. when replacing a faulted disk in your ZFS RAID [1]), so your system remains bootable even without systemd-boot installed. You can simply install systemd-boot and the warning will vanish.

I hope this explains it!

[0] https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot
[1] https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_zfs_administration

I am also seeing this warning. I'm quite new to Proxmox and homelabs. Would you mind elaborating further? I don't quite understand what you mean and what I am supposed to do. I updated everything else before noticing the warning; should I be worried?

Thanks in advance
 
I went through the graphical installer using the proxmox-ve_8.0-2.iso image mounted as a virtual CD to a Dell R740xd via iDRAC9. Everything seemed fine as I specified a pair of disks (the last pair out of 26 SATA SSDs on the HBA) to set up as ZFS RAID1 and configured the network. The installer ran through to the end without error and rebooted to "EFI stub: Loaded initrd from LINUX_EFI_INITRD_MEDIA_GUID device path" on a black screen. I tried a few times with/without auto-reboot after install and with/without disconnecting the virtual media before rebooting. No change. I also tried on a second identical host with the same result. I then tried just installing to the first disk without ZFS, and that booted (albeit now with a blue GRUB screen appearing instead of a black systemd-boot(?) one).

I have 5 of these identical systems and am in the middle of commissioning a new cluster, so I have already installed PVE 7.4 using ZFS RAID1 on two of them without issue, and then found the PVE 8.0 release this evening, so I thought I would try it out. Hope this helps to identify problems. Let me know if I should do something else or start a new thread or bug report. My subscriptions have not been activated yet as I'm still in setup and test mode.
 
I went through the graphical installer using the proxmox-ve_8.0-2.iso image mounted as a virtual CD to a Dell R740xd via iDRAC9. Everything seemed fine as I specified a pair of disks (the last pair out of 26 SATA SSDs on the HBA) to set up as ZFS RAID1 and configured the network. The installer ran through to the end without error and rebooted to "EFI stub: Loaded initrd from LINUX_EFI_INITRD_MEDIA_GUID device path" on a black screen. I tried a few times with/without auto-reboot after install and with/without disconnecting the virtual media before rebooting. No change. I also tried on a second identical host with the same result. I then tried just installing to the first disk without ZFS, and that booted (albeit now with a blue GRUB screen appearing instead of a black systemd-boot(?) one).

I have 5 of these identical systems and am in the middle of commissioning a new cluster, so I have already installed PVE 7.4 using ZFS RAID1 on two of them without issue, and then found the PVE 8.0 release this evening, so I thought I would try it out. Hope this helps to identify problems. Let me know if I should do something else or start a new thread or bug report. My subscriptions have not been activated yet as I'm still in setup and test mode.
Can I ask what ethernet daughter card or PCI card you have, and whether it's in a bond?
 
What about two-step upgrade process recommended by Debian?
If you want to use that, I'd recommend testing it first in an unimportant setup (e.g., a virtual Proxmox VE instance roughly mirroring what your real setup looks like). Debian's recommendation for some cases, as they write, is a relatively safe thing to do in general, but it's definitely not a must nor a recommendation for PVE hosts, among other things because we did not test that approach extensively. Note that for the actual upgrade you must NOT use the apt upgrade command, but full-upgrade (or its alias dist-upgrade)!

FWIW, it won't gain server systems like Proxmox VE much. We see the Proxmox projects as their own distros, as most of the core packages come directly from us, and we test upgrades extensively under the conditions and special needs that our hypervisor stack brings with it.
Shouldn't Proxmox also follow Debian's recommendation or is a one step upgrade required/recommended?
A one step upgrade is not a hard requirement, but what we recommend, especially if unsure, and what we tested most.
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8#In-place_upgrade
 
I notice the latest release is 8.0.2. Does this mean that if I upgrade now, there will be many more updates to apply later, which would be very disruptive?
Yes, we will continue to ship updates for the whole release cycle of Proxmox VE 8, just like we did for Proxmox VE 7, continuously fixing bugs and providing more features.
It seems like waiting would be more prudent?
Maybe, but then you'd practically wait until Proxmox VE 8 is EOL in roughly three years, as only then will we stop releasing updates for this release ;).
If you want to be prudent and get the most stable updates then use the enterprise repositories via a subscription.
 
I am also seeing this warning. I'm quite new to Proxmox and homelabs. Would you mind elaborating further? I don't quite understand what you mean and what I am supposed to do. I updated everything else before noticing the warning; should I be worried?

You can upgrade normally, you don't need to be worried.
After the upgrade you can simply install the systemd-boot package.

On installation, a version of the systemd-boot EFI binary was copied over to the ESP (EFI System Partition), so you can still upgrade without any worries. Installing the systemd-boot package just ensures that you get future updates to that binary and that you can initialize new drives as boot devices, for example if one disk of an rpool root mirror needs to be replaced in the future.
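For such a system, the concrete steps could look like this (the partition path is a placeholder for illustration):

```shell
# Install the now-separate systemd-boot package; this also clears
# the corresponding pve7to8 warning
apt install systemd-boot

# Show which ESPs proxmox-boot-tool manages and how they boot
proxmox-boot-tool status

# Only needed later, e.g. after replacing a mirror disk:
# initialize the new disk's ESP (device path is an example)
# proxmox-boot-tool format /dev/sdX2
# proxmox-boot-tool init /dev/sdX2
```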
 
Hi!

I have had connection problems since upgrading Proxmox 7.4 > 8.0

Code:
2023-06-24T17:23:07.249280+02:00 proxmox kernel: [ 3309.419261] r8169 0000:02:00.0 enp2s0: rtl_ephyar_cond == 1 (loop: 100, delay: 10).
2023-06-24T17:23:07.249285+02:00 proxmox kernel: [ 3309.420982] r8169 0000:02:00.0 enp2s0: rtl_ephyar_cond == 1 (loop: 100, delay: 10).
2023-06-24T17:23:07.253285+02:00 proxmox kernel: [ 3309.422741] r8169 0000:02:00.0 enp2s0: rtl_ephyar_cond == 1 (loop: 100, delay: 10).
2023-06-24T17:23:07.281274+02:00 proxmox kernel: [ 3309.452104] r8169 0000:02:00.0 enp2s0: rtl_eriar_cond == 1 (loop: 100, delay: 100).
2023-06-24T17:23:07.313268+02:00 proxmox kernel: [ 3309.482136] r8169 0000:02:00.0 enp2s0: rtl_eriar_cond == 1 (loop: 100, delay: 100).
2023-06-24T17:23:07.341302+02:00 proxmox kernel: [ 3309.512139] r8169 0000:02:00.0 enp2s0: rtl_eriar_cond == 1 (loop: 100, delay: 100).

No crash, just no connection! If I reboot, it's OK, but the connection is lost again after some time.
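A few standard diagnostics that could help narrow this down (generic tools, not a known fix; device names taken from the log above). Some r8169 problems on newer kernels are also reported to be worked around by disabling PCIe power saving, but whether that applies here is a guess:

```shell
# Which driver and firmware is the NIC using?
ethtool -i enp2s0
lspci -nnk -s 02:00.0

# Follow the kernel log while reproducing the connection loss
dmesg -wT | grep -i r8169

# Possible workaround to test: disable PCIe ASPM via the kernel
# command line (in /etc/default/grub), then update-grub and reboot
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off"
```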
 
I have a very weird networking issue that I cannot solve at the moment. Everything was working fine with this network config in PVE 6+7 but fails in 8.

The weird thing is that the network seems to work fine after booting up and then running 'ifreload -a' manually (with some messages).
There are some warning messages, but they should not cause this kind of problem.

This machine is a Hetzner AX41-NVMe.
My current workaround is a low timeout for networking.service and a cron job triggered after boot that runs 'ifreload -a'.

Below is the interfaces config that already causes the issues. So, what is wrong here?

Code:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

iface lo inet6 loopback

auto enp7s0
iface enp7s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 1.2.3.4/26
        gateway 1.2.3.193
        bridge-ports enp7s0
        bridge-stp off
        bridge-fd 0
        hwaddress rr:ii:uu:bb:aa:xx
        up ip route add 1.2.3.1/26 via 1.2.3.193 dev vmbr0
        up ip route add 1.2.2.1/32 dev vmbr0
#VMs

iface vmbr0 inet6 static
        address 2a01:yyy:xxx:fa00::2/64
        gateway fe80::1
        up ip -6 route add 2a01:yyy:xxx:fa00::/56 via 2a01:yyy:xxx:fa00::3

Error messages during boot:

[screenshot attachment: Screenshot 2023-06-24 180236.jpg]

While running 'ifreload -a':

[screenshot attachment: Screenshot 2023-06-24 174846.jpg]
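To see which stanza ifupdown2 actually trips over, its check and debug modes may help (standard ifupdown2/systemd tools; just a debugging sketch, not a fix):

```shell
# Compare the running state against /etc/network/interfaces
ifquery --check -a

# Re-apply the config with debug output to find the failing step
ifreload -a -d

# Inspect what networking.service logged during boot
journalctl -b -u networking.service
```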

Thanks!
 
