PVE 8 to 9 In-place Upgrade

gfngfn256

I’d like to chronicle my PVE 8-to-9 upgrade experience, which due to the holidays only began yesterday. Maybe others will gain insight from this, or be able to offer further insight to me on how to correct things / do better etc.

Let me start by thanking all members of this forum for their continuous, invaluable input, but mostly I’d like to thank the absolutely awesome Proxmox team for their great product & their timely care & attention to detail.

I started the update on a single home server (non-cluster) so as to get my feet wet! It runs on a non-ZFS plain LVM setup (NVMe), plus a single SSD formatted ext4 & set up as Directory storage in PVE – fully updated on no-subscription.

I then powered down the server & took a dd image of the main host drive (I always do this before any major update) – for, as you know, there is no built-in full PVE host backup, which really should be mandatory for such an upgrade.
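For anyone curious, the imaging step is roughly the following (device & target paths are just examples for my hardware – adjust to yours, & run it from a live/rescue environment so the source disk isn’t in use):
Code:
# whole-disk image of the PVE host drive to an external/backup mount
dd if=/dev/nvme0n1 of=/mnt/usb-backup/pve-host-$(date +%F).img bs=4M status=progress conv=fsync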

Opened the wiki (https://pve.proxmox.com/wiki/Upgrade_from_8_to_9) in a browser.

Running pve8to9 --full produced 3 warnings & no failures:

WARN: 1 running guest(s) detected - consider migrating or stopping them.
– I stopped this VM.

WARN: Deprecated config '/etc/sysctl.conf' contains settings - move them to a dedicated file in '/etc/sysctl.d/'.
– These settings are for IPv4/6 forwarding (used in an LXC), so I decided to keep the file as a backup for later use & issued: mv /etc/sysctl.conf /etc/sysctl.conf.bak (this may have been a bad choice – see below on the LXC issue).
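For reference, the cleaner route the warning is asking for would be a dedicated drop-in file (the file name below is just an example; the values are my forwarding settings):
Code:
cat > /etc/sysctl.d/99-forwarding.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
EOF
sysctl --system   # re-read all files under /etc/sysctl.d/ without a reboot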

WARN: The matching CPU microcode package 'intel-microcode' could not be found! Consider installing it to receive the latest security and bug fixes for your CPU.
Ensure you enable the 'non-free-firmware' component in the apt sources and run:
apt install intel-microcode
– I chose to ignore this; I’ve never bothered with microcode at home unless it was necessary.

I then changed to the correct apt sources for Trixie etc., messing for the first time with the new deb822 format (I actually like it – much better than those one-liners!). As the sources on a system that is a number of years old get rather convoluted & generally messy, I did some overdue housekeeping here. Checked apt update & apt policy – all good, eventually.
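For anyone who hasn’t met deb822 yet, my no-subscription entry ended up looking roughly like this (file name & fields as per the upgrade wiki; adjust to your repositories):
Code:
# /etc/apt/sources.list.d/proxmox.sources
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg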

Then, in my SSH session, I opened a tmux terminal (in case I lose the connection during the upgrade – something I do by habit for any lengthy command that has the potential of going south midway!) & began the apt dist-upgrade, which chugged along nicely.
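In case it helps anyone, the sequence was simply something like:
Code:
tmux new -s upgrade    # if the SSH session drops, reattach later with: tmux attach -t upgrade
apt update
apt dist-upgrade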

For the following files I was given the choice of keeping the currently-installed version or installing the package maintainer's newer version:

/etc/chrony/chrony.conf
/etc/nut/nut.conf
/etc/nut/upsmon.conf
/etc/nut/ups.conf
/etc/smartd.conf
/etc/nut/upsd.conf
/etc/nut/upsd.users
/etc/lvm/lvm.conf (never made changes – but I’m guessing Proxmox may have, see the above wiki).

I chose Yes to all the above (get the newer version), as I have copies of all my changes & can easily recreate them – as I eventually did.

I then ran pve8to9 --full again & received this:

FAIL: systemd-boot meta-package installed this will cause issues on upgrades of boot-related packages. Install 'systemd-boot-efi' and 'systemd-boot-tools' explicitly and remove 'systemd-boot'

I researched this a little & then did (hopefully the right choice, but probably not?):

apt remove systemd-boot & checked that the other 2 packages were installed (they were).
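In command form, that was roughly:
Code:
dpkg -l 'systemd-boot*'                           # see what is actually installed
apt install systemd-boot-efi systemd-boot-tools   # make sure these two are explicitly installed
apt remove systemd-boot                           # remove only the meta-package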

Then came the moment of truth! Yes, the dreaded reboot was issued & yup, it failed – booted into BIOS! At that moment I kicked myself – dreading a complete reimaging of the host drive (dd above).

I then decided that if I would eventually have to reimage the host drive anyway, I might as well grab the PVE9 ISO & try installing, to get the lay of the land. After booting the server with the PVE9 ISO, another thought struck me: try the rescue mode in Advanced Options, to see if I could coax some life into my server. Surprise, surprise – it fully booted, including the Home Assistant VM set to run at boot!

I then ran:
Code:
umount /boot/efi

proxmox-boot-tool reinit

Removed the ISO – & yes, REBOOT SUCCESS! So I finally have a running PVE9 node at home.

I then proceeded to change the settings in the above mentioned files as required (chrony, nut etc.).

All went successfully except for CUPS which absolutely flooded the journal/system log with:

Code:
kernel: audit: type=1400 audit(1755543458.357:3138): apparmor="DENIED" operation="create" class="net" info="failed protocol match" error=-13 profile="/usr/sbin/cupsd" pid=17322 comm="cupsd" family="unix" sock_type="stream" protocol=0 requested="create" denied="create" addr=none

With a pointer from the above wiki, & a lot of messing around, I eventually edited /etc/apparmor.d/usr.sbin.cupsd & /etc/apparmor.d/usr.sbin.cups-browsed, adding abi <abi/3.0>, at the beginning, & then issued apparmor_parser -r for both profiles.
This seems to have quieted down the logs, although they remain noisy if a print job is sent while the printer is turned off. I’ll have to look into that sometime.
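Concretely, the change was roughly this (reloading the profiles is what actually stops the flood):
Code:
# add 'abi <abi/3.0>,' as the first non-comment line of both
# /etc/apparmor.d/usr.sbin.cupsd and /etc/apparmor.d/usr.sbin.cups-browsed, then reload:
apparmor_parser -r /etc/apparmor.d/usr.sbin.cupsd
apparmor_parser -r /etc/apparmor.d/usr.sbin.cups-browsed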

I tested all my running VMs & LXCs & all seems well. I then performed backups of them all as a test, restored a Debian server Cloud-init template (VM) & did a full clone, which worked flawlessly (I was slightly, but pleasantly, surprised!).

The only exception is a Tailscale LXC (on Alpine) that I have, which booted fine but would not work as a subnet router / with accept-routes (used to access other clients on the network, see https://tailscale.com/kb/1019/subnets ). This is probably a Debian Trixie issue with the sysctl.conf being removed. I then tried cp sysctl.conf.bak sysctl.d/99-sysctl.conf & a lot of other messing around, and even though it appears that IPv4/6 forwarding is active on the host, it still would not work.
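For reference, this is roughly how I checked forwarding on the host:
Code:
sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding   # both report 1 here
sysctl --system                                           # re-apply everything under /etc/sysctl.d/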

I subsequently spun up a fresh Debian 12 server VM (from the above cloud-init template VM), installed Tailscale on it & it worked the first time. I usually prefer LXCs for these lighter tasks (my original Alpine LXC for Tailscale, which has worked for years, backs up to about 20MB!) but I’ll have to make do for the present.

In general, I find PVE9 snappier in the GUI vs PVE8, including a faster boot, and can confirm that the package/core temps of the host are lower, mirrored by an average of 18W vs 20W power draw (as monitored from Home Assistant).

So again kudos to the awesome team – for this magnificent (not uneventful) ride.



On a sidenote: when running:
Code:
~# apt autoremove --dry-run

REMOVING:
  libabsl20220623    libpython3.11  mokutil
  libavif15          libqpdf29      proxmox-kernel-6.8.12-12-pve-signed
  libdav1d6          librav1e0      shim-helpers-amd64-signed
  libpaper1          libsvtav1enc1  shim-signed
  libpoppler-cpp0v5  libutempter0   shim-signed-common
  libpoppler126      libx265-199    shim-unsigned

Summary:
  Upgrading: 0, Installing: 0, Removing: 18, Not Upgrading: 0
Remv libabsl20220623 [20220623.1-1+deb12u2]
Remv libavif15 [0.11.1-1+deb12u1]
Remv libdav1d6 [1.0.0-2+deb12u1]
Remv libpaper1 [1.1.29]
Remv libpoppler-cpp0v5 [22.12.0-2+deb12u1]
Remv libpoppler126 [22.12.0-2+deb12u1]
Remv libpython3.11 [3.11.2-6+deb12u6]
Remv libqpdf29 [11.3.0-1+deb12u1]
Remv librav1e0 [0.5.1-6]
Remv libsvtav1enc1 [1.4.1+dfsg-1]
Remv libutempter0 [1.2.1-4]
Remv libx265-199 [3.5-2+b1]
Remv shim-signed [1.47+pmx1+15.8-1+pmx1]
Remv shim-signed-common [1.47+pmx1+15.8-1+pmx1]
Remv mokutil [0.7.2-1]
Remv proxmox-kernel-6.8.12-12-pve-signed [6.8.12-12]
Remv shim-helpers-amd64-signed [1+15.8+1+pmx1]
Remv shim-unsigned [15.8-1+pmx1]

I’m not sure why the upgrade installer doesn’t do this automatically, & also remove all older kernels before 6.14.x, which as far as I know cannot be run on PVE9 anyway. Maybe there is a way of reverting back to PVE8 (with a pve9to8 script ;))!
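For what it’s worth, the leftover kernels can be inspected & cleaned up manually, e.g.:
Code:
proxmox-boot-tool kernel list   # show the kernels currently kept/synced by proxmox-boot-tool
apt autoremove --purge          # drop the now-unneeded 6.8 kernel & the other packages listed above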
 
Thanks for the feedback and the report!
I then ran pve8to9 --full again & received this:

FAIL: systemd-boot meta-package installed this will cause issues on upgrades of boot-related packages. Install 'systemd-boot-efi' and 'systemd-boot-tools' explicitly and remove 'systemd-boot'

I researched this a little & then did (hopefully the right choice, but probably not?):

apt remove systemd-boot & checked that the other 2 packages were installed (they were).

Then came the moment of truth! Yes, the dreaded reboot was issued & yup, it failed – booted into BIOS! At that moment I kicked myself – dreading a complete reimaging of the host drive (dd above).
We improved the check for the boot-loader issues (but it's still not available on our public repositories) - https://lore.proxmox.com/pve-devel/20250814120807.2653672-1-s.ivanov@proxmox.com/T/#t

I'd guess that this version should have caught the issue you ran into - but to make sure we're not missing another edge-case - could you please post:
* /var/log/apt/term.log (or the rotated variant that covers the dist-upgrade to 9)
* `proxmox-boot-tool status`
* `dmesg |grep secure` # this one to see if the system is using secureboot
* `mount |grep -i efi`
* `dpkg -l |grep -ei 'systemd-boot|grub'`

Thanks!
 
* /var/log/apt/term.log (or the rotated variant that covers the dist-upgrade to 9)
I have attached the /var/log/apt/term.log which I have edited to start with the actual update that took place on 2025-08-18 & also redacted the node name to 'REDACTED'.

* `proxmox-boot-tool status`
Here:
Code:
~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
0EB4-B1FB is configured with: grub (versions: 6.14.8-2-pve, 6.8.12-13-pve)

* `dmesg |grep secure` # this one to see if the system is using secureboot
No output. This is to be expected as secure-boot is disabled in BIOS.

* `mount |grep -i efi`
Here:
Code:
~# mount |grep -i efi
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
/dev/nvme0n1p2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)

* `dpkg -l |grep -ei 'systemd-boot|grub'`
That should probably be grep -Ei instead of grep -ei. So here:
Code:
~# dpkg -l | grep -Ei 'systemd-boot|grub'
ii  grub-common                          2.12-9+pmx2                         amd64        GRand Unified Bootloader (common files)
ii  grub-efi-amd64                       2.12-9+pmx2                         amd64        GRand Unified Bootloader, version 2 (EFI-AMD64 version)
ii  grub-efi-amd64-bin                   2.12-9+pmx2                         amd64        GRand Unified Bootloader, version 2 (EFI-AMD64 modules)
ii  grub-efi-amd64-unsigned              2.12-9+pmx2                         amd64        GRand Unified Bootloader, version 2 (EFI-AMD64 images)
rc  grub-pc                              2.06-13+pmx1                        amd64        GRand Unified Bootloader, version 2 (PC/BIOS version)
ii  grub-pc-bin                          2.12-9+pmx2                         amd64        GRand Unified Bootloader, version 2 (PC/BIOS modules)
ii  grub2-common                         2.12-9+pmx2                         amd64        GRand Unified Bootloader (common files for version 2)
rc  systemd-boot                         257.7-1+pmx2                        amd64        simple UEFI boot manager - integration and services
ii  systemd-boot-efi:amd64               257.7-1                             amd64        simple UEFI boot manager - EFI binaries
ii  systemd-boot-tools                   257.7-1                             amd64        simple UEFI boot manager - tools

I would also like to add the output of efibootmgr -v, which seems to now include more data than before:
Code:
~# efibootmgr -v
BootCurrent: 0000
Timeout: 1 seconds
BootOrder: 0000,0002
Boot0000* proxmox       HD(2,GPT,9cbeef3c-23f6-4dec-89ca-dc71a7bf1903,0x800,0x200000)/File(\EFI\proxmox\grubx64.efi)
      dp: 04 01 2a 00 02 00 00 00 00 08 00 00 00 00 00 00 00 00 20 00 00 00 00 00 3c ef be 9c f6 23 ec 4d 89 ca dc 71 a7 bf 19 03 02 02 / 04 04 36 00 5c 00 45 00 46 00 49 00 5c 00 70 00 72 00 6f 00 78 00 6d 00 6f 00 78 00 5c 00 67 00 72 00 75 00 62 00 78 00 36 00 34 00 2e 00 65 00 66 00 69 00 00 00 / 7f ff 04 00
Boot0002* UEFI OS       HD(2,GPT,9cbeef3c-23f6-4dec-89ca-dc71a7bf1903,0x800,0x200000)/File(\EFI\BOOT\BOOTX64.EFI)0000424f
      dp: 04 01 2a 00 02 00 00 00 00 08 00 00 00 00 00 00 00 00 20 00 00 00 00 00 3c ef be 9c f6 23 ec 4d 89 ca dc 71 a7 bf 19 03 02 02 / 04 04 30 00 5c 00 45 00 46 00 49 00 5c 00 42 00 4f 00 4f 00 54 00 5c 00 42 00 4f 00 4f 00 54 00 58 00 36 00 34 00 2e 00 45 00 46 00 49 00 00 00 / 7f ff 04 00
    data: 00 00 42 4f
Compare to this post of mine. I suspect that my reboot issue on the PVE9 upgrade may be linked to the issue described in that thread, so maybe take a look at it.

Thanks.
 


Thanks for the information!

~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
0EB4-B1FB is configured with: grub (versions: 6.14.8-2-pve, 6.8.12-13-pve)
Your system is using proxmox-boot-tool – in that case, keeping the ESP mounted at /boot/efi might be problematic / the cause of the failing boot:
~# mount |grep -i efi
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
/dev/nvme0n1p2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
I'd probably remove the /boot/efi mount from /etc/fstab & run proxmox-boot-tool reinit to get the system into a consistent state!
(Keep in mind that changing the boot-loader configuration can lead to an unbootable system - so plan for a bit of downtime.)
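Roughly (adapt the fstab line to your ESP):
Code:
# comment out (or remove) the '/boot/efi' line in /etc/fstab, then:
umount /boot/efi
proxmox-boot-tool reinit
proxmox-boot-tool status   # verify the ESP is now handled by proxmox-boot-tool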

I hope this helps!
 
Hi everybody.
@gfngfn256 I faced a similar issue during today's update, and you definitely saved my life!
My setup is similar: one node non-HA, ZFS mirror boot pool.

After issuing the pve8to9 --full cmd I received the same warning...
Code:
FAIL: systemd-boot meta-package installed this will cause issues on upgrades of boot-related packages.
Install 'systemd-boot-efi' and 'systemd-boot-tools' explicitly and remove 'systemd-boot'

...and after reading the whole post about your (unlucky) experience, I decided NOT to remove the systemd-boot package after the apt dist-upgrade step, and rather to reboot immediately. The system booted as expected into PVE9.

However, the warning is still there; I'm not really sure now whether my current setup is actually correct and, if not, what I'm supposed to do to fix it.

To give some context, I'll paste the output of the commands suggested by @Stoiko Ivanov:

Code:
$> proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
6B8D-1D9B is configured with: grub (versions: 6.14.8-2-pve, 6.8.12-13-pve)
6B8D-7C65 is configured with: grub (versions: 6.14.8-2-pve, 6.8.12-13-pve)

Code:
$> dmesg |grep secure
[    0.000000] secureboot: Secure boot enabled
[    0.011017] secureboot: Secure boot enabled

Code:
$> mount |grep -i efi
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)

Code:
$> dpkg -l |grep -Ei 'systemd-boot|grub'
ii  grub-common                          2.12-9+pmx2                     amd64        GRand Unified Bootloader (common files)
ii  grub-efi-amd64                       2.12-9+pmx2                     amd64        GRand Unified Bootloader, version 2 (EFI-AMD64 version)
ii  grub-efi-amd64-bin                   2.12-9+pmx2                     amd64        GRand Unified Bootloader, version 2 (EFI-AMD64 modules)
ii  grub-efi-amd64-signed                1+2.12+9+pmx2                   amd64        GRand Unified Bootloader, version 2 (amd64 UEFI signed by Debian)
ii  grub-efi-amd64-unsigned              2.12-9+pmx2                     amd64        GRand Unified Bootloader, version 2 (EFI-AMD64 images)
ii  grub-pc-bin                          2.12-9+pmx2                     amd64        GRand Unified Bootloader, version 2 (PC/BIOS modules)
ii  grub2-common                         2.12-9+pmx2                     amd64        GRand Unified Bootloader (common files for version 2)
ii  proxmox-grub                         2.12-9+pmx2                     amd64        Empty package to ensure Proxmox Grub packages are installed
ii  systemd-boot                         257.7-1+pmx2                    amd64        simple UEFI boot manager - integration and services
ii  systemd-boot-efi:amd64               257.7-1                         amd64        simple UEFI boot manager - EFI binaries
ii  systemd-boot-tools                   257.7-1                         amd64        simple UEFI boot manager - tools

Let me know if you need further details.

Thank you in advance for the help!
 
Subscribing to this as I too am getting this failure in pve8to9 after upgrading to PVE9. It reboots fine. It's a test node, so it was a clean PVE8 install and an immediate upgrade to PVE9. I've not followed the advice of the failure msg yet and have not messed around with any boot-related packages (especially removals).
 
I'd probably remove the /boot/efi mount from /etc/fstab & run proxmox-boot-tool reinit to get the system into a consistent state!
(Keep in mind that changing the boot-loader configuration can lead to an unbootable system - so plan for a bit of downtime.)
Finally got round to dealing with this:

1. So changed /etc/fstab to:
Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
#UUID=0EB4-B1FB /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
UUID=8b767abd-5bed-489d-8321-3e00a6d40057 /mnt/pve/Storage ext4 defaults 0 2

2. Entered:
Code:
umount /boot/efi

3. Entered:
Code:
proxmox-boot-tool reinit

& received output:
Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="0EB4-B1FB" SIZE="1073741824" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme0n1" MOUNTPOINT=""
Mounting '/dev/disk/by-uuid/0EB4-B1FB' on '/var/tmp/espmounts/0EB4-B1FB'.
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.
Installing grub x86_64 target..
Installing for x86_64-efi platform.
Installation finished. No error reported.
Installing grub x86_64 target (removable)..
Installing for x86_64-efi platform.
Installation finished. No error reported.
Unmounting '/dev/disk/by-uuid/0EB4-B1FB'.
Adding '/dev/disk/by-uuid/0EB4-B1FB' to list of synced ESPs..

So I entered:
Code:
systemctl daemon-reload

proxmox-boot-tool reinit
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="0EB4-B1FB" SIZE="1073741824" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme0n1" MOUNTPOINT=""
Mounting '/dev/disk/by-uuid/0EB4-B1FB' on '/var/tmp/espmounts/0EB4-B1FB'.
Installing grub x86_64 target..
Installing for x86_64-efi platform.
Installation finished. No error reported.
Installing grub x86_64 target (removable)..
Installing for x86_64-efi platform.
Installation finished. No error reported.
Unmounting '/dev/disk/by-uuid/0EB4-B1FB'.
Adding '/dev/disk/by-uuid/0EB4-B1FB' to list of synced ESPs..

4. Then rebooted the node perfectly. A total of only 3 minutes' downtime!


I still have one outstanding issue after the PVE9 upgrade:

With a pointer from the above wiki, & a lot of messing around, I eventually edited /etc/apparmor.d/usr.sbin.cupsd & /etc/apparmor.d/usr.sbin.cups-browsed, adding abi <abi/3.0>, at the beginning, & then issued apparmor_parser -r for both profiles.
This seems to have quieted down the logs, although they remain noisy if a print job is sent while the printer is turned off. I’ll have to look into that sometime.
However, after every reboot, I still get the logs flooded (every second!) with:
Code:
Sep 01 10:47:07 MINS-PRXMX kernel: audit: type=1400 audit(1756712827.401:605): apparmor="DENIED" operation="create" class="net" info="failed protocol match" error=-13 profile="/usr/sbin/cups-browsed" pid=1299 comm="cups-browsed" family="unix" sock_type="stream" protocol=0 requested="create" denied="create" addr=none
Sep 01 10:47:08 MINS-PRXMX kernel: audit: type=1400 audit(1756712828.402:606): apparmor="DENIED" operation="create" class="net" info="failed protocol match" error=-13 profile="/usr/sbin/cups-browsed" pid=1299 comm="cups-browsed" family="unix" sock_type="stream" protocol=0 requested="create" denied="create" addr=none
Sep 01 10:47:09 MINS-PRXMX kernel: audit: type=1400 audit(1756712829.402:607): apparmor="DENIED" operation="create" class="net" info="failed protocol match" error=-13 profile="/usr/sbin/cups-browsed" pid=1299 comm="cups-browsed" family="unix" sock_type="stream" protocol=0 requested="create" denied="create" addr=none
Sep 01 10:47:10 MINS-PRXMX kernel: audit: type=1400 audit(1756712830.402:608): apparmor="DENIED" operation="create" class="net" info="failed protocol match" error=-13 profile="/usr/sbin/cups-browsed" pid=1299 comm="cups-browsed" family="unix" sock_type="stream" protocol=0 requested="create" denied="create" addr=none

Entering (again):
Code:
apparmor_parser -r /etc/apparmor.d/usr.sbin.cups-browsed
stops these logs.

Any ideas?
 
Thanks for the feedback and the report!

We improved the check for the boot-loader issues (but it's still not available on our public repositories) - https://lore.proxmox.com/pve-devel/20250814120807.2653672-1-s.ivanov@proxmox.com/T/#t


I'm running 'pve8to9' on one of the nodes in my cluster (homelab)... My node is on the non-subscription repositories and currently at v8.4.12 (fully upgraded). I too am getting the message below.... can I trust the recommendation to remove 'systemd-boot'? You mentioned an improved check in the works. I'm looking for some confirmation before I break something. :) System is reported as EFI boot in the PVE UI and has a single SSD installed with an LVM main partition after the BIOS boot and EFI partitions.

Code:
INFO: Checking bootloader configuration...
FAIL: systemd-boot meta-package installed. This will cause problems on upgrades of other boot-related packages. Remove 'systemd-boot' See https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#sd-boot-warning for more information.

Code:
root@pve02:/etc/apt# efibootmgr
BootCurrent: 0001
Timeout: 3 seconds
BootOrder: 0001,0002,0003,0006,0000
Boot0000* Windows Boot Manager
Boot0001* proxmox
Boot0002* Hard Drive
Boot0003* UEFI: Built-in EFI Shell
Boot0006* UEFI OS

Verbose Boot0001 entry:
Code:
Boot0001* proxmox       HD(2,GPT,50f43012-97e9-4c99-b9ed-9355f8cc975c,0x800,0x200000)/File(\EFI\PROXMOX\SHIMX64.EFI)
 
System is reported as EFI boot in the PVE UI and has a single SSD installed with an LVM main partition after the BIOS boot and EFI partitions
I have the same setup (+ secure boot disabled).

What does lsblk output?
 
I have the same setup (+ secure boot disabled).

What does lsblk output?
Output is below... To be fair, I'm learning about UEFI and boot managers are a bit of a mystery to me. It's a homelab and my VMs and LXCs are all backed up, but I got a bit concerned about the pve8to9 tool missing something, per the earlier post above. I'm fully updated, but I'm not clear whether my version of the tool still has the boot-check issue... Hence not wanting to blindly follow it! And I'm trying to take this opportunity to learn.

Code:
root@pve02:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1                      259:0    0 238.5G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 237.5G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0  69.4G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   1.4G  0 lvm
  │ └─pve-data-tpool         252:4    0 141.2G  0 lvm
  │   ├─pve-data             252:5    0 141.2G  1 lvm
  │   ├─pve-vm--102--disk--0 252:6    0     4M  0 lvm
  │   ├─pve-vm--102--disk--1 252:7    0    20G  0 lvm
  │   ├─pve-vm--103--disk--0 252:8    0     4M  0 lvm
  │   ├─pve-vm--103--disk--1 252:9    0    20G  0 lvm
  │   ├─pve-vm--108--disk--0 252:10   0     4M  0 lvm
  │   ├─pve-vm--108--disk--1 252:11   0    20G  0 lvm
  │   ├─pve-vm--109--disk--0 252:12   0     4M  0 lvm
  │   └─pve-vm--109--disk--1 252:13   0    20G  0 lvm
  └─pve-data_tdata           252:3    0 141.2G  0 lvm
    └─pve-data-tpool         252:4    0 141.2G  0 lvm
      ├─pve-data             252:5    0 141.2G  1 lvm
      ├─pve-vm--102--disk--0 252:6    0     4M  0 lvm
      ├─pve-vm--102--disk--1 252:7    0    20G  0 lvm
      ├─pve-vm--103--disk--0 252:8    0     4M  0 lvm
      ├─pve-vm--103--disk--1 252:9    0    20G  0 lvm
      ├─pve-vm--108--disk--0 252:10   0     4M  0 lvm
      ├─pve-vm--108--disk--1 252:11   0    20G  0 lvm
      ├─pve-vm--109--disk--0 252:12   0     4M  0 lvm
      └─pve-vm--109--disk--1 252:13   0    20G  0 lvm
 
I'm learning about UEFI and boot managers are a bit of a mystery to me.
You are not alone.

Based on your above posted output, it appears you have a very similar setup to mine. I have secure-boot disabled as shown by:
Code:
:~# dmesg | grep secure
[    0.000000] secureboot: Secure boot disabled
[    0.020166] secureboot: Secure boot disabled
(Interestingly, this output only appears after my changes above – see my earlier post, where this command produced no output at all.)

I can't advise you as to whether you should actually remove the systemd-boot meta-package, but I do believe that even if you did & then (worst-case) encountered a non-bootable situation, you could retrace the steps outlined in this thread, to make it bootable again.
 
I'm digging further... it appears that proxmox-boot-tool may not have been properly initialized on my machine? Drawing from the steps above...

Code:
root@pve02:/etc/kernel# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.

Should I do the following before I try to fix the pve8to9 FAIL about the systemd-boot meta-package being installed?
1. Change /etc/fstab to comment out the UUID mount to /boot/efi
2. Manually unmount /boot/efi with: umount /boot/efi
3. Initialize proxmox-boot-tool with: proxmox-boot-tool init <UUID> grub <-- should this be /dev/<PARTITION> ???
4. Restart with systemctl daemon-reload
5. Run proxmox-boot-tool status to verify
6. Reboot system and continue looking at pve8to9 fail message for systemd-boot meta package

Apologies here.... I'm being pedantic (on purpose) as I'm trying to learn but also to NOT bork my node. :)
 
Should I do the following before I try to fix the pve8to9 FAIL about the systemd-boot meta-package being installed?
1. Change /etc/fstab to comment out the UUID mount to /boot/efi
2. Manually unmount /boot/efi with: umount /boot/efi
3. Initialize proxmox-boot-tool with: proxmox-boot-tool init <UUID> grub <-- should this be /dev/<PARTITION> ???
4. Restart with systemctl daemon-reload
5. Run proxmox-boot-tool status to verify
6. Reboot system and continue looking at pve8to9 fail message for systemd-boot meta package
This looks good. In your case point 3 should probably be: proxmox-boot-tool init /dev/nvme0n1p2
You probably can omit point 4.

Take care, & don't fret if you encounter a boot issue, it is probably fixable!

I take extra caution & make a complete disk image (of the host OS disk) before major changes - this involves extra downtime but helps my BP!
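So, roughly (double-check the ESP device first with lsblk; /dev/nvme0n1p2 is taken from your output above):
Code:
# comment out the 'UUID=... /boot/efi ...' line in /etc/fstab first, then:
umount /boot/efi
proxmox-boot-tool init /dev/nvme0n1p2
proxmox-boot-tool status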
 
I am preparing the upgrade from PVE 8.4.12 to PVE 9.
I ran 'pve8to9' in the shell of my Proxmox host and got the following error message:
[screenshot: pve8to9 FAIL about the installed systemd-boot meta-package]
I am not sure how to proceed. When I remove 'systemd-boot' as suggested, will a new 'systemd-boot' for PVE 9 be installed during the upgrade?
Disk configuration:
[screenshot: disk configuration]
Please help!
 
This looks good. In your case point 3 should probably be: proxmox-boot-tool init /dev/nvme0n1p2
You probably can omit point 4.

Take care, & don't fret if you encounter a boot issue, it is probably fixable!

I take extra caution & make a complete disk image (of the host OS disk) before major changes - this involves extra downtime but helps my BP!
OK... proxmox-boot-tool status now reports:
Code:
root@pve02:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
63F9-9972 is configured with: grub (versions: 6.5.11-4-pve, 6.5.13-6-pve, 6.8.12-11-pve, 6.8.12-14-pve)

Using pve8to9 --full again, the boot-related FAIL message has changed to the following:
Code:
FAIL: systemd-boot meta-package installed this will cause issues on upgrades of boot-related packages. Install 'systemd-boot-efi' and 'systemd-boot-tools' explicitly and remove 'systemd-boot'

So, I tried apt install systemd-boot-efi and apt install systemd-boot-tools... results below:
Code:
root@pve02:~# apt install systemd-boot-efi
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
systemd-boot-efi is already the newest version (252.38-1~deb12u1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@pve02:~# apt install systemd-boot-tools
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package systemd-boot-tools

I've not issued the apt remove systemd-boot yet... the message is not clear, but I'm still on PVE v8. Should I be doing the remove and then the explicit install of systemd-boot-efi and systemd-boot-tools? I'm concerned that systemd-boot-tools can't be located! :(

EDIT: is the issue that the systemd-boot-tools package is only available in the Trixie repositories?
 
I think I'm in the same boat...

After performing the update & rebooting, here's where I'm at:

Code:
root@pve:~# pveversion
pve-manager/9.0.6/49c767b70aeb6648 (running kernel: 6.14.11-1-pve)
root@pve:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
5919-1038 is configured with: uefi (versions: 6.14.11-1-pve, 6.8.12-14-pve)
6859-ECE1 is configured with: uefi (versions: 6.14.11-1-pve, 6.8.12-14-pve)

And `pve8to9` is showing the same failure:
Code:
INFO: Checking bootloader configuration...
FAIL: systemd-boot meta-package installed this will cause issues on upgrades of boot-related packages. Install 'systemd-boot-efi' and 'systemd-boot-tools' explicitly and remove 'systemd-boot'

I'm a little bit anxious about installing 'systemd-boot-efi' and 'systemd-boot-tools' and removing 'systemd-boot'. :rolleyes:
 
So I did:
Code:
apt install systemd-boot-efi systemd-boot-tools
apt remove systemd-boot
apt purge  systemd-boot

It told me systemd-boot-efi and systemd-boot-tools are already installed...
Rebooted, all seems fine, `pve8to9` is all green.
 
After performing the update & rebooting, here's where I'm at:

Code:
root@pve:~# pveversion
pve-manager/9.0.6/49c767b70aeb6648 (running kernel: 6.14.11-1-pve)
root@pve:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
5919-1038 is configured with: uefi (versions: 6.14.11-1-pve, 6.8.12-14-pve)
6859-ECE1 is configured with: uefi (versions: 6.14.11-1-pve, 6.8.12-14-pve)

And `pve8to9` is showing the same failure:
Code:
INFO: Checking bootloader configuration...
FAIL: systemd-boot meta-package installed this will cause issues on upgrades of boot-related packages. Install 'systemd-boot-efi' and 'systemd-boot-tools' explicitly and remove 'systemd-boot'
OK... so my confusion is about running pve8to9 BEFORE changing the repositories and actually doing the update... I'm getting the following BEFORE:

Code:
FAIL: systemd-boot meta-package installed this will cause issues on upgrades of boot-related packages. Install 'systemd-boot-efi' and 'systemd-boot-tools' explicitly and remove 'systemd-boot'

I was under the impression that, if there is a FAIL in the pve8to9 output, you must clear it before doing the upgrade to v9... is that not true? Should I just change the repositories over, do the update... then recheck pve8to9 before and after rebooting?