kernel update (6.8.12-13) prints error message

tessus

Toronto, Canada
I got the following error message during today's apt-get dist-upgrade:

E: bootctl is not available - make sure systemd-boot is installed

As far as I remember, this is the first time I've seen this message during a kernel update. I haven't modified the system by uninstalling random packages, so I am rather puzzled by it.
Should I be concerned that the machine won't reboot properly? Shall I install systemd-boot manually? What are my next steps?

Here's the full output:

Code:
Hit:1 http://deb.debian.org/debian bookworm-backports InRelease
Hit:2 http://security.debian.org bookworm-security InRelease
Hit:3 http://download.proxmox.com/debian/pve bookworm InRelease
Hit:4 http://ftp.ca.debian.org/debian bookworm InRelease
Hit:5 http://ftp.ca.debian.org/debian bookworm-updates InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
  proxmox-kernel-6.8.12-13-pve-signed
The following packages will be upgraded:
  proxmox-kernel-6.8 proxmox-kernel-helper pve-container
3 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/104 MB of archives.
After this operation, 577 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Reading changelogs... Done
Selecting previously unselected package proxmox-kernel-6.8.12-13-pve-signed.
(Reading database ... 102803 files and directories currently installed.)
Preparing to unpack .../proxmox-kernel-6.8.12-13-pve-signed_6.8.12-13_amd64.deb ...
Unpacking proxmox-kernel-6.8.12-13-pve-signed (6.8.12-13) ...
Preparing to unpack .../proxmox-kernel-6.8_6.8.12-13_all.deb ...
Unpacking proxmox-kernel-6.8 (6.8.12-13) over (6.8.12-12) ...
Preparing to unpack .../proxmox-kernel-helper_8.1.4_all.deb ...
Unpacking proxmox-kernel-helper (8.1.4) over (8.1.1) ...
Preparing to unpack .../pve-container_5.3.0_all.deb ...
Unpacking pve-container (5.3.0) over (5.2.7) ...
Setting up pve-container (5.3.0) ...
Setting up proxmox-kernel-6.8.12-13-pve-signed (6.8.12-13) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 6.8.12-13-pve /boot/vmlinuz-6.8.12-13-pve
update-initramfs: Generating /boot/initrd.img-6.8.12-13-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/B6B6-E1BE
    Copying kernel and creating boot-entry for 6.5.13-6-pve
    Copying kernel and creating boot-entry for 6.8.12-12-pve
    Copying kernel and creating boot-entry for 6.8.12-13-pve
    Removing old version 6.8.12-11-pve
    Removing old version 6.8.12-9-pve
Copying and configuring kernels on /dev/disk/by-uuid/B6B7-0946
    Copying kernel and creating boot-entry for 6.5.13-6-pve
    Copying kernel and creating boot-entry for 6.8.12-12-pve
    Copying kernel and creating boot-entry for 6.8.12-13-pve
    Removing old version 6.8.12-11-pve
    Removing old version 6.8.12-9-pve
run-parts: executing /etc/kernel/postinst.d/proxmox-auto-removal 6.8.12-13-pve /boot/vmlinuz-6.8.12-13-pve
run-parts: executing /etc/kernel/postinst.d/zz-proxmox-boot 6.8.12-13-pve /boot/vmlinuz-6.8.12-13-pve
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/B6B6-E1BE
    Copying kernel and creating boot-entry for 6.5.13-6-pve
    Copying kernel and creating boot-entry for 6.8.12-12-pve
    Copying kernel and creating boot-entry for 6.8.12-13-pve
Copying and configuring kernels on /dev/disk/by-uuid/B6B7-0946
    Copying kernel and creating boot-entry for 6.5.13-6-pve
    Copying kernel and creating boot-entry for 6.8.12-12-pve
    Copying kernel and creating boot-entry for 6.8.12-13-pve
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 6.8.12-13-pve /boot/vmlinuz-6.8.12-13-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.8.12-13-pve
Found initrd image: /boot/initrd.img-6.8.12-13-pve
/usr/sbin/grub-probe: error: unknown filesystem.
Found linux image: /boot/vmlinuz-6.8.12-12-pve
Found initrd image: /boot/initrd.img-6.8.12-12-pve
Found linux image: /boot/vmlinuz-6.8.12-11-pve
Found initrd image: /boot/initrd.img-6.8.12-11-pve
Found linux image: /boot/vmlinuz-6.8.12-9-pve
Found initrd image: /boot/initrd.img-6.8.12-9-pve
Found linux image: /boot/vmlinuz-6.5.13-6-pve
Found initrd image: /boot/initrd.img-6.5.13-6-pve
Found linux image: /boot/vmlinuz-5.13.19-2-pve
Found initrd image: /boot/initrd.img-5.13.19-2-pve
Found linux image: /boot/vmlinuz-5.4.124-1-pve
Found initrd image: /boot/initrd.img-5.4.124-1-pve
Found linux image: /boot/vmlinuz-5.4.73-1-pve
Found initrd image: /boot/initrd.img-5.4.73-1-pve
/usr/sbin/grub-probe: error: unknown filesystem.
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
Adding boot menu entry for UEFI Firmware Settings ...
done
Setting up proxmox-kernel-helper (8.1.4) ...
Installing new version of config file /etc/kernel/postinst.d/zz-proxmox-boot ...
Installing new version of config file /etc/kernel/postrm.d/zz-proxmox-boot ...
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="B6B6-E1BE" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme1n1" MOUNTPOINT=""
Mounting '/dev/disk/by-uuid/B6B6-E1BE' on '/var/tmp/espmounts/B6B6-E1BE'.
Installing systemd-boot..
E: bootctl is not available - make sure systemd-boot is installed
Setting up proxmox-kernel-6.8 (6.8.12-13) ...
Processing triggers for initramfs-tools (0.142+deb12u3) ...
update-initramfs: Generating /boot/initrd.img-6.8.12-13-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/B6B6-E1BE
    Copying kernel and creating boot-entry for 6.5.13-6-pve
    Copying kernel and creating boot-entry for 6.8.12-12-pve
    Copying kernel and creating boot-entry for 6.8.12-13-pve
Copying and configuring kernels on /dev/disk/by-uuid/B6B7-0946
    Copying kernel and creating boot-entry for 6.5.13-6-pve
    Copying kernel and creating boot-entry for 6.8.12-12-pve
    Copying kernel and creating boot-entry for 6.8.12-13-pve
Processing triggers for pve-manager (8.4.5) ...
Processing triggers for man-db (2.11.2-2) ...
Processing triggers for pve-ha-manager (4.0.7) ...
 
Known Issues & Breaking Changes
Node Management

Systems booting via UEFI from a ZFS on root setup should install the systemd-boot package after the upgrade.
The systemd-boot package was split out from the systemd package for Debian Bookworm based releases. It won't get installed automatically upon upgrade from Proxmox VE 7.4, as it can cause trouble on systems not booting from UEFI with a ZFS on root setup created by the Proxmox VE installer.
Systems which have ZFS on root and boot in UEFI mode will need to manually install it if they need to initialize a new ESP (see the output of proxmox-boot-tool status and the relevant documentation).
Note that the system remains bootable even without the package installed (the bootloader that was copied to the ESPs during initialization remains untouched), so you can also install it after the upgrade has finished.
It is not recommended to install systemd-boot on systems which don't need it, as its postinst script would replace GRUB as the bootloader.
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_8.0
 
Thanks for the reply. I've been running 8.x since it was released (upgraded from 7), so I am still a bit fuzzy on why I see this message only now.

Either way, the output of proxmox-boot-tool status is the following:

Code:
# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
B6B6-E1BE is configured with: uefi (versions: 6.5.13-6-pve, 6.8.12-12-pve, 6.8.12-13-pve)
B6B7-0946 is configured with: uefi (versions: 6.5.13-6-pve, 6.8.12-12-pve, 6.8.12-13-pve)

I am using two NVMe drives in a RAID-1 for the OS (installed from the Proxmox ISO). So I reckon that I should install the systemd-boot package (even though it has booted up perfectly until now), correct?
 
I am using two NVMe drives in a RAID-1 for the OS (installed from the Proxmox ISO). So I reckon that I should install the systemd-boot package (even though it has booted up perfectly until now), correct?
yes - installing systemd-boot is needed to update the boot-loader copies on the ESPs.
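To sketch what that looks like in practice (the UEFI check uses the standard Linux sysfs path; the apt and proxmox-boot-tool invocations are the usual ones, but verify against `proxmox-boot-tool status` and the docs before running them as root):

```shell
# Check whether the machine booted via UEFI: the directory below only
# exists on UEFI boots (standard Linux path, not Proxmox-specific).
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
else
    echo "booted via legacy BIOS"
fi

# On a UEFI system with ZFS on root set up by the Proxmox installer,
# install the package and then refresh the boot-loader copies on all
# registered ESPs (both commands need root):
#   apt install systemd-boot
#   proxmox-boot-tool reinit
```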
 
It rebooted, but the Proxmox upgrade doc suggests removing it. The only reason I manually installed it was because I was told to in this thread...

So what shall I do? Do I uninstall it or not? And when, before or after the upgrade?
Thanks for the reminder about this thread!
both the pve8to9 check and our recommendation have changed (since I suggested keeping systemd-boot installed).
Currently you should uninstall the package if you don't need it (no ZFS on root with UEFI) - and install the meta package in all cases after the upgrade:
See : https://forum.proxmox.com/threads/proxmox-virtual-environment-9-0-released.169258/post-790698
the pve8to9 script should tell you when to do what as well

I hope this helps!
 
Thanks for the info, Stoiko.

Ok, I ran pve8to9 --full, but it didn't complain about systemd-boot. I got 2 warnings: one that VMs are running, and the other that the amd64-microcode package was not installed, both of which I will take care of when I upgrade.

However, I am more concerned about my interface names. My interfaces already use predictable names like enp4s0 and enp6s0. On top of those I created bond0, the bridge vmbr0, and the VLAN interface vmbr0.64.
IMO it makes no sense that a kernel > 6.8.12 would change those predictable names to something else, especially if I don't ask for it. But the upgrade document mentions that they may change.
I looked at the pinning tool, but it automatically creates nicX names, which are a lot less specific than e.g. enp4s0. I just want to keep my current interface names, since they make the most sense.
 
IMO it makes no sense that a kernel > 6.8.12 would change those predictable names to something else, especially if I don't ask for it. But the upgrade document mentions that they may change.

unfortunately that is not in our hands.

there are a lot of parts involved in making up those "predictable" names:
- the system firmware (and how it presents the hardware to the rest of the system, which can change with reconfiguration or updates)
- the kernel and its drivers (which changed for a few devices going from 6.8 to 6.14, for example)
- systemd/udev (which just had an update to their policy on which attributes to incorporate by default into the names)
- the NIC itself (e.g., changing MAC might change the name)
- interface renaming configured by the admin

the name is only stable as long as all of that remains identical, which is not the case over a longer period of time with updates being installed.
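One of those moving parts, the systemd/udev naming policy, can at least be frozen in place: systemd allows pinning the naming scheme to a specific version via the kernel command line (a sketch; which scheme names are available depends on your systemd version, see the systemd.net-naming-scheme(7) man page):

```shell
# /etc/default/grub - pin the udev naming scheme to one specific policy
# version (v252 is just an example), then run update-grub to apply.
# On systems booting via proxmox-boot-tool, the command line lives in
# /etc/kernel/cmdline instead, followed by `proxmox-boot-tool refresh`.
GRUB_CMDLINE_LINUX_DEFAULT="quiet net.naming-scheme=v252"
```

This only freezes the udev part; firmware, driver, and hardware changes can still affect the names.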
 
What are my options then? Better yet, how do I keep my current interface names? I guess only the interface names of physical adapters would change; the bond and vmbr names should not be touched, correct?
So the best course of action would be to use the pinning tool and change the generated interface names to my current ones. Is that the correct way of making sure that the upgrade won't screw up my network config?
Usually I wouldn't care, but my node is headless. It's a hassle to connect everything to log in locally and fix things. ;-)
 
it's a bit problematic to use names from the regular "namespace" as custom names, because then you might have clashes in the future. But as long as you don't change your NIC setup at all and do the pinning based on the permanent MAC address, it might be fine.
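To see which addresses are in play here: ethtool can print the permanent (burned-in) MAC, which matters for bond members because they usually carry the bond's MAC as their active address. A sketch (the sample output line below is made up for illustration; on a live system you would parse `ip -o link show enp4s0` directly):

```shell
# Permanent (burned-in) MAC, as opposed to the currently active one:
#   ethtool -P enp4s0
# Active MAC, extracted from iproute2 one-line output. This sample line is
# hypothetical; pipe the real `ip -o link show enp4s0` output instead.
sample='2: enp4s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000\    link/ether aa:bb:cc:dd:ee:ff brd ff:ff:ff:ff:ff:ff'
echo "$sample" | awk '{for (i = 1; i <= NF; i++) if ($i == "link/ether") print $(i + 1)}'
# prints aa:bb:cc:dd:ee:ff
```

If the two addresses differ on a bond member, that is exactly the case where pinning by the permanent MAC (rather than the active one) keeps the pin stable.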

with PVE 9, the support for altnames got a lot better, so unless you are unlucky, both the old and the custom pinned name should actually work.

@shanreich thoughts? ;)
 
However, I am more concerned about my interface names. My interfaces already use predictable names like enp4s0 and enp6s0. On top of those I created bond0, the bridge vmbr0, and the VLAN interface vmbr0.64.
IMO it makes no sense that a kernel > 6.8.12 would change those predictable names to something else, especially if I don't ask for it. But the upgrade document mentions that they may change.

That's the problem with predictable interface names: "predictable" is a bit misleading, sadly, since it doesn't mean they never change. systemd changed the naming scheme several times in the past, drivers changed and exposed more information leading to changed NIC names, et cetera. I'd not get too attached to the names generated by systemd, for the aforementioned reasons.

As fabian already said, pinning them to their previous names is problematic since there might be clashes in the future, particularly when adding or removing PCIe devices. You can, however, choose the names basically arbitrarily: instead of nic0 you can define the names to pin one by one - you don't need to use nic as a prefix:

Code:
pve-network-interface-pinning generate --interface enp1s0 --target-name mycustomname1

You could give them descriptive names like storage or guest?
 
This is my problem: I do not have a descriptive name for them. These 2 physical adapters are used in an LACP bond (bond0).
The names that are used right now make the most sense, because they even describe the location of the adapters.

Maybe I'll just call them link1 and link2. I read that they should start with en so that the Proxmox GUI can access them, or maybe I am mixing this info up with something else. Although I am not using the GUI for the network config of the node anyway.

I also read that the pinning tool creates files with the extension .new.

Do I have to copy them over the original files, or will this be done automatically during the next reboot? I guess the only place that really matters is /etc/network/interfaces, where I only have to replace the old names with the new ones, correct?

To be exact, my current cfg looks like this:

Code:
auto lo
iface lo inet loopback

auto enp4s0
iface enp4s0 inet manual

auto enp6s0
iface enp6s0 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves enp4s0 enp6s0
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr0.64
iface vmbr0.64 inet static
    address 192.168.64.10/20
    gateway 192.168.64.1

Thus I run

Code:
pve-network-interface-pinning generate --interface enp4s0 --target-name link1
pve-network-interface-pinning generate --interface enp6s0 --target-name link2

and change the config to:

Code:
auto lo
iface lo inet loopback

auto link1
iface link1 inet manual

auto link2
iface link2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves link1 link2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr0.64
iface vmbr0.64 inet static
    address 192.168.64.10/20
    gateway 192.168.64.1
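Before rebooting, it might be worth a quick grep to confirm no old names linger in the file. A throwaway sketch, not part of any Proxmox tooling (the temp file just stands in for /etc/network/interfaces; adjust the pattern to your NICs):

```shell
# Stand-in for /etc/network/interfaces, written to a temp file for illustration:
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
auto link1
iface link1 inet manual
EOF

# Flag any leftover enpXsY references:
if grep -qE 'enp[0-9]+s[0-9]+' "$cfg"; then
    echo "old interface names still present"
else
    echo "config clean"
fi
rm -f "$cfg"
# prints: config clean

# On Proxmox (ifupdown2) the edited config can be applied with `ifreload -a`,
# but note the pinned names themselves only take effect once udev has
# renamed the devices, typically at the next reboot.
```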
 
Maybe I'll just call them link1 and link2. I read that they should start with en so that the Proxmox GUI can access them, or maybe I am mixing this info up with something else. Although I am not using the GUI for the network config of the node anyway.

They should still be picked up by our tooling, even if they don't have en prefixes.

Do I have to copy them over the original files, or will this be done automatically during the next reboot? I guess the only place that really matters is /etc/network/interfaces, where I only have to replace the old names with the new ones, correct?

This will be done automatically on reboot.
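For reference, as far as I understand it the pin itself is a systemd .link file matching the NIC by MAC. The shape is roughly the following; the exact path, filename, and match keys are generated by the tool and may well differ, so treat this as an illustration only:

```shell
# hypothetical example, e.g. something like 50-pve-link1.link
# under a systemd network directory
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=link1
```

systemd-udevd processes these files when the device appears, which is why the rename is effective from the next boot onward.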