(Thread moved on request @fabian )
Hello, we have upgraded six PVE 8 servers via apt, successfully and without a single glitch or problem. But number 7 gives us headaches. It is the only server with ZFS as the boot/root file system; the others have LVM/ext4 setups. We first upgraded to the latest 8.4 version and rebooted into it successfully. Then we went through all the steps outlined in the docs and finally ran apt dist-upgrade. This ended with:
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 6.14.8-2-pve /boot/vmlinuz-6.14.8-2-pve
update-initramfs: Generating /boot/initrd.img-6.14.8-2-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/B336-C9A1
Copying kernel 6.14.8-2-pve
Copying kernel 6.8.12-13-pve
/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/disk/by-id/scsi-35002538840116350-part3'.
run-parts: /etc/initramfs/post-update.d//proxmox-boot-sync exited with return code 1
run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
which means we now have a fully upgraded 9.0 system still running on the old 6.8.12 kernel, as the pve8to9 script confirms:
Checking proxmox-ve package version..
PASS: already upgraded to Proxmox VE 9
Checking running kernel version..
WARN: unexpected running and installed kernel '6.8.12-13-pve'.
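For completeness, the sequence leading up to this was essentially the standard one from the upgrade guide plus the checker script. Sketched from memory, so take the exact repository file names as an approximation of what we edited:
root@pve04:~# sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
root@pve04:~# sed -i 's/bookworm/trixie/g' /etc/apt/sources.list.d/*.list
root@pve04:~# apt update
root@pve04:~# apt dist-upgrade
root@pve04:~# pve8to9 --full
(The WARN quoted above is from re-running pve8to9 --full after the failed dist-upgrade.)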
I do not even dare to reboot in this state. The installation is quite ordinary, the ZFS status of the "rpool" is fine, and I cannot see why the upgrade to the latest 8.4 could rebuild the boot config but the 9.0 upgrade thirty minutes later could not. Does anybody have a hint how to get out of this, short of reinstalling?
I also tried to create the links to the disks manually (/dev/disk/by-id/scsi-35002538840116350-part3 and the corresponding one for the second mirror disk), but then grub-probe fails differently:
# grub-probe /
grub-probe: error: unknown filesystem.
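For reference, this is roughly how I created the links by hand; the mapping of sda3/sde3 to the two WWNs is my assumption based on lsblk and zpool status below, and the relative targets are just what udev would normally create:
root@pve04:~# ln -s ../../sda3 /dev/disk/by-id/scsi-35002538840116350-part3
root@pve04:~# ln -s ../../sde3 /dev/disk/by-id/scsi-3500253884011633d-part3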
The lsblk output says (the relevant disks are sda and sde; the rest are also ZFS, but used for VMs and not relevant for booting):
root@pve04:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 476.9G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 475.9G 0 part
sdb 8:16 0 1.7T 0 disk
├─sdb1 8:17 0 1.7T 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 9.1T 0 disk
├─sdc1 8:33 0 9.1T 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 9.1T 0 disk
├─sdd1 8:49 0 9.1T 0 part
└─sdd9 8:57 0 8M 0 part
sde 8:64 0 476.9G 0 disk
├─sde1 8:65 0 1007K 0 part
├─sde2 8:66 0 1G 0 part
└─sde3 8:67 0 475.9G 0 part
sdf 8:80 0 1.7T 0 disk
├─sdf1 8:81 0 1.7T 0 part
└─sdf9 8:89 0 8M 0 part
sr0 11:0 1 1024M 0 rom
zd0 230:0 0 200G 0 disk
└─zd0p1 230:1 0 200G 0 part
zd16 230:16 0 50G 0 disk
zd32 230:32 0 2T 0 disk
├─zd32p1 230:33 0 512M 0 part
├─zd32p2 230:34 0 19.9G 0 part
├─zd32p3 230:35 0 4G 0 part
├─zd32p4 230:36 0 1.9T 0 part
└─zd32p5 230:37 0 1M 0 part
zd48 230:48 0 100G 0 disk
zd64 230:64 0 100G 0 disk
├─zd64p1 230:65 0 1M 0 part
├─zd64p2 230:66 0 2G 0 part
└─zd64p3 230:67 0 98G 0 part
zd80 230:80 0 100G 0 disk
├─zd80p1 230:81 0 1M 0 part
├─zd80p2 230:82 0 2G 0 part
└─zd80p3 230:83 0 98G 0 part
zd96 230:96 0 100G 0 disk
├─zd96p1 230:97 0 1M 0 part
├─zd96p2 230:98 0 1G 0 part
├─zd96p3 230:99 0 59G 0 part
└─zd96p4 230:100 0 40G 0 part
zd112 230:112 0 1M 0 disk
zd128 230:128 0 100G 0 disk
├─zd128p1 230:129 0 100M 0 part
├─zd128p2 230:130 0 16M 0 part
├─zd128p3 230:131 0 99.3G 0 part
└─zd128p4 230:132 0 562M 0 part
zd144 230:144 0 400G 0 disk
├─zd144p1 230:145 0 200G 0 part
└─zd144p2 230:146 0 200G 0 part
zd160 230:160 0 4G 0 disk
├─zd160p1 230:161 0 2.2G 0 part
├─zd160p2 230:162 0 94M 0 part
├─zd160p3 230:163 0 476M 0 part
└─zd160p4 230:164 0 952.7M 0 part
zd176 230:176 0 50G 0 disk
├─zd176p1 230:177 0 1M 0 part
├─zd176p2 230:178 0 2G 0 part
└─zd176p3 230:179 0 48G 0 part
zd192 230:192 0 50G 0 disk
├─zd192p1 230:193 0 1M 0 part
├─zd192p2 230:194 0 2G 0 part
└─zd192p3 230:195 0 48G 0 part
zd208 230:208 0 1M 0 disk
zd224 230:224 0 150G 0 disk
zd240 230:240 0 100G 0 disk
├─zd240p1 230:241 0 1M 0 part
├─zd240p2 230:242 0 2G 0 part
└─zd240p3 230:243 0 98G 0 part
zd256 230:256 0 4M 0 disk
zd272 230:272 0 100G 0 disk
├─zd272p1 230:273 0 1M 0 part
├─zd272p2 230:274 0 2G 0 part
└─zd272p3 230:275 0 98G 0 part
zd288 230:288 0 100G 0 disk
├─zd288p1 230:289 0 600M 0 part
├─zd288p2 230:290 0 1G 0 part
└─zd288p3 230:291 0 98.4G 0 part
zpool status says (for the relevant rpool):
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 00:00:24 with 0 errors on Sun Aug 10 00:24:29 2025
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
scsi-35002538840116350-part3 ONLINE 0 0 0
scsi-3500253884011633d-part3 ONLINE 0 0 0
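For what it is worth, the by-id links that grub-probe asks for were indeed not present under /dev/disk/by-id before I created them by hand; I checked for them with something along these lines:
root@pve04:~# ls -l /dev/disk/by-id/ | grep 3500253884011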
Glad for any hints, JC