Can't boot pve-kernel-5.4.101-1-pve / mismatch of zfs versions 2.0.3 & kmod-0.8.3

fixjunk

Member
Nov 14, 2020
I'm not sure which is the problem and which is the symptom, or if they're independent...

I was checking the status of my pools and saw the message about running zpool upgrade. I'm fine with that; this is not production, and there's no concern about losing compatibility with older versions of ZFS.

However, zpool upgrade fails with: cannot set property for 'tank': invalid feature 'redaction_bookmarks'.

I dug around and was led to check zpool version, which returns:
Code:
root@pve:~ # zpool version
zfs-2.0.3-pve2
zfs-kmod-0.8.3-pve1
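
If I read that right, the first line is the userspace tools' version and the second is the loaded kernel module. Cross-checking via sysfs should show the same stale module (a sketch; the output here is inferred from the zpool version output above, not separately captured):
Code:
root@pve:~ # cat /sys/module/zfs/version
0.8.3-pve1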

So then I followed the Google trail to check that I'm on a kernel compatible with ZFS 2.0.3.

I checked my running kernel with uname -r and it's 5.4.34-1-pve.
However, pve-efiboot-tool kernel list shows three available versions:
Code:
5.4.101-1-pve
5.4.78-2-pve
5.4.34-1-pve

But I'm not sure why 5.4.34 is the one that loads.

I'm using systemd-boot / UEFI and the no-subscription repo.
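
Since systemd-boot only offers whatever kernel images were synced onto the ESP, I assume inspecting an ESP directly would show what's actually bootable, roughly like this (XXXX-XXXX stands in for one of the UUIDs from /etc/kernel/pve-efiboot-uuids; EFI/proxmox should be the stock layout):
Code:
root@pve:~ # mount /dev/disk/by-uuid/XXXX-XXXX /mnt
root@pve:~ # ls /mnt/EFI/proxmox
root@pve:~ # umount /mnt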

Thanks for any help.
 
Additional info: /etc/kernel/pve-efiboot-uuids was showing outdated information (I replaced both drives), so I removed the original file. Can I just copy the new vfat UUIDs for the new drives into a new file, e.g. from lsblk -f | grep vfat?
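
i.e., something along these lines to find the candidate ESPs and their UUIDs (just a sketch of the lookup, nothing changed yet):
Code:
root@pve:~ # lsblk -o NAME,FSTYPE,UUID,SIZE | grep -i vfat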
 
Additional info: /etc/kernel/pve-efiboot-uuids was showing outdated information (I replaced both drives).
Did you follow the documentation guide "Changing a failed bootable device"?

https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_zfs_change_failed_dev

At least it seems like you did not register the new ESPs, and thus the kernel helper tool does not sync over the new kernel images and initrds.
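
For reference, the replacement procedure from that guide boils down to roughly the following (all device names are placeholders, adapt them to your disks; on a stock layout partition 2 is the ESP and partition 3 is the ZFS partition):
Code:
# replicate the partition table from a healthy bootable disk, then randomize the GUIDs
sgdisk /dev/healthy-disk -R /dev/new-disk
sgdisk -G /dev/new-disk
# resilver onto the new disk's ZFS partition
zpool replace -f <pool> <old zfs partition> <new zfs partition>
# format and register the new ESP
pve-efiboot-tool format /dev/new-disk-part2
pve-efiboot-tool init /dev/new-disk-part2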

Can I just copy the new vfat UUIDs for the new drives into a new file?

Well, does it include a valid vfat-formatted ESP? I.e., how did you replace those disks?

Do not manually edit that file; rather, use the pve-efiboot-tool init <partition> command, if you already have a valid VFAT EFI System Partition there.
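
A sketch of what that looks like, with /dev/sdX2 as a placeholder for the new disk's ESP; init registers the partition's UUID in /etc/kernel/pve-efiboot-uuids itself and syncs the current kernels over, so there's no need to touch that file by hand:
Code:
root@pve:~ # pve-efiboot-tool init /dev/sdX2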
 
Well, does it include a valid vfat-formatted ESP? I.e., how did you replace those disks?

Do not manually edit that file; rather, use the pve-efiboot-tool init <partition> command, if you already have a valid VFAT EFI System Partition there.
I forget what process I used.

I believe it started with a fresh install and a migration of data from the old pair (ZFS mirror).

Can I run the pve-efiboot-tool init command on a running system without breaking it? It should have the EFI VFAT partitions in place.
 
Can I run the pve-efiboot-tool init command on a running system without breaking it? It should have the EFI VFAT partitions in place.

Yes. Yes I can. And did. And it didn't explode.
Result:
Code:
root@pve:~ # uname -r
5.4.101-1-pve
root@pve:~ # zpool version
zfs-2.0.3-pve2
zfs-kmod-2.0.3-pve2
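
And with the versions matching, the zpool upgrade that failed at the start should now go through (it's one-way, mind: older ZFS versions won't be able to import the pool afterwards):
Code:
root@pve:~ # zpool upgrade tank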

Thanks for pointing me in the right direction.
 
Ah sorry, missed your reply. And yeah, init is rather harmless in terms of data-destroying potential; if one already has the vfat partitions, running it is fine and indeed intended (noting this for potential future readers).

Glad you could fix it!
 
