Hello all
Recently I ran into a very strange issue after upgrading a node from Proxmox 6 to 7. We ran pve6to7 beforehand and it reported no errors, then followed the wiki article and performed every step by the book. The node was originally installed from the Proxmox 6.x ISO and was fully updated to the latest 6.x packages before the major upgrade. The whole system runs on ZFS.
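For reference, this is roughly the sequence we ran before and during the upgrade (reconstructed from memory, repository file paths are from a default install, so the exact commands may differ slightly):

Code:
# pre-upgrade checks on the 6.x node
pve6to7 --full

# switch the apt repositories from buster to bullseye, then upgrade
sed -i 's/buster/bullseye/g' /etc/apt/sources.list /etc/apt/sources.list.d/pve-enterprise.list
apt update
apt dist-upgrade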
After the upgrade finished we rebooted the node, and during boot we get errors from ZFS saying that rpool is uncorrectable. We cannot even reach emergency mode on this kernel; it never drops us to a shell where we could issue ZFS commands.
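Since this kernel never gives us a shell, the only way I can think of to inspect the pool under a newer ZFS would be to boot a recent installer ISO in debug/rescue mode and import the pool from there. This is just a sketch of what I have in mind (the mount point is only an example):

Code:
# from an installer ISO in debug/rescue mode
zpool import                      # list pools visible to the rescue environment
zpool import -N -R /mnt rpool     # import without mounting the datasets
zpool status -v rpool             # check for real checksum/uncorrectable errors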
If we boot the previous kernel (5.4.203) from the Proxmox boot menu, the node starts without problems and comes back online. The only issue we have found is that under the latest 7.x packages it has trouble starting Windows VMs, which get stuck on the boot screen, while every Linux guest starts and runs fine.
Code:
root@pve2:~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.4.203-1-pve)
pve-manager: 7.4-17 (running version: 7.4-17/513c62be)
pve-kernel-5.15: 7.4-9
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.131-2-pve: 5.15.131-3
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.4-1
proxmox-backup-file-restore: 2.4.4-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.2
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-5
pve-firmware: 3.6-6
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.14-pve1
Kernel 5.15.131-2 is not booting at all.
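For now we stay on the old kernel. If I read the documentation correctly, the working kernel can be pinned with proxmox-boot-tool so the node does not fall back to the broken one after updates; this is just a sketch with our version string:

Code:
proxmox-boot-tool kernel list              # show installed/selected kernels
proxmox-boot-tool kernel pin 5.4.203-1-pve # keep booting the known-good kernel
proxmox-boot-tool refresh                  # rewrite the boot entries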
I also tried installing the kernel headers, removing and re-adding the same kernel version, and installing the opt-in 5.19.x kernel, but every one of them still fails with the kernel/ZFS error described above.
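These are roughly the package operations I tried (package names taken from the list above, exact commands from memory):

Code:
apt install pve-headers-5.15
apt reinstall pve-kernel-5.15.131-2-pve
apt install pve-kernel-5.19                # opt-in kernel
proxmox-boot-tool refresh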
I searched for similar cases on the forum and on the internet and found nothing, apart from an issue that was supposed to be fixed in the 5.15.131-1 kernel, with a -3 build most likely to be released.
Could this issue be related to ZFS being upgraded to a higher version, with the pool missing a feature that the new kernel expects?
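If that theory makes sense, I assume the pool feature state could be checked from the old kernel with something like the following (just my understanding, please correct me; we have deliberately not run zpool upgrade while we still need the old kernel to boot):

Code:
zfs version                         # userland vs. kernel module version
zpool status rpool                  # warns if supported features are not enabled
zpool get all rpool | grep feature@ # list individual feature flags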
Thank you, and I hope someone can assist with this pretty weird case.