PVE 6.0.4 pve-kernel-5.0.15-1-pve cannot import rpool no such pool or dataset

Madhatter

Upgraded from PVE 5.x to 6.x following the upgrade instructions, and the reboot resulted in:
"cannot import 'rpool': no such pool or dataset"

However, I can boot using kernel 4.15.18-18-pve and everything else seems fine.

Wasn't there a similar issue a while ago when kernel 4.8 was introduced?

Andreas

Edit: actually not everything. Some KVM guests had KVM hardware virtualisation enabled and didn't start unless I disabled it:

Hyper-V TLB flush support (requested by 'hv-tlbflush' cpu flag) is not supported by kernel
kvm: kvm_init_vcpu failed: Function not implemented

Probably another issue and not for this thread, so I'll gather more details on that.
 
Please include your 'pveversion -v' output, as well as 'zpool import' and 'lsblk' output from a failed boot, and 'zpool status' and 'lsblk' output from a successful boot.
 
root@proxmox:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 4.15.18-18-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-17-pve: 4.15.18-43
pve-kernel-4.15.18-16-pve: 4.15.18-41
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-13-pve: 4.15.18-37
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-3-pve: 4.15.18-22
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph: 12.2.12-pve1
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1

This is a standalone test host
 
That's new. Is this from the upgrade?

root@proxmox:~# zpool import

pool: zroot
id: 15023591304253786846
state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
see: http://zfsonlinux.org/msg/ZFS-8000-EY
config:

	zroot     UNAVAIL  unsupported feature(s)
	  zd144   ONLINE

root@proxmox:/var/log# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 465.8G 0 part
└─sda9 8:9 0 8M 0 part
sdb 8:16 0 465.8G 0 disk
├─sdb1 8:17 0 465.8G 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 931.5G 0 disk
├─sdc1 8:33 0 100M 0 part /var/lib/ceph/osd/ceph-0
└─sdc2 8:34 0 931.4G 0 part
sdd 8:48 0 931.5G 0 disk
├─sdd1 8:49 0 100M 0 part /var/lib/ceph/osd/ceph-1
└─sdd2 8:50 0 931.4G 0 part
sde 8:64 0 55.9G 0 disk
├─sde1 8:65 0 55.9G 0 part
└─sde9 8:73 0 8M 0 part
sdf 8:80 0 29.5G 0 disk
├─sdf1 8:81 0 1007K 0 part
├─sdf2 8:82 0 29.5G 0 part
└─sdf9 8:89 0 8M 0 part
sr0 11:0 1 1024M 0 rom
zd0 230:0 0 3.6G 0 disk
zd16 230:16 0 40G 0 disk
├─zd16p1 230:17 0 500M 0 part
└─zd16p2 230:18 0 39.5G 0 part
zd32 230:32 0 50G 0 disk
├─zd32p1 230:33 0 1M 0 part
└─zd32p2 230:34 0 50G 0 part
zd48 230:48 0 70G 0 disk
├─zd48p1 230:49 0 500M 0 part
└─zd48p2 230:50 0 69.5G 0 part
zd64 230:64 0 40G 0 disk
zd80 230:80 0 20G 0 disk
├─zd80p1 230:81 0 500M 0 part
└─zd80p2 230:82 0 19.5G 0 part
zd96 230:96 0 60G 0 disk
├─zd96p1 230:97 0 500M 0 part
└─zd96p2 230:98 0 59.5G 0 part
zd112 230:112 0 6.4G 0 disk
zd128 230:128 0 70G 0 disk
├─zd128p1 230:129 0 549M 0 part
└─zd128p2 230:130 0 69.5G 0 part
zd144 230:144 0 100G 0 disk
├─zd144p1 230:145 0 512K 0 part
├─zd144p2 230:146 0 2G 0 part
└─zd144p3 230:147 0 98G 0 part
zram0 252:0 0 12G 0 disk [SWAP]

root@proxmox:/var/log# zpool status
pool: rpool
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub repaired 0B in 0 days 00:06:22 with 0 errors on Sun Apr 14 00:30:24 2019
config:

	NAME    STATE  READ WRITE CKSUM
	rpool   ONLINE    0     0     0
	  sdf2  ONLINE    0     0     0

errors: No known data errors

pool: zfs500G
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub repaired 0B in 0 days 02:49:17 with 0 errors on Sun Apr 14 03:13:20 2019
config:

	NAME                                         STATE   READ WRITE CKSUM
	zfs500G                                      ONLINE     0     0     0
	  ata-ST3500413AS_Z2AM2122                   ONLINE     0     0     0
	  ata-WDC_WD5000AAKS-65YGA0_WD-WCAS85908569  ONLINE     0     0     0
	cache
	  sde                                        ONLINE     0     0     0
 
 
You did not post "zpool import" and "lsblk" output from the failed attempt (you are dropped into an initramfs shell with the new kernel, right?)
 
You did not post "zpool import" and "lsblk" output from the failed attempt (you are dropped into an initramfs shell with the new kernel, right?)

That's correct, the outputs above are all from a working boot.
When it fails, I only get to the initramfs prompt.
 
That's correct, the outputs above are all from a working boot.
When it fails, I only get to the initramfs prompt.

So please post the output I requested from the initramfs ;)
 
I was able to import rpool manually, but it complained that / is not empty.

Did you check that this is not a regression of the issue I linked earlier?
 
If it works when you manually import in the initramfs (you should use '-N', by the way, to import without mounting at this stage, since the initramfs scripts will mount / into a directory and then pivot), that means your block devices become visible too late. Consider adding "rootdelay=30" (or some appropriate value) to your kernel command line, or increase it if you already have it set.
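
For reference, a minimal sketch of the manual import from the initramfs prompt (pool name taken from this thread; '-N' imports without mounting, and 'exit' resumes the normal boot sequence):

(initramfs) zpool import -N rpool
(initramfs) exit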
 
rootdelay=30 alone didn't do the job.

I got it booting by:

1. Editing /etc/default/grub and changing GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=60 quiet", then running # update-grub
2. Editing /etc/default/zfs and setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4', then running # update-initramfs -k 5.0.15-1-pve -u

That finally booted into 5.x.

That's a workaround, not a solution.
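
A quick sanity check after the reboot (a minimal sketch; the expected kernel version and rootdelay value are taken from the steps above):

root@proxmox:~# uname -r           # should now report 5.0.15-1-pve
root@proxmox:~# cat /proc/cmdline  # should include rootdelay=60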
 
