Import ZFS pools by device scanning was skipped because of an unmet condition check

silvered.dragon

Dear all,
I have a couple of HP Gen8 DL360 servers running the latest Proxmox 8.1.3, both with the same issue: when they boot I can clearly see a critical red error on screen

Code:
cannot import 'tank-zfs': no such pool available

but then both boot fine without any issue. Both servers (node4 and node5) use an HP220 HBA SAS adapter flashed in IT mode with the latest firmware 15.10.10.00, and they are part of a 5-node cluster.

Relevant logs:
Code:
Jan 04 19:02:44 node4 systemd[1]: Starting zfs-import-cache.service - Import ZFS pools by cache file...
Jan 04 19:02:44 node4 systemd[1]: zfs-import-scan.service - Import ZFS pools by device scanning was skipped because of an unmet condition check (ConditionFileNotEmpty=!/etc/zfs/zpool.cache).
Jan 04 19:02:44 node4 systemd[1]: Starting zfs-import@tank\x2dzfs.service - Import ZFS pool tank\x2dzfs...
Jan 04 19:02:44 node4 zpool[1294]: cannot import 'tank-zfs': no such pool available
Jan 04 19:02:44 node4 systemd[1]: zfs-import@tank\x2dzfs.service: Main process exited, code=exited, status=1/FAILURE
Jan 04 19:02:44 node4 systemd[1]: zfs-import@tank\x2dzfs.service: Failed with result 'exit-code'.
Jan 04 19:02:44 node4 systemd[1]: Failed to start zfs-import@tank\x2dzfs.service - Import ZFS pool tank\x2dzfs.
Jan 04 19:02:46 node4 systemd[1]: Finished zfs-import-cache.service - Import ZFS pools by cache file.
Jan 04 19:02:46 node4 systemd[1]: Reached target zfs-import.target - ZFS pool import target.
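
Side note on the log above: zfs-import-scan.service is skipped by design whenever /etc/zfs/zpool.cache is non-empty, so at boot the pool has to come either from zfs-import-cache.service or from the per-pool zfs-import@tank\x2dzfs.service that fails here. As a rough diagnostic sketch, assuming the default cache file path, one could check whether tank-zfs is actually recorded in the cache file and rewrite its entry if it is missing:

Code:
# List the pool configurations stored in the cache file (default path assumed)
zdb -C -U /etc/zfs/zpool.cache

# If tank-zfs is not listed there, setting the cachefile property rewrites its entry
zpool set cachefile=/etc/zfs/zpool.cache tank-zfs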

Here is my storage configuration and status:
Bash:
root@node4:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl
        shared 0

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

rbd: ceph
        content rootdir,images
        krbd 1
        nodes node2,node3,node1
        pool ceph

zfspool: tank-zfs
        pool tank-zfs
        content images,rootdir
        mountpoint /tank-zfs
        nodes node5,node4
        sparse 0

pbs: pbs2-datastore
        datastore pbs2-datastore
        server 192.168.25.112
        content backup
        fingerprint 73:69:eb:81:68:24:d9:00:a4:bc:34:cc:fe:db:79:4a:b6:f0:a6:74:7d:63:4f:7e:97:ee:74:7c:f8:64:37:fa
        prune-backups keep-all=1
        username root@pam

pbs: pbs1-datastore
        datastore pbs1-datastore
        server 192.168.25.113
        content backup
        fingerprint a9:31:b8:87:9e:42:fc:af:06:38:ec:fb:ec:80:5d:9d:99:ba:be:4e:1e:4b:54:6e:7a:73:9e:24:8c:41:65:1b
        prune-backups keep-all=1
        username root@pam

root@node4:~# zpool status -v
  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:10:00 with 0 errors on Sun Dec 10 00:34:01 2023
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            scsi-35000c500235cf7ab-part3  ONLINE       0     0     0
            scsi-35000c500235cad3f-part3  ONLINE       0     0     0

errors: No known data errors

  pool: tank-zfs
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:06 with 0 errors on Sun Dec 10 00:24:11 2023
config:

        NAME                        STATE     READ WRITE CKSUM
        tank-zfs                    ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            scsi-35000cca05936074c  ONLINE       0     0     0
            scsi-35000cca0592b5d84  ONLINE       0     0     0
            scsi-35000cca05934ae74  ONLINE       0     0     0
            scsi-35000cca0592b60fc  ONLINE       0     0     0
            scsi-35000cca059348bb4  ONLINE       0     0     0
            scsi-35000cca0592668dc  ONLINE       0     0     0

errors: No known data errors
root@node4:~# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root  9 Jan  4 19:02 scsi-35000c500235cad3f -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000c500235cad3f-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000c500235cad3f-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000c500235cad3f-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Jan  4 19:02 scsi-35000c500235cf7ab -> ../../sda
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000c500235cf7ab-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000c500235cf7ab-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000c500235cf7ab-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Jan  4 19:02 scsi-35000cca0592668dc -> ../../sde
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca0592668dc-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca0592668dc-part9 -> ../../sde9
lrwxrwxrwx 1 root root  9 Jan  4 19:02 scsi-35000cca0592b5d84 -> ../../sdh
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca0592b5d84-part1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca0592b5d84-part9 -> ../../sdh9
lrwxrwxrwx 1 root root  9 Jan  4 19:02 scsi-35000cca0592b60fc -> ../../sdf
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca0592b60fc-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca0592b60fc-part9 -> ../../sdf9
lrwxrwxrwx 1 root root  9 Jan  4 19:02 scsi-35000cca059348bb4 -> ../../sdg
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca059348bb4-part1 -> ../../sdg1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca059348bb4-part9 -> ../../sdg9
lrwxrwxrwx 1 root root  9 Jan  4 19:02 scsi-35000cca05934ae74 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca05934ae74-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca05934ae74-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 Jan  4 19:02 scsi-35000cca05936074c -> ../../sdd
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca05936074c-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 scsi-35000cca05936074c-part9 -> ../../sdd9
lrwxrwxrwx 1 root root  9 Jan  4 19:02 wwn-0x5000c500235cad3f -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000c500235cad3f-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000c500235cad3f-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000c500235cad3f-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Jan  4 19:02 wwn-0x5000c500235cf7ab -> ../../sda
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000c500235cf7ab-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000c500235cf7ab-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000c500235cf7ab-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Jan  4 19:02 wwn-0x5000cca0592668dc -> ../../sde
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca0592668dc-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca0592668dc-part9 -> ../../sde9
lrwxrwxrwx 1 root root  9 Jan  4 19:02 wwn-0x5000cca0592b5d84 -> ../../sdh
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca0592b5d84-part1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca0592b5d84-part9 -> ../../sdh9
lrwxrwxrwx 1 root root  9 Jan  4 19:02 wwn-0x5000cca0592b60fc -> ../../sdf
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca0592b60fc-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca0592b60fc-part9 -> ../../sdf9
lrwxrwxrwx 1 root root  9 Jan  4 19:02 wwn-0x5000cca059348bb4 -> ../../sdg
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca059348bb4-part1 -> ../../sdg1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca059348bb4-part9 -> ../../sdg9
lrwxrwxrwx 1 root root  9 Jan  4 19:02 wwn-0x5000cca05934ae74 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca05934ae74-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca05934ae74-part9 -> ../../sdc9
lrwxrwxrwx 1 root root  9 Jan  4 19:02 wwn-0x5000cca05936074c -> ../../sdd
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca05936074c-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Jan  4 19:02 wwn-0x5000cca05936074c-part9 -> ../../sdd9
 
So this just happens on reboot? It could be an issue where ZFS can't find the disks it needs to import the pool yet. You can change the following parameters, which might fix your problem: they tell ZFS to wait 5 seconds longer so udev can populate the disks before ZFS tries to import them.

Code:
echo "ZFS_INITRD_PRE_MOUNTROOT_SLEEP='5'" >> /etc/default/zfs &&
echo "ZFS_INITRD_POST_MODPROBE_SLEEP='5'" >> /etc/default/zfs &&
update-initramfs -u &&
proxmox-boot-tool refresh

also see: https://www.thomas-krenn.com/de/wik...pool_available_-_Proxmox_Boot_Problem_beheben
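
A quick sanity check before rebooting is to confirm that both variables actually landed in /etc/default/zfs (the values are in seconds):

Code:
grep ZFS_INITRD /etc/default/zfs
# expected:
# ZFS_INITRD_PRE_MOUNTROOT_SLEEP='5'
# ZFS_INITRD_POST_MODPROBE_SLEEP='5'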
 

Yes, this only happens on reboot, but afterwards everything works flawlessly and the pool is imported correctly. Anyway, I tried your code, first with 5 seconds and it didn't work, then with 15 seconds and it's the same. So no fix at the moment.
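
For anyone retrying this: since the commands above append to /etc/default/zfs, bumping the delay to 15 seconds presumably means editing the existing values in place rather than appending a second pair. A minimal sketch, assuming the two variables were added exactly as shown earlier:

Code:
# Raise both delays from 5 to 15 seconds in place, then rebuild the initramfs
sed -i "s/_SLEEP='5'/_SLEEP='15'/" /etc/default/zfs &&
update-initramfs -u &&
proxmox-boot-tool refresh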
 
