Failed to start Import ZFS pool

I'm having the "Failed to start Import ZFS pool [pool]" issue on 3 of our nodes. ZFS works fine, but the error is disconcerting. Here's what I found after some testing on a PVE 7.0-11 node that hadn't had ZFS set up on it before. I haven't checked PVE 7.2, so I'm not sure whether this is still relevant.

Analysis

  • The failure message Does Not appear after creating a zpool via command line.
    • ZFS relies on zfs-import-cache.service and zfs-import-scan.service to import pools at boot.
  • The failure message Does appear after creating a zpool via the Proxmox web ui.
    • Proxmox seems to be creating a static service entry (zfs-import@[pool].service) to ensure the pool gets loaded, instead of relying on the zfs-import-cache.service or zfs-import-scan.service. This is similar to what OpenZFS recommends in their Debian Bullseye Root on ZFS guide.
    • The problem is that zfs-import-cache.service runs before zfs-import@[pool].service. Since the pool is already imported by the cache service, zfs-import@[pool].service fails to import it (the ordering and the failure can be confirmed with the sketch after this list).
      • May 17 14:06:35 pve10 zpool[1276]: cannot import 'tank02': pool already exists
  • The failure message Continues to appear after the zpool is destroyed.
    • You can't destroy a zpool from within the Proxmox web ui.
    • Destroying the zpool from the command line has no effect on the static service entry created by Proxmox.
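You can see both units and their ordering from the command line; a quick inspection sketch, assuming a GUI-created pool named tank02:
Bash:
root@pve10:~# systemctl cat zfs-import@tank02.service         # the unit file behind the wants entry (plus any drop-ins)
root@pve10:~# systemctl list-dependencies zfs-import.target   # both zfs-import-cache.service and zfs-import@tank02.service hang off this target
root@pve10:~# journalctl -b -u zfs-import@tank02.service      # the "pool already exists" failure from the current boot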

Solution

My preferred solution is to remove the service entry created by Proxmox, and let the ZFS import-cache and import-scan services do their thing.
Bash:
root@pve10:~# ls /etc/systemd/system/zfs-import.target.wants
zfs-import-cache.service  zfs-import-scan.service  zfs-import@tank02.service
root@pve10:~# rm /etc/systemd/system/zfs-import.target.wants/zfs-import@tank02.service
root@pve10:~# ls /etc/systemd/system/zfs-import.target.wants
zfs-import-cache.service  zfs-import-scan.service
root@pve10:~# reboot
root@pve10:~# systemctl | grep zfs-import
  zfs-import-cache.service        loaded active     exited    Import ZFS pools by cache file
  zfs-import.target               loaded active     active    ZFS pool import target
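If you'd rather not rm the symlink by hand, disabling the instance unit should drop the same wants entry (a sketch; the tank02 pool name is assumed):
Bash:
root@pve10:~# systemctl disable zfs-import@tank02.service   # removes the zfs-import.target.wants/zfs-import@tank02.service link
root@pve10:~# systemctl daemon-reload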

Alternatively, you could try to get the static service entry to run before the import-cache service. I tried this by adding Before=zfs-import-cache.service to the static service, but it seemed to prevent import-cache from running. There's probably a better way to do this, as I'm no Linux expert.
Bash:
root@pve10:~# nano /etc/systemd/system/zfs-import.target.wants/zfs-import@tank02.service
[Unit]
Description=Import ZFS pool %i
Documentation=man:zpool(8)
DefaultDependencies=no
After=systemd-udev-settle.service
After=cryptsetup.target
After=multipathd.target
Before=zfs-import.target
Before=zfs-import-cache.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -N -d /dev/disk/by-id -o cachefile=none %I

[Install]
WantedBy=zfs-import.target

root@pve10:~# reboot
root@pve10:~# systemctl | grep zfs-import
  zfs-import@tank02.service     loaded active     exited    Import ZFS pool tank02
  zfs-import.target             loaded active     active    ZFS pool import target
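One caveat: the status output further down shows the wants entry loading /lib/systemd/system/zfs-import@.service, so if that entry is a symlink (as systemctl enable normally creates), editing it with nano may actually change the shared template for every pool. A per-instance drop-in keeps an override scoped to one pool. This is only a sketch of the same Before= ordering as above, so it may well run into the same problem I saw:
Bash:
root@pve10:~# mkdir -p /etc/systemd/system/zfs-import@tank02.service.d
root@pve10:~# cat > /etc/systemd/system/zfs-import@tank02.service.d/override.conf <<'EOF'
[Unit]
Before=zfs-import-cache.service
EOF
root@pve10:~# systemctl daemon-reload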

Reference Info

Manually created zpool does not cause error

Bash:
root@pve10:~# zpool status
no pools available
root@pve10:~# ls /etc/systemd/system/zfs-import.target.wants
zfs-import-cache.service  zfs-import-scan.service
root@pve10:~# zpool create -o ashift=12 tank01 raidz2 sdb sdc sdd sde
root@pve10:~# zpool status
  pool: tank01
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank01      ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0

root@pve10:~# ls /etc/systemd/system/zfs-import.target.wants
zfs-import-cache.service  zfs-import-scan.service
root@pve10:~# reboot
root@pve10:~# cat /var/log/syslog | grep 'Failed to start Import ZFS pool'
root@pve10:~# zpool status
  pool: tank01
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank01      ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0

root@pve10:~# ls /etc/systemd/system/zfs-import.target.wants
zfs-import-cache.service  zfs-import-scan.service
root@pve10:~# systemctl | grep zfs-import
  zfs-import-cache.service        loaded active     exited    Import ZFS pools by cache file
  zfs-import.target               loaded active     active    ZFS pool import target
root@pve10:~# zpool destroy tank01

Proxmox-created zpool causes error

  • Log in to Proxmox and select the node (an equivalent CLI call is sketched after these steps)
  • Expand "Disks" and select "ZFS"
  • Click "Create: ZFS"
    • Name: tank02
    • RAID Level: RAIDZ2
    • Compression: on
    • ashift: 12
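Equivalently, the pool can presumably be created from the CLI through the node disks API, which should go through the same Proxmox code path and create the same zfs-import@ unit. A sketch only; the device paths are illustrative and the parameter names are taken from the API as I understand it:
Bash:
root@pve10:~# pvesh create /nodes/pve10/disks/zfs \
    --name tank02 --raidlevel raidz2 \
    --devices /dev/sdb,/dev/sdc,/dev/sdd,/dev/sde \
    --ashift 12 --compression on
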
Bash:
root@pve10:~# zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank02  14.5T  1.62M  14.5T        -         -     0%     0%  1.00x    ONLINE  -
root@pve10:~# ls /etc/systemd/system/zfs-import.target.wants
zfs-import-cache.service  zfs-import-scan.service  zfs-import@tank02.service
root@pve10:~# systemctl | grep zfs-import
  zfs-import-cache.service        loaded active     exited    Import ZFS pools by cache file
  zfs-import.target               loaded active     active    ZFS pool import target
root@pve10:~# reboot
root@pve10:~# cat /var/log/syslog | grep 'Failed to start Import ZFS pool'
May 17 14:06:35 pve10 systemd[1]: Failed to start Import ZFS pool tank02.
root@pve10:~# zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank02  14.5T  1.62M  14.5T        -         -     0%     0%  1.00x    ONLINE  -
root@pve10:~# systemctl | grep zfs-import
  zfs-import-cache.service        loaded active     exited    Import ZFS pools by cache file
● zfs-import@tank02.service       loaded failed     failed    Import ZFS pool tank02
  zfs-import.target               loaded active     active    ZFS pool import target
root@pve10:~# systemctl status zfs-import@tank02.service
Warning: The unit file, source configuration file or drop-ins of zfs-import@tank02.service changed on disk. Run 'systemctl daemon-reload' to reload >
● zfs-import@tank02.service - Import ZFS pool tank02
     Loaded: loaded (/lib/systemd/system/zfs-import@.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Wed 2023-05-17 14:06:35 EDT; 8min ago
       Docs: man:zpool(8)
    Process: 1276 ExecStart=/sbin/zpool import -N -d /dev/disk/by-id -o cachefile=none tank02 (code=exited, status=1/FAILURE)
   Main PID: 1276 (code=exited, status=1/FAILURE)
        CPU: 57ms

May 17 14:06:34 pve10 systemd[1]: Starting Import ZFS pool tank02...
May 17 14:06:35 pve10 zpool[1276]: cannot import 'tank02': pool already exists
May 17 14:06:35 pve10 systemd[1]: zfs-import@tank02.service: Main process exited, code=exited, status=1/FAILURE
May 17 14:06:35 pve10 systemd[1]: zfs-import@tank02.service: Failed with result 'exit-code'.
May 17 14:06:35 pve10 systemd[1]: Failed to start Import ZFS pool tank02.

Destroying the zpool has no effect on the service entry created by Proxmox

Bash:
root@pve10:~# ls /etc/systemd/system/zfs-import.target.wants
zfs-import-cache.service  zfs-import-scan.service  zfs-import@tank02.service
root@pve10:~# zpool destroy tank02
root@pve10:~# ls /etc/systemd/system/zfs-import.target.wants
zfs-import-cache.service  zfs-import-scan.service  zfs-import@tank02.service
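So if you destroy a GUI-created pool from the CLI, it looks like you also need to clean up the unit yourself; a sketch, again assuming the tank02 pool name:
Bash:
root@pve10:~# systemctl disable zfs-import@tank02.service       # drop the leftover wants entry
root@pve10:~# systemctl reset-failed zfs-import@tank02.service  # clear any lingering failed state from systemctl output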
 
I'm also having this error, and I don't have any ZFS pools! The system runs fine, but the boot is quite slow because of it.

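To confirm which units are actually failing or dragging the boot out, something like this should show it (a sketch):
Bash:
root@pve:~# systemctl --failed                    # lists failed units, e.g. a zfs-import service
root@pve:~# systemd-analyze blame | head -n 15    # the units that took the longest during boot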
 
Alright! I just disabled the services and it boots correctly!
Code:
systemctl disable zfs-import-cache.service
systemctl disable zfs-import-scan.service
 
If your PVE host does not boot from ZFS but from another filesystem and you get a zpool import problem on startup, what does "zpool import" show when run manually after boot?

An import issue can occur, for example, when there are two pools on different disks with the same name; zpool import will show them.
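For example, when two pools share a name, zpool import lists each of them with a numeric ID, and one of them can then be imported by ID under a new name (the ID and names here are only illustrative):
Bash:
zpool import                                    # lists importable pools, duplicates included, with their numeric IDs
zpool import 1234567890123456789 tank_renamed   # import one of them by ID under a different name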
 
It says “no pools available to import”
 
But are the disks / partitions shown in fdisk -l?

They must either be missing, or the zpool disk scan goes wrong and does not find the partitions; in that case you can specify the path to the device files with -d.
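For example (a sketch; replace the pool name with yours):
Bash:
zpool import -d /dev/disk/by-id            # scan this directory for pool members and list what is importable
zpool import -d /dev/disk/by-id mypool     # then import the pool by name once it shows up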
 
I'm sorry, I'm missing some context, let me explain!

1. I'm running Proxmox with one NVMe disk where the local storage is. There's no ZFS pool or anything configured in that regard.
2. I do have another two NVMe drives that I'm passing through to the TrueNAS Scale VM (maybe this is the problem?), where I have a ZFS pool called "ssd".

Here's the output of fdisk -l

Code:
root@pve:~# fdisk -l
Disk /dev/nvme0n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: Samsung SSD 980 500GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: E6BBAD9D-ED6B-49A1-94A5-1916D72B1DFD

Device           Start       End   Sectors   Size Type
/dev/nvme0n1p1      34      2047      2014  1007K BIOS boot
/dev/nvme0n1p2    2048   1050623   1048576   512M EFI System
/dev/nvme0n1p3 1050624 976773134 975722511 465.3G Linux LVM

Partition 1 does not start on physical sector boundary.


Disk /dev/mapper/pve-swap: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes


Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes


Disk /dev/mapper/pve-vm--103--disk--0: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes


Disk /dev/mapper/pve-vm--103--disk--1: 44 GiB, 47244640256 bytes, 92274688 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: 18CB40FF-1271-402E-BC5A-414D882E9E59

Device                                   Start      End  Sectors  Size Type
/dev/mapper/pve-vm--103--disk--1-part1    2048    67583    65536   32M EFI System
/dev/mapper/pve-vm--103--disk--1-part2   67584   116735    49152   24M Linux filesystem
/dev/mapper/pve-vm--103--disk--1-part3  116736   641023   524288  256M Linux filesystem
/dev/mapper/pve-vm--103--disk--1-part4  641024   690175    49152   24M Linux filesystem
/dev/mapper/pve-vm--103--disk--1-part5  690176  1214463   524288  256M Linux filesystem
/dev/mapper/pve-vm--103--disk--1-part6 1214464  1230847    16384    8M Linux filesystem
/dev/mapper/pve-vm--103--disk--1-part7 1230848  1427455   196608   96M Linux filesystem
/dev/mapper/pve-vm--103--disk--1-part8 1427456 92274654 90847199 43.3G Linux filesystem


Disk /dev/mapper/pve-vm--101--disk--0: 19 GiB, 20401094656 bytes, 39845888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes


Disk /dev/mapper/pve-vm--102--state--truenascore: 64.49 GiB, 69243764736 bytes, 135241728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes


Disk /dev/mapper/pve-vm--103--state--ha20221103: 8.49 GiB, 9114222592 bytes, 17801216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes


Disk /dev/mapper/pve-vm--102--disk--0: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: C74522EC-B8EF-4AD5-832F-FACB65AA5064

Device                                   Start      End  Sectors  Size Type
/dev/mapper/pve-vm--102--disk--0-part1    4096     6143     2048    1M BIOS boot
/dev/mapper/pve-vm--102--disk--0-part2    6144  1054719  1048576  512M EFI System
/dev/mapper/pve-vm--102--disk--0-part3 1054720 67108830 66054111 31.5G Solaris /usr & Apple ZFS


Disk /dev/mapper/pve-vm--102--state--truenascore2: 64.49 GiB, 69243764736 bytes, 135241728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes


Disk /dev/mapper/pve-vm--102--state--truenasscale: 64.49 GiB, 69243764736 bytes, 135241728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes


Disk /dev/mapper/pve-vm--100--disk--0: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes


Disk /dev/mapper/pve-vm--102--state--Angelfish: 64.49 GiB, 69243764736 bytes, 135241728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes


Disk /dev/mapper/pve-vm--104--disk--0: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 131072 bytes
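If the PCI passthrough is the reason the host can't see those two drives, the NVMe controllers should show up on the host as bound to vfio-pci rather than to the nvme driver; a quick check (a sketch):
Bash:
root@pve:~# lspci -nnk | grep -A3 -i 'non-volatile'   # NVMe controllers; "Kernel driver in use: vfio-pci" means passed through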
 
>I do have another two NVME drives that I'm passing through the TrueNAS Scale VM

why does fdisk -l not show those drives in pve?
I have no idea; checking the web panel, it's not showing there either!

But looking at the TrueNAS VM, you can see that the last two PCI devices are the NVMe disks.
 
I had the same problem as JonathonFS (see #21 in this thread)
https://forum.proxmox.com/threads/failed-to-start-import-zfs-pool.109347/post-557760

I created several ZFS pools via the Proxmox GUI. After each Proxmox host restart I had corresponding error messages in the log:

Code:
Failed to start zfs-import-cache.service - Import ZFS pools by cache file.

I imported an older existing zpool from TrueNAS or XigmaNAS. It was, if I remember correctly, created via the CLI under XigmaNAS. With this zpool the above-mentioned error message did not appear in the Proxmox host log.

I have now moved the files

Code:
/etc/systemd/system/zfs-import.target.wants/zfs-import@<POOLNAME>.service

which are created automatically as soon as a zpool is created in the Proxmox GUI, into a subfolder under

Code:
/etc/systemd/system/zfs-import.target.wants/BAK

Since then I have no more error messages in the log.

I would be interested to know what these files are and why the error appears in the log, especially since the zpools were created directly in the Proxmox GUI, so everything should just work.

What is the problem here, and why does it occur?
 
Hi. Some new Proxmox updates have been released in the last few days. I was wondering whether they solve the problem of the zfs-import@<POOLNAME>.service import errors. I have just started with Proxmox, so I'm a beginner. In the meantime, as a workaround, I have renamed zfs-import@<POOLNAME>.service to zfs-import@<POOLNAME>.service.bak and moved it to a backup folder, /etc/systemd/system/zfs-import.target.wants/BAKS. The attached screenshot shows the updates which may possibly address the issue, though I am not sure whether they do.
 

Attachments

  • Proxmox Pool Import error-possible updates.png
Yup, I fixed it by disabling the import-scan service, as I don't need it:

Code:
systemctl disable --now zfs-import-scan.service

Thanks for the quick reply. But I bet I will have some issues if I ever need a ZFS pool on Proxmox?
 
You could always enable it again if you need to. I just found out that Proxmox was trying to import the ZFS pool from my TrueNAS disks at boot, so I disabled it!
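Re-enabling later should just be the reverse, e.g. (a sketch):
Bash:
systemctl enable --now zfs-import-scan.service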
 
