ZFS mirror shows [FAILED] at boot, but works fine?

Hello everyone,

I am still new to ZFS and not very knowledgeable about it yet. (But I am trying to learn.)

Recently I rebooted Proxmox VE 8 a couple of times.
Every time it booted back into Proxmox VE 8, I noticed a ZFS error message right before the login screen.
Since it happened every time I rebooted the server, I started looking at the journal logs to see what that message was. (I could only read "[FAILED] ZFS" before I got the login screen.)

In the journal log I found this:
Mar 25 21:40:59 metal-1 systemd[1]: Starting zfs-import@VM\x2ddisk.service - Import ZFS pool VM\x2ddisk...
░░ Subject: A start job for unit zfs-import@VM\x2ddisk.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit zfs-import@VM\x2ddisk.service has begun execution.
░░
░░ The job identifier is 136.
Mar 25 21:40:59 metal-1 zpool[1190]: cannot import 'VM-disk': no such pool available
Mar 25 21:40:59 metal-1 systemd[1]: zfs-import@VM\x2ddisk.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ An ExecStart= process belonging to unit zfs-import@VM\x2ddisk.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Mar 25 21:40:59 metal-1 systemd[1]: zfs-import@VM\x2ddisk.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit zfs-import@VM\x2ddisk.service has entered the 'failed' state with result 'exit-code'.
Mar 25 21:40:59 metal-1 systemd[1]: Failed to start zfs-import@VM\x2ddisk.service - Import ZFS pool VM\x2ddisk.
░░ Subject: A start job for unit zfs-import@VM\x2ddisk.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit zfs-import@VM\x2ddisk.service has finished with a failure.
░░
░░ The job identifier is 136 and the job result is failed.
The thing that I do not understand is: why is VM-disk still available and working fine, even though the log tells me it failed?
The ZFS mirror contains my VM disk files, so the VMs should not work or boot without VM-disk, yet they boot up just fine.

So I am a bit confused.
Why is it generating a failure, when I would assume a normal ZFS boot message should not start with [FAILED]?
And why, even though it tells me it failed, do the ZFS disks work as if nothing is wrong?

Thanks in advance for any help.
 
And the rabbit hole goes even deeper.

I deleted the VM-disk pool and rebooted.
After that I created vm-zfs using the same two disks I had used for VM-disk, and rebooted again.
When it booted into Proxmox VE I got the same error as before, except that instead of referring to the pool VM-disk it now refers to vm-zfs. (So there goes that idea.)
 
I am creating the ZFS pool via the Proxmox VE GUI.
I go to the Disks > ZFS tab, select the two disks, give the pool a name, and change the RAID level from single disk to mirror.

I also noticed that the error only happens when I enable Thin-Provisioning for the ZFS pool in the Datacenter storage settings.
If I have it off I get no errors, and when I turn it on I get the failed error again.
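To double-check what the GUI actually created, I also looked at the per-pool import unit and the storage definition. (These are just the read-only commands I pieced together while learning; the pool name is mine, and as far as I understand the Thin-Provisioning checkbox maps to the "sparse" option in storage.cfg.)

Code:
# The pool name is systemd-escaped in the unit name ("-" becomes "\x2d"):
systemd-escape VM-disk                              # prints: VM\x2ddisk

# The per-pool import unit that gets enabled when the pool is created:
systemctl status 'zfs-import@VM\x2ddisk.service'

# The PVE storage definition, including the thin-provisioning ("sparse") flag:
cat /etc/pve/storage.cfg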

Directly after creation:
root@metal-1:~# zpool list -v
NAME                                   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
VM-disk                               1.81T   624K  1.81T        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                            1.81T   624K  1.81T        -         -     0%  0.00%      -    ONLINE
    ata-CT2000MX500SSD1_1950E22DE41C  1.82T      -      -        -         -      -      -      -    ONLINE
    ata-CT2000MX500SSD1_1951E22FE4B9  1.82T      -      -        -         -      -      -      -    ONLINE
rpool                                 1.81T  46.0G  1.77T        -         -     0%     2%  1.00x    ONLINE  -
  mirror-0                            1.81T  46.0G  1.77T        -         -     0%  2.48%      -    ONLINE
    sdb3                              1.82T      -      -        -         -      -      -      -    ONLINE
    sda3                              1.82T      -      -        -         -      -      -      -    ONLINE

After reboot without Thin-Provisioning:
root@metal-1:~# zpool list -v
NAME                                   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
VM-disk                               1.81T   528K  1.81T        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                            1.81T   528K  1.81T        -         -     0%  0.00%      -    ONLINE
    ata-CT2000MX500SSD1_1950E22DE41C  1.82T      -      -        -         -      -      -      -    ONLINE
    ata-CT2000MX500SSD1_1951E22FE4B9  1.82T      -      -        -         -      -      -      -    ONLINE
rpool                                 1.81T  46.1G  1.77T        -         -     0%     2%  1.00x    ONLINE  -
  mirror-0                            1.81T  46.1G  1.77T        -         -     0%  2.48%      -    ONLINE
    sdb3                              1.82T      -      -        -         -      -      -      -    ONLINE
    sda3                              1.82T      -      -        -         -      -      -      -    ONLINE

After reboot with Thin-Provisioning:
root@metal-1:~# zpool list -v
NAME                                   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
VM-disk                               1.81T   552K  1.81T        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                            1.81T   552K  1.81T        -         -     0%  0.00%      -    ONLINE
    ata-CT2000MX500SSD1_1950E22DE41C  1.82T      -      -        -         -      -      -      -    ONLINE
    ata-CT2000MX500SSD1_1951E22FE4B9  1.82T      -      -        -         -      -      -      -    ONLINE
rpool                                 1.81T  46.1G  1.77T        -         -     0%     2%  1.00x    ONLINE  -
  mirror-0                            1.81T  46.1G  1.77T        -         -     0%  2.48%      -    ONLINE
    sdb3                              1.82T      -      -        -         -      -      -      -    ONLINE
    sda3                              1.82T      -      -        -         -      -      -      -    ONLINE
 
Hi,
there is also a zfs-import-cache.service, which might be the one that already imports your ZFS pool (which is why the VM disk was available). AFAIK, the zfs-import@.service only becomes relevant when the cache file is corrupted or incomplete.
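To see what each of the two units actually does, you can dump them with systemctl cat (a sketch; the ExecStart lines quoted in the comments are from a default PVE 8 / ZFS 2.2 install and may differ slightly on other versions):

Code:
# Cache-based import: imports every pool listed in the cache file, e.g.
#   ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN $ZPOOL_IMPORT_OPTS
systemctl cat zfs-import-cache.service | grep -E 'Condition|ExecStart'

# Per-pool template: imports only the named pool by scanning /dev/disk/by-id, e.g.
#   ExecStart=/sbin/zpool import -N -d /dev/disk/by-id -o cachefile=none <poolname>
systemctl cat zfs-import@.service | grep ExecStart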
 
In the journal log it tells me that it skipped the cache since there is no cache file.
Mar 26 09:53:14 metal-1 systemd[1]: Starting zfs-import-cache.service - Import ZFS pools by cache file...
░░ Subject: A start job for unit zfs-import-cache.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit zfs-import-cache.service has begun execution.
░░
░░ The job identifier is 152.
Mar 26 09:53:14 metal-1 systemd[1]: zfs-import-scan.service - Import ZFS pools by device scanning was skipped because of an unmet condition check (ConditionFileNotEmpty=!/etc/zfs/zpool.cache).
░░ Subject: A start job for unit zfs-import-scan.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit zfs-import-scan.service has finished successfully.
░░
░░ The job identifier is 148.
 
Please share the full excerpt from journalctl -b where the ZFS imports happen.
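For example, either of these should capture the relevant part (the grep variant is the simplest):

Code:
# Everything ZFS-related from the current boot:
journalctl -b | grep -i zfs

# Or just the import/mount units themselves:
journalctl -b -u 'zfs-import*' -u zfs-mount.service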
 
Checked the command output and it seems good. I asked because I recently had an issue with one server that randomly changed its PCI enumeration, causing drives to be renamed. That meant ZFS could not find /dev/sda because it was sometimes /dev/sdb, and so on. I ended up recreating the mirror using the stable disk IDs under /dev/disk/by-id (which I should have done initially, btw). The Proxmox GUI uses those too.
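In case it helps: an existing pool can usually be switched to the stable names without recreating it, by exporting and re-importing with -d (a sketch; the pool name is just the example from this thread, and the pool must not be in use while you do it):

Code:
# See which device names the pool is currently using:
zpool status VM-disk

# Re-import it using the stable /dev/disk/by-id names:
zpool export VM-disk
zpool import -d /dev/disk/by-id VM-disk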
 
Hello,

I'm running into the same issue with a newly created NVMe mirror pool.

I wanted to note an additional wrinkle as well. If I create and destroy the mirror pool, the failing systemd service remains. So if I create and destroy the mirror 3 times, I end up with 3 failed services.

That seems to indicate that, in addition to whatever triggers the service failure in the first place, destroying the pool isn't fully cleaning things up: the definition for the failing service is left behind and has to be removed manually to make the systemd failure warning go away.
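For reference, this is roughly how I clean one of those leftover instances up by hand (the pool name below is the one from my setup; the instance name is the systemd-escaped pool name):

Code:
# The instance name is the pool name with systemd escaping ("-" becomes "\x2d"):
systemd-escape nvme4-mirror                         # prints: nvme4\x2dmirror

# Disable the leftover per-pool import unit and clear its failed state:
systemctl disable 'zfs-import@nvme4\x2dmirror.service'
systemctl reset-failed 'zfs-import@nvme4\x2dmirror.service'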


In any case, while this doesn't necessarily break anything, it does pollute the systemd status and logs with unnecessary failure messages. That introduces confusion and makes troubleshooting more difficult, especially for new users who might (understandably!) assume there's an actual problem here. Creating a ZFS mirror via the GUI shouldn't produce a systemd service that fails on every boot and leaves it unclear how or when the new pool is actually imported.

I'm left in a position of not actually knowing why some of my storage is working instead of being broken, which is ... not good.

Some logged output for troubleshooting.

I created the mirror pool via the node's GUI (Disks -> ZFS -> Create), and for the moment haven't enabled thin provisioning. That seems to make no difference.

Code:
# zpool list -v
NAME                                                      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
nvme4-mirror                                              928G   516K   928G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                                                928G   516K   928G        -         -     0%  0.00%      -    ONLINE
    nvme-Sabrent_Rocket_4.0_Plus_46F10721187000005518     931G      -      -        -         -      -      -      -    ONLINE
    nvme-Sabrent_Rocket_4.0_Plus_46F10721187000005511_1   931G      -      -        -         -      -      -      -    ONLINE

I get success from the cache-based importing service (there's no cache from what I can see), but the import service tied to the pool itself fails.

Code:
root@andromeda1:~# systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
     Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; preset: enabled)
     Active: active (exited) since Tue 2024-03-26 10:32:35 CDT; 12min ago
       Docs: man:zpool(8)
    Process: 1313 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN $ZPOOL_IMPORT_OPTS (code=exited, status=0/SUCCESS)
   Main PID: 1313 (code=exited, status=0/SUCCESS)
        CPU: 20ms

Mar 26 10:32:35 andromeda1 systemd[1]: Starting zfs-import-cache.service - Import ZFS pools by cache file...
Mar 26 10:32:35 andromeda1 systemd[1]: Finished zfs-import-cache.service - Import ZFS pools by cache file.

root@andromeda1:~# systemctl status zfs-import@nvme4\\x2dmirror.service
× zfs-import@nvme4\x2dmirror.service - Import ZFS pool nvme4\x2dmirror
     Loaded: loaded (/lib/systemd/system/zfs-import@.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Tue 2024-03-26 10:32:35 CDT; 12min ago
       Docs: man:zpool(8)
    Process: 1314 ExecStart=/sbin/zpool import -N -d /dev/disk/by-id -o cachefile=none nvme4-mirror (code=exited, status=1/FAILURE)
   Main PID: 1314 (code=exited, status=1/FAILURE)
        CPU: 52ms

Mar 26 10:32:35 andromeda1 systemd[1]: Starting zfs-import@nvme4\x2dmirror.service - Import ZFS pool nvme4\x2dmirror...
Mar 26 10:32:35 andromeda1 zpool[1314]: cannot import 'nvme4-mirror': pool already exists
Mar 26 10:32:35 andromeda1 systemd[1]: zfs-import@nvme4\x2dmirror.service: Main process exited, code=exited, status=1/FAILURE
Mar 26 10:32:35 andromeda1 systemd[1]: zfs-import@nvme4\x2dmirror.service: Failed with result 'exit-code'.
Mar 26 10:32:35 andromeda1 systemd[1]: Failed to start zfs-import@nvme4\x2dmirror.service - Import ZFS pool nvme4\x2dmirror.

Here's what journalctl has to say.

Code:
root@andromeda1:~# journalctl -b | grep -i zfs
Mar 26 10:32:33 andromeda1 kernel: Command line: initrd=\EFI\proxmox\6.5.13-1-pve\initrd.img-6.5.13-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs
Mar 26 10:32:33 andromeda1 kernel: Kernel command line: initrd=\EFI\proxmox\6.5.13-1-pve\initrd.img-6.5.13-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs
Mar 26 10:32:33 andromeda1 kernel: Unknown kernel command line parameters "boot=zfs", will be passed to user space.
Mar 26 10:32:33 andromeda1 kernel:     boot=zfs
Mar 26 10:32:33 andromeda1 kernel: zfs: module license 'CDDL' taints kernel.
Mar 26 10:32:33 andromeda1 kernel: zfs: module license taints kernel.
Mar 26 10:32:33 andromeda1 kernel: ZFS: Loaded module v2.2.2-pve1, ZFS pool version 5000, ZFS filesystem version 5
Mar 26 10:32:33 andromeda1 systemd[1]: Created slice system-zfs\x2dimport.slice - Slice /system/zfs-import.
Mar 26 10:32:34 andromeda1 udevadm[850]: systemd-udev-settle.service is deprecated. Please fix zfs-import-scan.service, zfs-import-cache.service not to pull it in.
Mar 26 10:32:35 andromeda1 systemd[1]: Starting zfs-import-cache.service - Import ZFS pools by cache file...
Mar 26 10:32:35 andromeda1 systemd[1]: zfs-import-scan.service - Import ZFS pools by device scanning was skipped because of an unmet condition check (ConditionFileNotEmpty=!/etc/zfs/zpool.cache).
Mar 26 10:32:35 andromeda1 systemd[1]: Starting zfs-import@nvme4\x2dmirror.service - Import ZFS pool nvme4\x2dmirror...
Mar 26 10:32:35 andromeda1 systemd[1]: zfs-import@nvme4\x2dmirror.service: Main process exited, code=exited, status=1/FAILURE
Mar 26 10:32:35 andromeda1 systemd[1]: zfs-import@nvme4\x2dmirror.service: Failed with result 'exit-code'.
Mar 26 10:32:35 andromeda1 systemd[1]: Failed to start zfs-import@nvme4\x2dmirror.service - Import ZFS pool nvme4\x2dmirror.
Mar 26 10:32:35 andromeda1 systemd[1]: Finished zfs-import-cache.service - Import ZFS pools by cache file.
Mar 26 10:32:35 andromeda1 systemd[1]: Reached target zfs-import.target - ZFS pool import target.
Mar 26 10:32:35 andromeda1 systemd[1]: Starting zfs-mount.service - Mount ZFS filesystems...
Mar 26 10:32:35 andromeda1 systemd[1]: Starting zfs-volume-wait.service - Wait for ZFS Volume (zvol) links in /dev...
Mar 26 10:32:35 andromeda1 systemd[1]: Finished zfs-volume-wait.service - Wait for ZFS Volume (zvol) links in /dev.
Mar 26 10:32:35 andromeda1 systemd[1]: Reached target zfs-volumes.target - ZFS volumes are ready.
Mar 26 10:32:35 andromeda1 systemd[1]: Finished zfs-mount.service - Mount ZFS filesystems.
Mar 26 10:32:36 andromeda1 systemd[1]: Starting zfs-share.service - ZFS file system shares...
Mar 26 10:32:36 andromeda1 systemd[1]: Started zfs-zed.service - ZFS Event Daemon (zed).
Mar 26 10:32:36 andromeda1 zed[1789]: ZFS Event Daemon 2.2.2-pve2 (PID 1789)
Mar 26 10:32:36 andromeda1 systemd[1]: Finished zfs-share.service - ZFS file system shares.
Mar 26 10:32:36 andromeda1 systemd[1]: Reached target zfs.target - ZFS startup target.

Aside: is that "unknown kernel command" warning normal?

So, no errors that I can see there.

But at this point I also have no idea how the nvme pool gets imported, as the cache file isn't present and zfs-import-scan fails as well.
Code:
root@andromeda1:~# systemctl status zfs-import-scan.service
○ zfs-import-scan.service - Import ZFS pools by device scanning
     Loaded: loaded (/lib/systemd/system/zfs-import-scan.service; enabled; preset: disabled)
     Active: inactive (dead)
  Condition: start condition failed at Tue 2024-03-26 10:32:35 CDT; 19min ago
             └─ ConditionFileNotEmpty=!/etc/zfs/zpool.cache was not met
       Docs: man:zpool(8)

Mar 26 10:32:35 andromeda1 systemd[1]: zfs-import-scan.service - Import ZFS pools by device scanning was skipped because of an unmet condition check (ConditionFileNotEmpty=!/etc/zfs/zpool.cache).

The base zfs-import.service is masked, but zfs-import.target is active, so maybe that's doing it?
Code:
root@andromeda1:~# systemctl status zfs
zfs-import-cache.service            zfs-import-scan.service             zfs-import.target                   zfs-mount.service                   zfs.target                          zfs-volume-wait.service             
zfs-import@nvme4\x2dmirror.service  zfs-import.service                  zfs-load-key.service                zfs-share.service                   zfs-volumes.target                  zfs-zed.service                     
root@andromeda1:~# systemctl status zfs-import.service
○ zfs-import.service
     Loaded: masked (Reason: Unit zfs-import.service is masked.)
     Active: inactive (dead)
root@andromeda1:~# systemctl status zfs-import.target
● zfs-import.target - ZFS pool import target
     Loaded: loaded (/lib/systemd/system/zfs-import.target; enabled; preset: enabled)
     Active: active since Tue 2024-03-26 10:32:35 CDT; 22min ago
 
I get success from the cache-based importing service (there's no cache from what I can see), but the import service tied to the pool itself fails.
Yes, because it was already imported by the zfs-import-cache.service
Code:
root@andromeda1:~# systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
     Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; preset: enabled)
     Active: active (exited) since Tue 2024-03-26 10:32:35 CDT; 12min ago
       Docs: man:zpool(8)
    Process: 1313 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN $ZPOOL_IMPORT_OPTS (code=exited, status=0/SUCCESS)
   Main PID: 1313 (code=exited, status=0/SUCCESS)
        CPU: 20ms

Mar 26 10:32:35 andromeda1 systemd[1]: Starting zfs-import-cache.service - Import ZFS pools by cache file...
Mar 26 10:32:35 andromeda1 systemd[1]: Finished zfs-import-cache.service - Import ZFS pools by cache file.
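If you want to confirm that the pool really is listed in that cache file (and is therefore picked up by this service), something along these lines should work; the exact zdb output format can vary between versions:

Code:
# The cache file the service reads:
ls -l /etc/zfs/zpool.cache

# Dump the cached pool configurations (reads /etc/zfs/zpool.cache by default);
# each pool's name heads its own configuration block:
zdb -C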

Aside: is that "unknown kernel command" warning normal?
Yes
So, no errors that I can see there.

But at this point I also have no idea how the nvme pool gets imported, as the cache file isn't present and zfs-import-scan fails as well.
If the cache file were not present, then the zfs-import-cache.service wouldn't have been executed. It has:
Code:
ConditionFileNotEmpty=/etc/zfs/zpool.cache

In the journal log it tells me that it skipped the cache since there is no cache file.
Code:
Mar 26 09:53:14 metal-1 systemd[1]: zfs-import-scan.service - Import ZFS pools by device scanning was skipped because of an unmet condition check (ConditionFileNotEmpty=!/etc/zfs/zpool.cache).
No, there is an additional ! in front of the file name. This negates the condition. The scan service was skipped, not the cache service.
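You can see the two conditions side by side on the node itself:

Code:
# zfs-import-cache.service runs only if the cache file exists and is non-empty,
# while zfs-import-scan.service runs only if it does NOT (note the leading "!"):
systemctl cat zfs-import-cache.service zfs-import-scan.service | grep -i condition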
 
