ZFS Storage Inactive Since Reboot

~Rob

Hi all.

I have Proxmox PVE 9.0.10 and created a ZFS pool using one disk connected via SATA. Proxmox is running on bare metal.

The pool was then passed through to an LXC as a Mount Point. Everything was fine until a power cut, and now the pool is missing. I've done some googling and the usual fixes aren't working.

The status currently shows up as "unknown" in the UI.

lsblk shows the disk (no partitions, but this is normal based on my reading)
Code:
root@proxmox:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0   3.6T  0 disk

The disk is correct when I list by id:
Code:
ls -l /dev/disk/by-id/ata-WDC_WD43PURZ-74BWPY0_WD-WX42D53460DE
lrwxrwxrwx 1 root root 9 Nov  4 14:33 /dev/disk/by-id/ata-WDC_WD43PURZ-74BWPY0_WD-WX42D53460DE -> ../../sda


zpool status -v and zpool list both say no pools available
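
A read-only way to check whether the disk still carries any recognisable on-disk signature at all might look like the following; both tools ship with the Debian base of Proxmox, and neither changes anything on the disk.
Code:
# Probe the raw device for filesystem/partition-table signatures (read-only)
blkid -p /dev/sda
# Without -a, wipefs only lists signatures; it does not erase anything
wipefs /dev/sda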

Not all of the ZFS services are started, which is something I've seen mentioned in many posts online.
Code:
# systemctl status zfs-import-cache.service zfs-import-scan.service zfs-import.target zfs-import.service
○ zfs-import-cache.service - Import ZFS pools by cache file
     Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; disabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:zpool(8)

○ zfs-import-scan.service - Import ZFS pools by device scanning
     Loaded: loaded (/usr/lib/systemd/system/zfs-import-scan.service; disabled; preset: disabled)
     Active: inactive (dead)
       Docs: man:zpool(8)

● zfs-import.target - ZFS pool import target
     Loaded: loaded (/usr/lib/systemd/system/zfs-import.target; enabled; preset: enabled)
     Active: active since Tue 2025-11-04 14:33:18 GMT; 6h ago
 Invocation: 644d990405d940ba9d852b5971d77a10

Nov 04 14:33:18 proxmox systemd[1]: Reached target zfs-import.target - ZFS pool import target.

○ zfs-import.service
     Loaded: masked (Reason: Unit zfs-import.service is masked.)
     Active: inactive (dead)
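
It may also be worth reading what those import units actually logged during the current boot; a minimal sketch, assuming the journal still holds this boot's messages:
Code:
# Show everything the ZFS import units logged since the last boot
journalctl -b -u zfs-import-cache.service -u zfs-import-scan.service -u zfs-import.target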

I have a (fairly small) cache file from when the pool was created
Code:
# ls -lh /etc/zfs/zpool.cache
-rw-r--r-- 1 root root 1.5K Oct 14 20:48 /etc/zfs/zpool.cache
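
If I remember the zdb options correctly, that cache file can be dumped read-only to see which pool name, GUID and device path it recorded; treat the exact flags as an assumption and check zdb(8) first.
Code:
# Print the pool configuration(s) stored in the cache file; nothing gets imported
zdb -C -U /etc/zfs/zpool.cache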

Importing fails too
Code:
# zpool import -d /dev/disk/by-id/ata-WDC_WD43PURZ-74BWPY0_WD-WX42D53460DE cctv
cannot import 'cctv': no such pool available
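
Two further import variants may be worth trying; both only list what they can find unless a pool name is added afterwards, and here -d points at the whole by-id directory rather than a single device node.
Code:
# Scan every node under /dev/disk/by-id instead of a single path
zpool import -d /dev/disk/by-id
# Also look for pools that were marked as destroyed
zpool import -d /dev/disk/by-id -D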

Any ideas on what to try next are appreciated!
 
lsblk shows the disk (no partitions, but this is normal based on my reading)
Code:
root@proxmox:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0   3.6T  0 disk
Hi.
Are you sure this is the disk which had ZFS?
For instance, the disk I created ZFS on has two very distinctive partitions:

Code:
sdb                            8:16   0 931.5G  0 disk
|-sdb1                         8:17   0 931.5G  0 part
`-sdb9                         8:25   0     8M  0 part
 
Correct, although I never ran lsblk before this, so I don't know whether any partitions were ever listed. Could it be the GPT table has gone walkabout?
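
One way to answer that, assuming parted or gdisk is installed, is to print the partition table read-only; parted usually complains loudly if the primary or backup GPT is damaged.
Code:
# Read-only: print whatever partition table is on the disk, in sectors
parted /dev/sda unit s print
# Alternative if gdisk is installed; also reports primary/backup GPT damage
sgdisk -p /dev/sda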
 
Two disks was always the plan, but I'm a little nervous about using ZFS again if it's lost my data (that's not precious, hence the single disk) after the first power cut.

Is this rare for ZFS?

I do have a UPS, but the batteries need swapping out
 
No, it may depend on which drive you selected.

I found this for the WD43PURZ:
https://www.westerndigital.com/products/internal-drives/wd-purple-sata-hdd?sku=WD43PURZ

This drive uses CMR recording, not SMR, which is good.

In Germany I got WD Red 4 TB drives, the bad ones with SMR recording; they came directly from WD and died right after my first boot! They were brand new.
Since then I have banned all WD drives from my systems and use Seagate IronWolf NAS drives at 5,600 RPM and IronWolf Pro at 7,200 RPM.

If you have two more SATA III connectors, then please set up a mirrored ZFS special device for this HDD pool, using 2x SSDs with PLP and DRAM cache.
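
For reference, adding such a mirrored special vdev to an existing pool would look roughly like the sketch below. The SSD paths are placeholders, not real hardware, and a special vdev is pool-critical (losing it loses the pool), which is exactly why it has to be a mirror.
Code:
# Example only: attach a mirrored special (metadata/small-block) vdev to the pool.
# Replace the by-id paths with the real SSDs; think twice, this is hard to undo.
zpool add cctv special mirror \
  /dev/disk/by-id/ata-EXAMPLE_SSD_1 /dev/disk/by-id/ata-EXAMPLE_SSD_2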
 
Two disks was always the plan, but I'm a little nervous about using ZFS again if it's lost my data (that's not precious, hence the single disk) after the first power cut.

Is this rare for ZFS?
With SSDs that lack PLP, it's common for data to be lost on an unexpected power cut, because they shuffle data around all the time (TRIM, flushing the SLC cache to TLC/QLC flash). This is not specific to ZFS but can happen with any filesystem. I don't know why it would happen with an HDD, but maybe the WD Purple also shuffles data around, or it does some unsafe caching?
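
One way to test the unsafe-caching theory, assuming hdparm is installed, is to check whether the drive's volatile write cache is enabled and, if so, turn it off at some cost in write speed.
Code:
# Query the drive's volatile write-cache setting (read-only query)
hdparm -W /dev/sda
# Disable the volatile write cache so in-flight writes are not lost on power cut
hdparm -W0 /dev/sda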
 
This is not specific to ZFS but can happen with any filesystem.
This is specific to ZFS!! A filesystem other than ZFS can have problems after a power outage, but I have never heard of the partitions disappearing, and I have never seen a filesystem that can no longer recognise its own on-disk format; it just mounts (with a journal replay), or it asks to be repaired and the repair either succeeds or fails.
PS: Don't forget that ZFS writes its label information 4 times on each device ... yet none of those 4 copies works here anymore.
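
Those four label copies can be inspected directly from the raw disk; a minimal sketch, assuming the usual zdb behaviour where -l prints the label configuration it can find and repeating the flag increases verbosity:
Code:
# Read-only: dump the ZFS vdev label(s) straight from the device
zdb -l /dev/sda
zdb -ll /dev/sda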
 