ZFS Storage Inactive Since Reboot

~Rob

New Member
Nov 1, 2025
Hi all.

I have Proxmox PVE 9.0.10 and created a ZFS pool using one disk connected via SATA. Proxmox is running on bare metal.

The pool was then passed through to an LXC as a mount point. Everything was fine until a power cut, and now the pool is missing. I've done some googling and the usual fixes aren't working.

The status currently shows up as "unknown" in the UI.

lsblk shows the disk (no partitions, but this is normal based on my reading):
Code:
root@proxmox:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0   3.6T  0 disk

The disk is correct when I list by id:
Code:
ls -l /dev/disk/by-id/ata-WDC_WD43PURZ-74BWPY0_WD-WX42D53460DE
lrwxrwxrwx 1 root root 9 Nov  4 14:33 /dev/disk/by-id/ata-WDC_WD43PURZ-74BWPY0_WD-WX42D53460DE -> ../../sda


zpool status -v and zpool list both say no pools available
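For completeness, this is roughly what both commands return (retyped from memory, not a verbatim paste):
Code:
root@proxmox:~# zpool status -v
no pools available
root@proxmox:~# zpool list
no pools available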

Not all of the ZFS import services are started, which is something I've seen mentioned in many posts online.
Code:
# systemctl status zfs-import-cache.service zfs-import-scan.service zfs-import.target zfs-import.service
○ zfs-import-cache.service - Import ZFS pools by cache file
     Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; disabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:zpool(8)

○ zfs-import-scan.service - Import ZFS pools by device scanning
     Loaded: loaded (/usr/lib/systemd/system/zfs-import-scan.service; disabled; preset: disabled)
     Active: inactive (dead)
       Docs: man:zpool(8)

● zfs-import.target - ZFS pool import target
     Loaded: loaded (/usr/lib/systemd/system/zfs-import.target; enabled; preset: enabled)
     Active: active since Tue 2025-11-04 14:33:18 GMT; 6h ago
 Invocation: 644d990405d940ba9d852b5971d77a10

Nov 04 14:33:18 proxmox systemd[1]: Reached target zfs-import.target - ZFS pool import target.

○ zfs-import.service
     Loaded: masked (Reason: Unit zfs-import.service is masked.)
     Active: inactive (dead)

I have a (fairly small) cache file from when the pool was created
Code:
# ls -lh /etc/zfs/zpool.cache
-rw-r--r-- 1 root root 1.5K Oct 14 20:48 /etc/zfs/zpool.cache
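
I haven't inspected what's inside it yet. Based on the zdb man page, something like this should dump whatever pool configuration the cache file still holds (untested on this box so far):
Code:
# zdb -C -U /etc/zfs/zpool.cache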

Importing fails too
Code:
# zpool import -d /dev/disk/by-id/ata-WDC_WD43PURZ-74BWPY0_WD-WX42D53460DE cctv
cannot import 'cctv': no such pool available
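
Variants I'm planning to try next, taken from the zpool-import man page (not run yet, so treat this as a sketch):
Code:
# scan the whole by-id directory instead of naming a single device
zpool import -d /dev/disk/by-id/
# scan /dev directly as a fallback (the default search path)
zpool import -d /dev/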

Any ideas on what to try next are appreciated!
 
lsblk shows the disk (no partitions, but this is normal based on my reading):
Code:
root@proxmox:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0   3.6T  0 disk
Hi.
Are you sure this is the disk which had ZFS?
For instance, the disk I created ZFS on has two very distinctive partitions:

Code:
sdb                            8:16   0 931.5G  0 disk
|-sdb1                         8:17   0 931.5G  0 part
`-sdb9                         8:25   0     8M  0 part
 
Correct, although I never ran lsblk previously, so I don't know whether partitions were ever listed. Could it be the GPT table has gone walkabout?
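
If it helps, I can take a read-only look for a surviving partition table, something along these lines:
Code:
# list what gdisk can find; it also warns if the main or backup GPT header is damaged
gdisk -l /dev/sda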
 
Two disks were always the plan, but I'm a little nervous about using ZFS again if it's lost my data (it's not precious, hence the single disk) after the first power cut.

Is this rare for ZFS?

I do have a UPS, but the batteries need swapping out.
 
No, it may relate to the drive you selected.

Here is what I found for the WD43PURZ:
https://www.westerndigital.com/products/internal-drives/wd-purple-sata-hdd?sku=WD43PURZ

This drive uses CMR recording, not SMR. That's good.

In Germany I got WD Red 4 TB drives, the bad ones with SMR, straight from WD, and they died right after my first boot! They were brand new.
Since then I have banned all WD drives from my systems and use Seagate IronWolf NAS drives (5,600 RPM) and IronWolf Pro (7,200 RPM).

If you have two more SATA III connectors, please set up a ZFS mirror of 2x SSDs (with PLP and DRAM cache) as a special device for this HDD ZFS pool.
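
A rough sketch of what that would look like; the SSD device IDs below are placeholders, not real disks:
Code:
# add a mirrored special vdev (metadata and small blocks) to the existing pool
zpool add cctv special mirror /dev/disk/by-id/ata-EXAMPLE_SSD_1 /dev/disk/by-id/ata-EXAMPLE_SSD_2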
 
Two disks were always the plan, but I'm a little nervous about using ZFS again if it's lost my data (it's not precious, hence the single disk) after the first power cut.

Is this rare for ZFS?
With SSDs without PLP, it's common for data to be lost on an unexpected power loss, as they shuffle data around all the time (trim, flushing the SLC cache to TLC/QLC flash, and so on). This is not specific to ZFS; it can happen with any filesystem. I don't know why it would happen with an HDD, but maybe the WD Purple also shuffles data around, or it does some unsafe caching?
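
One thing that might be worth checking is whether the drive's volatile write cache is enabled; hdparm can show that as a read-only query:
Code:
# show the current write-cache setting of the drive
hdparm -W /dev/sda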
 
This is not specific to ZFS; it can happen with any filesystem.
This is specific to ZFS!! A filesystem other than ZFS can run into problems after a power outage, but I have never heard of the partitions disappearing, and I have never seen a filesystem that cannot find its own data; it either just mounts (with a journal replay) or asks to be repaired, and the repair either succeeds or fails.
PS: Don't forget that ZFS writes its label information four times on each device ... yet none of the four works here.
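
A quick, read-only way to see whether any of those four labels survived is zdb, for example:
Code:
# dump the four vdev labels ZFS keeps on a device; "failed to unpack label N" means that copy is unreadable
zdb -l /dev/sda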
 
I'm looking through the task history and maybe it was never "right" in the first place. I wiped the disk (it previously had an NTFS partition), and there's an error on one of the partition wipes. Here are the tasks in the order they happened.

Disk Wipe
Code:
found child partitions to wipe: /dev/sda2, /dev/sda1
wiping block device /dev/sda
/dev/sda2: 8 bytes were erased at offset 0x00000003 (ntfs): 4e 54 46 53 20 20 20 20
/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sda: calling ioctl to re-read partition table: Device or resource busy
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.24523 s, 168 MB/s
TASK OK

sda2 Partition Wipe:
Code:
wiping block device /dev/sda2
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.26299 s, 166 MB/s
unable to change partition type for /dev/sda2 - disk '/dev/sda' is not GPT partitioned
TASK OK

sda1 Partition Wipe (ERROR):
Code:
wiping block device /dev/sda1
TASK ERROR: error wiping '/dev/sda1': dd: invalid number: '15.9833984375'

Create Pool (note the complaint about sda2 on the first line):
Code:
unable to change partition type for /dev/sda2 - disk '/dev/sda' is not GPT partitioned
# /sbin/zpool create -o ashift=12 cctv /dev/disk/by-id/ata-WDC_WD43PURZ-74BWPY0_WD-WX42D53460DE-part2
# /sbin/zfs set compression=on cctv
# systemctl enable zfs-import@cctv.service
Created symlink '/etc/systemd/system/zfs-import.target.wants/zfs-import@cctv.service' -> '/usr/lib/systemd/system/zfs-import@.service'.
TASK OK

It's obvious that sda had partitions at one point, since the zpool was created on sda2 (judging by the second line of the pool-creation task).

So I'm going to try formatting the disk again, unless this sparks any other ideas?
 
Did you look with gdisk /dev/sda to see if the backup copy of the partition table might be recoverable?
unable to change partition type for /dev/sda2 - disk '/dev/sda' is not GPT partitioned
Or maybe the drive had partitions before you wiped it, but you did not run partprobe (or reboot), so the Linux kernel did not notice that the GPT and the partitions were already gone. That could mean the ZFS vdev is still present on the drive but its starting position is unknown (unless you can recover the partitioning from before the wipe).
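
If the backup GPT header somehow survived the wipe, gdisk's recovery menu can rebuild the main table from it, roughly like this (interactive; only write with 'w' once the printed layout looks right):
Code:
gdisk /dev/sda
r   # recovery and transformation menu
b   # use backup GPT header (rebuilding main)
c   # load backup partition table from disk (rebuilding main)
p   # print the result and sanity-check it before writing anything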