zfs-zed sending notifications for nonexistent pool

mambojuice

New Member
Aug 15, 2025
Occasionally (about once a week or so) I get alerts emailed to me from my Proxmox host about an unavailable device in a ZFS pool called 'pool1'. The problem is, there is no pool named 'pool1' on this host.

zfs-zed log:
Code:
-- Boot f051997ce057462f80774acbc167c7f2 --
Jul 30 21:06:09 poopsmith systemd[1]: Started zfs-zed.service - ZFS Event Daemon (zed).
Jul 30 21:06:09 poopsmith zed[1656]: ZFS Event Daemon 2.2.7-pve2 (PID 1656)
Jul 30 21:06:09 poopsmith zed[1656]: Processing events since eid=0
Jul 30 21:06:09 poopsmith zed[1684]: eid=2 class=config_sync pool='local-raidz'
Jul 30 21:06:09 poopsmith zed[1693]: eid=5 class=config_sync pool='local-raidz'
Jul 30 21:06:09 poopsmith zed[1696]: eid=6 class=statechange pool='pool1' vdev=wwn-0x5002538c407103e3-part9 vdev_state=UNAVAIL
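
(The log above is just the journal for the zfs-zed unit; something like this should pull it for the current boot:)

Code:
# zed messages for the current boot only
journalctl -b -u zfs-zed.service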

Notification contents:
Code:
ZFS has detected that a device was removed.

impact: Fault tolerance of the pool may be compromised.
   eid: 6
 class: statechange
 state: UNAVAIL
  host: poopsmith
  time: 2025-08-14 11:59:51-0400
 vpath: /dev/disk/by-id/wwn-0x5002538c407103e3-part9
 vphys: pci-0000:02:00.1-ata-6.0
 vguid: 0xD8A7FC5217D6260D
 devid: ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0J700367-part9
  pool: pool1 (0xDE4B3E03A1C01471)

The only thing that concerns me, and keeps me from ignoring these alerts entirely, is that the device in question, ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0J700367, does exist and does have a part9 partition.

Code:
root@poopsmith:/# zpool list -v
NAME                                                 SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
local-raidz                                         6.98T   336G  6.66T        -         -     9%     4%  1.00x    ONLINE  -
  raidz1-0                                          6.98T   336G  6.66T        -         -     9%  4.69%      -    ONLINE
    ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0K308487   1.75T      -      -        -         -      -      -      -    ONLINE
    ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0K501065   1.75T      -      -        -         -      -      -      -    ONLINE
    ata-SAMSUNG_MZ7LM1T9HMJP-00005_S3B4NX0J600078D  1.75T      -      -        -         -      -      -      -    ONLINE
    ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0J700367   1.75T      -      -        -         -      -      -      -    ONLINE
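
(To double-check that the part9 partition really exists, lsblk against the by-id path from the notification should list the disk along with its partitions:)

Code:
# whole-disk symlink from the notification; lsblk shows the disk and its partitions, part9 included
lsblk /dev/disk/by-id/ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0J700367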

Any idea where this phantom pool might be defined?
 
I believe I've found the trigger: the zfs-import-scan service.

systemctl status zfs-import-scan.service
...
Aug 17 14:15:58 poopsmith systemd[1]: Starting zfs-import-scan.service - Import ZFS pools by device scanning...
Aug 17 14:16:00 poopsmith zpool[1419]: internal error: cannot import 'pool1': Value too large for defined data type
Aug 17 14:16:00 poopsmith systemd[1]: zfs-import-scan.service: Main process exited, code=killed, status=6/ABRT
Aug 17 14:16:00 poopsmith systemd[1]: zfs-import-scan.service: Failed with result 'signal'.
Aug 17 14:16:00 poopsmith systemd[1]: Failed to start zfs-import-scan.service - Import ZFS pools by device scanning.

Manually restarting this service will trigger another alert.
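
(Reproducing it on demand is just a restart of the unit:)

Code:
systemctl restart zfs-import-scan.service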

I'm still unsure where 'pool1' is coming from. I've cleared the zpool cache and rebooted, but 'pool1' is still defined somewhere.
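
(For reference, clearing the cache on a stock install is basically the following, assuming the default /etc/zfs/zpool.cache location; re-setting the cachefile property makes the real pool write a fresh cache:)

Code:
# remove the stale cache, then re-register the existing pool so a new cache gets written
rm /etc/zfs/zpool.cache
zpool set cachefile=/etc/zfs/zpool.cache local-raidz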
 
Digging further into the zfs-import-scan failure, I believe I've figured out the issue. I realize I'm just talking to myself in this thread, but I'll post everything I did for posterity.

It looks like one of my disks may still have some metadata left over from a previous system it was in.

root@poopsmith:/tmp# zpool import -d /dev/disk/by-id
   pool: pool1
     id: 16017964684992844913
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

        pool1                       UNAVAIL  insufficient replicas
          mirror-1                  UNAVAIL  insufficient replicas
            ada5                    UNAVAIL
            wwn-0x5002538c407103e3  UNAVAIL  corrupted data


Checking all four disks in my pool with zdb shows there is indeed leftover metadata from my old NAS on one of them, the same disk flagged in the notification.


root@poopsmith:/tmp# zdb -l /dev/disk/by-id/ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0J700367
failed to unpack label 0
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
    version: 5000
    name: 'pool1'
    state: 1
    txg: 578336
    pool_guid: 16017964684992844913
    errata: 0
    hostid: 3109167131
    hostname: 'home-nas'
    top_guid: 15800391197736897674
    guid: 15611724062820541965
    vdev_children: 2
    vdev_tree:
        type: 'mirror'
        id: 1
        guid: 15800391197736897674
        metaslab_array: 385
        metaslab_shift: 34
        ashift: 12
        asize: 1760346505216
        is_log: 0
        create_txg: 14
        children[0]:
            type: 'disk'
            id: 0
            guid: 17750071472291287515
            path: '/dev/ada5'
            whole_disk: 1
            DTL: 1101
            create_txg: 14
        children[1]:
            type: 'disk'
            id: 1
            guid: 15611724062820541965
            path: '/dev/ada6'
            whole_disk: 1
            DTL: 1100
            create_txg: 14
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 2 3

root@poopsmith:/tmp# zdb -l /dev/disk/by-id/ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0K308487
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3

root@poopsmith:/tmp# zdb -l /dev/disk/by-id/ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0K501065
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3

root@poopsmith:/tmp# zdb -l /dev/disk/by-id/ata-SAMSUNG_MZ7LM1T9HMJP-00005_S3B4NX0J600078D
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
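
(I checked each disk one at a time above; a small loop over the by-id names, skipping the partition symlinks, would do the same thing, assuming the same Samsung naming:)

Code:
# dump labels from every Samsung whole-disk entry, skipping the -partN symlinks
for d in /dev/disk/by-id/ata-SAMSUNG_*; do
    case "$d" in *-part*) continue ;; esac
    echo "== $d"
    zdb -l "$d"
done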

I ran zpool labelclear -f /dev/disk/by-id/ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0J700367 and was able to remove all remaining traces of pool1.
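
(The same command in copy-paste form, plus a re-check of the labels afterwards; after the labelclear, zdb should fail to unpack all four labels on that disk:)

Code:
zpool labelclear -f /dev/disk/by-id/ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0J700367
# re-check: all four labels should now fail to unpack
zdb -l /dev/disk/by-id/ata-SAMSUNG_MZ7LM1T9HMJP-00003_S3LFNX0J700367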

Verified that the zfs-import-scan service now starts without triggering any errors or email notifications:

systemctl status zfs-import-scan
...
Aug 17 19:11:37 poopsmith systemd[1]: Starting zfs-import-scan.service - Import ZFS pools by device scanning...
Aug 17 19:11:37 poopsmith zpool[163026]: no pools available to import
Aug 17 19:11:37 poopsmith systemd[1]: Finished zfs-import-scan.service - Import ZFS pools by device scanning.

I didn't take the disk offline or anything, though anyone more cautious with their data may want to at least verify they have a recent, good backup first.
 