[FAILED] Failed to start zfs-import@pool.service - Import ZFS pool pool (power got cut)

Someyoung Guy

Active Member
Proxmox VE: 8.4.0
Linux 6.8.12-9-pve (2025-03-16T19:18Z)

I'm getting this on boot of PVE 8.4.0, and it seems to have been a thing in prior versions as well.

Code:
[FAILED] Failed to start zfs-import@pool.service - Import ZFS pool pool
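
For what it's worth, here's how I've been pulling the unit's status and its logs from the current boot (guessing at the unit name here; judging by the journal output further down, the instance on my box is actually zfs-import@VMs.service):

Bash:
# Show the failed import unit(s); systemctl accepts glob patterns
systemctl status 'zfs-import@*.service'
# Full log for that unit from this boot
journalctl -b -u zfs-import@VMs.service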

I've found a few threads, but nothing is helping. The disks are all marked "OK" under pve01 > Disks:

While typing this out I noticed in the screenshot that the disks aren't mounted, which is probably the issue, but why wouldn't they mount? I created a ZFS pool and had everything running swimmingly until we lost power, and perhaps that's the clue: the power went out, the battery backup got sucked dry, and now this problem.
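
In case it matters, here's roughly what I tried to see whether the pool even shows up for a manual import (pool name VMs comes from the journal output below; I'm not sure these are the right switches for my situation):

Bash:
# List exportable pools the system can see but hasn't imported
zpool import
# Attempt the import by name, scanning the stable by-id device paths
zpool import -d /dev/disk/by-id VMs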

[Screenshot: pve01 > Disks view, disks shown as OK but not mounted]

Uh oh... again, while filling this out and checking more threads about this issue, I saw one person say to go through the journal log, and in doing so I found this:

Bash:
May 18 19:19:02 pve01 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 18 19:19:02 pve01 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 18 19:19:02 pve01 systemd[1]: Starting zfs-import@VMs.service - Import ZFS pool VMs...
May 18 19:19:02 pve01 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 18 19:19:03 pve01 kernel: sd 6:0:2:0: [sdc] tag#415 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
May 18 19:19:03 pve01 kernel: sd 6:0:2:0: [sdc] tag#415 Sense Key : Aborted Command [current] [descriptor]
May 18 19:19:03 pve01 kernel: sd 6:0:2:0: [sdc] tag#415 Add. Sense: Logical block guard check failed
May 18 19:19:03 pve01 kernel: sd 6:0:2:0: [sdc] tag#415 CDB: Read(32)
May 18 19:19:03 pve01 kernel: sd 6:0:2:0: [sdc] tag#415 CDB[00]: 7f 00 00 00 00 00 00 18 00 09 20 00 00 00 00 00
May 18 19:19:03 pve01 kernel: sd 6:0:2:0: [sdc] tag#415 CDB[10]: 00 e8 28 c8 00 e8 28 c8 00 00 00 00 00 00 00 18
May 18 19:19:03 pve01 kernel: protection error, dev sdc, sector 15214792 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0
May 18 19:19:03 pve01 kernel: zio pool=VMs vdev=/dev/disk/by-id/scsi-35000c5009408131f-part1 error=84 type=1 offset=7788924928 size=12288 flags=1573264
May 18 19:19:03 pve01 kernel: sd 6:0:1:0: [sdb] tag#8525 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
May 18 19:19:03 pve01 kernel: sd 6:0:1:0: [sdb] tag#8525 Sense Key : Aborted Command [current] [descriptor]
May 18 19:19:03 pve01 kernel: sd 6:0:1:0: [sdb] tag#8525 Add. Sense: Logical block guard check failed
May 18 19:19:03 pve01 kernel: sd 6:0:1:0: [sdb] tag#8525 CDB: Read(32)
May 18 19:19:03 pve01 kernel: sd 6:0:1:0: [sdb] tag#8525 CDB[00]: 7f 00 00 00 00 00 00 18 00 09 20 00 00 00 00 00
May 18 19:19:03 pve01 kernel: sd 6:0:1:0: [sdb] tag#8525 CDB[10]: 01 02 6a 28 01 02 6a 28 00 00 00 00 00 00 00 18
May 18 19:19:03 pve01 kernel: protection error, dev sdb, sector 16935464 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0
May 18 19:19:03 pve01 kernel: zio pool=VMs vdev=/dev/disk/by-id/scsi-35000c50094089717-part1 error=84 type=1 offset=8669908992 size=12288 flags=1573264
May 18 19:19:03 pve01 kernel: sd 6:0:1:0: [sdb] tag#4330 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
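
From what I can tell, the "Logical block guard check failed" lines mean the drives are reporting protection-information (T10 PI) errors on reads, so I've also been poking at the drives themselves. This assumes smartmontools and sg3-utils are installed; device names are taken from the log above:

Bash:
# SMART health, attributes, and error log for one of the complaining disks
smartctl -a /dev/sdb
# READ CAPACITY (16): shows whether the disk is formatted with protection information
sg_readcap --long /dev/sdb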


Our server lost power abruptly, so is it possible each disk needs to be scanned and marked "fixed" or something?
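
If it helps, this is what I was planning to try next, based on my (possibly wrong) reading that a scrub is ZFS's way of scanning the disks and repairing damage after something like this:

Bash:
# Force the import if ZFS complains the pool wasn't cleanly exported
zpool import -f VMs
# Scrub: read everything and repair bad copies from the other side of the mirror
zpool scrub VMs
# Watch progress and per-device read/write/checksum error counts
zpool status -v VMs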

Any help would be appreciated. I think herein lies the problem, but I'm unsure of what to do.

They're in a ZFS RAID10 config.

Thank you!