ZFS pool not completely removed

LineF

Member
Jul 5, 2021
Hello,

I wanted to remove a ZFS pool used as datastore for backups.
With "zpool destroy truenas" I was able to remove the pool.

But now at every restart of the server I get following errors:

Jul 5 19:52:22 pbs systemd[1]: Starting Import ZFS pool ZFS\x2ddisk...
Jul 5 19:52:22 pbs systemd[1]: Condition check resulted in Import ZFS pools by cache file being skipped.
Jul 5 19:52:22 pbs systemd[1]: Starting Import ZFS pool ZFS\x2dtruenas...
Jul 5 19:52:22 pbs zpool[427]: cannot import 'ZFS-truenas': no such pool available
Jul 5 19:52:22 pbs systemd[1]: zfs-import@ZFS\x2dtruenas.service: Main process exited, code=exited, status=1/FAILURE
Jul 5 19:52:22 pbs systemd[1]: zfs-import@ZFS\x2dtruenas.service: Failed with result 'exit-code'.
Jul 5 19:52:22 pbs systemd[1]: Failed to start Import ZFS pool ZFS\x2dtruenas.
Jul 5 19:52:22 pbs systemd[1]: Started Import ZFS pool ZFS\x2ddisk.
Jul 5 19:52:22 pbs systemd[1]: Reached target ZFS pool import target.

Where is the pool truenas still known? How can I clean up this?

Thanks,
Martin
 
Hi,
This seems to occur in Proxmox VE as well. Why doesn't the service get disabled automatically when the pool is destroyed? Is it a known bug?
did you remove the pool via the UI Node > Disks > ZFS > More > Destroy or /nodes/{node}/disks/zfs/{name} API? If you used zpool destroy directly, that has no way of knowing about the systemd service.
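To illustrate the mechanism (a rough sketch, not an official Proxmox procedure): creating a pool through the API enables an instance of the zfs-import@.service template by placing a symlink in zfs-import.target.wants, and zpool destroy never removes that symlink. Simulated here under /tmp so it is safe to run; the pool name is taken from the log above, everything else mirrors the real paths:

```shell
# Simulated under /tmp; on a real node the directory is
# /etc/systemd/system/zfs-import.target.wants.
wants=/tmp/demo/zfs-import.target.wants
mkdir -p "$wants"

# Enabling zfs-import@ZFS\x2dtruenas.service means creating this symlink:
ln -sf /lib/systemd/system/zfs-import@.service \
    "$wants/zfs-import@ZFS\\x2dtruenas.service"

# 'zpool destroy' does not touch the symlink, so the unit keeps failing at
# boot. Removing the symlink is what cleans it up:
rm "$wants/zfs-import@ZFS\\x2dtruenas.service"
```

On the real system the equivalent cleanup would be `systemctl disable 'zfs-import@ZFS\x2dtruenas.service'` followed by `systemctl daemon-reload`.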
 
Hi,

did you remove the pool via the UI Node > Disks > ZFS > More > Destroy or /nodes/{node}/disks/zfs/{name} API? If you used zpool destroy directly, that has no way of knowing about the systemd service.

I’m not seeing More > Destroy as an option in PBS. What am I missing?
 
Sorry to revive an old thread, but I seem to be having the same problem and I haven't figured out how to disable the service for my destroyed pools.

I have 3 destroyed pools that I've remade into a new pool on the same drives.

I removed the pools with zpool destroy and then created new ones on the same drives.

Now I still have the old systemd unit failures on boot.

UNIT                                                LOAD   ACTIVE SUB    DESCRIPTION
● zfs-import@[nodename]\x2di\x2dNVME\x2dZFS.service loaded failed failed Import ZFS pool [nodename]\x2di\x2dNVME\x2dZFS
● zfs-import@[nodename]\x2di\x2dZFS.service         loaded failed failed Import ZFS pool [nodename]\x2di\x2dZFS
● zfs-import@ZFS\x2dSingle.service                  loaded failed failed Import ZFS pool ZFS\x2dSingle

The naming for these units seems inconsistent. Some contain the node name and one doesn't.

I tried systemctl disable and it seemed to work, with no stderr output, but on the next boot the problem persists.

I'm no expert on systemd, but I looked up ways to find a systemd unit file and I found this:

root@[nodename]:~# systemctl show -P FragmentPath zfs-import@[nodename]\\x2di\\x2dNVME\\x2dZFS.service
/lib/systemd/system/zfs-import@.service

So these failed units somehow come from the zfs-import@.service template?


How can I disable/remove these?

Thanks.
 

Have a look in: /etc/systemd/system/zfs-import.target.wants/
 
Have a look in: /etc/systemd/system/zfs-import.target.wants/

That may be what I needed. Thanks.

It looks like this imports a specific pool by name via %i, and it looks like \x2d is the escape sequence for - in the name of the pool.
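That matches how systemd escapes instance names; systemd-escape(1) is the real tool for this, and the '-' part of it can be sketched with sed (the pool name here is just an example):

```shell
# systemd escapes '-' in an instance name as \x2d. systemd-escape(1) does the
# full job, including other special characters; this sketch covers '-' only.
pool='ZFS-Single'
instance=$(printf '%s' "$pool" | sed 's/-/\\x2d/g')
printf '%s\n' "$instance"    # ZFS\x2dSingle
```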

So if that's the case, one of the failing units, zfs-import@[nodename]\x2di\x2dZFS.service, is for a pool I'm still using. It does get imported on boot, I have VMs running on it, and everything works. How can this systemd unit fail and yet the pool still be imported? Should I start a new thread for that one?

root@[nodename]:/etc/systemd/system/zfs-import.target.wants# nano zfs-import@[nodename]\\x2di\\x2dZFS.service

[Unit]
Description=Import ZFS pool %i
Documentation=man:zpool(8)
DefaultDependencies=no
After=systemd-udev-settle.service
After=cryptsetup.target
After=multipathd.target
Before=zfs-import.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -N -d /dev/disk/by-id -o cachefile=none %I

[Install]
WantedBy=zfs-import.target
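For what it's worth, the template also explains the \x2d in the unit names: %i is the escaped instance name, while %I is the unescaped form that zpool import actually receives. A sketch of that substitution, simulated with sed since systemd does it internally (pool name is just an example):

```shell
# %i is the escaped instance name; %I is the unescaped form passed to zpool.
instance='ZFS\x2dSingle'                             # what %i expands to
pool=$(printf '%s' "$instance" | sed 's/\\x2d/-/g')  # what %I expands to
printf '%s\n' "/sbin/zpool import -N -d /dev/disk/by-id -o cachefile=none $pool"
```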