cannot import 'wdblue-zfs': pool already exists (newbie help needed)

erodrigues

New Member
Oct 31, 2021
Hope someone can shed some light on this Proxmox newbie problem...

I am playing around with Proxmox to get familiar with it, which means sometimes breaking things, or deleting everything and starting over... One of these tests was to create a ZFS pool, delete it, and then create it again with the same name, "wdblue-zfs".

But now, on reboot, Proxmox stalls for some time and logs an error:

Code:
Nov 14 17:14:28 pve systemd[1]: Starting Import ZFS pools by cache file...
Nov 14 17:14:28 pve systemd[1]: Condition check resulted in Import ZFS pools by device scanning being skipped.
Nov 14 17:14:28 pve systemd[1]: Starting Import ZFS pool wdblue\x2dzfs...
Nov 14 17:14:28 pve zpool[625]: cannot import 'wdblue-zfs': pool already exists
Nov 14 17:14:28 pve systemd[1]: zfs-import@wdblue\x2dzfs.service: Main process exited, code=exited, status=1/FAILURE
Nov 14 17:14:28 pve systemd[1]: zfs-import@wdblue\x2dzfs.service: Failed with result 'exit-code'.
Nov 14 17:14:28 pve systemd[1]: Failed to start Import ZFS pool wdblue\x2dzfs.
Nov 14 17:14:28 pve kernel:  zd16: p1 p2
Nov 14 17:14:28 pve systemd[1]: Finished Import ZFS pools by cache file.
Nov 14 17:14:28 pve systemd[1]: Reached target ZFS pool import target.
Nov 14 17:14:28 pve systemd[1]: Starting Mount ZFS filesystems...
Nov 14 17:14:28 pve systemd[1]: Starting Wait for ZFS Volume (zvol) links in /dev...
Nov 14 17:14:28 pve systemd[1]: Finished Mount ZFS filesystems.
Nov 14 17:14:28 pve zvol_wait[1068]: Testing 3 zvol links
Nov 14 17:14:28 pve zvol_wait[1068]: All zvol links are now present.
Nov 14 17:14:28 pve systemd[1]: Finished Wait for ZFS Volume (zvol) links in /dev.
Nov 14 17:14:28 pve systemd[1]: Reached target ZFS volumes are ready.
Nov 14 17:14:58 pve systemd[1]: systemd-fsckd.service: Succeeded.
Nov 14 17:15:57 pve systemd[1]: dev-disk-by\x2duuid-d6dad002\x2daa72\x2d47fa\x2d8263\x2d5329e9a1614b.device: Job dev-disk-by\x2duuid-d6dad002\x2daa72\x2d47fa\x2d8263\x2d5329e9a1614b.device/start timed out.
Nov 14 17:15:57 pve systemd[1]: Timed out waiting for device /dev/disk/by-uuid/d6dad002-aa72-47fa-8263-5329e9a1614b.
Nov 14 17:15:57 pve systemd[1]: Dependency failed for Mount storage 'wdblue-usb' under /mnt/pve.
Nov 14 17:15:57 pve systemd[1]: mnt-pve-wdblue\x2dusb.mount: Job mnt-pve-wdblue\x2dusb.mount/start failed with result 'dependency'.
Nov 14 17:15:57 pve systemd[1]: dev-disk-by\x2duuid-d6dad002\x2daa72\x2d47fa\x2d8263\x2d5329e9a1614b.device: Job dev-disk-by\x2duuid-d6dad002\x2daa72\x2d47fa\x2d8263\x2d5329e9a1614b.device/start failed with result 'timeout'.
Nov 14 17:15:57 pve systemd[1]: Reached target Local File Systems.
Nov 14 17:15:57 pve systemd[1]: Starting Load AppArmor profiles...
Nov 14 17:15:57 pve systemd[1]: Starting Set console font and keymap...




I removed / created the ZFS pool via the web UI, and I understand that when doing this I should also run some commands related to the ZFS cache, or remove some startup service tied to the ZFS pool I just deleted? I followed a few guides but I can't seem to get this fixed :(
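
The guides pointed at the leftover systemd import unit and the ZFS cache file. Quoting from memory, the commands were roughly along these lines (I'm not sure they are the right fix for my case, so please correct me):

Code:
# list the per-pool import units systemd knows about
systemctl list-units 'zfs-import@*'

# disable the stale import unit for the old pool
# (systemd escapes the dash in the pool name as \x2d)
systemctl disable 'zfs-import@wdblue\x2dzfs.service'

# refresh the pool's entry in the ZFS cache file
zpool set cachefile=/etc/zfs/zpool.cache wdblue-zfs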

I could always just re-install Proxmox, but I really want to understand why this happens, so that if I run into the problem again, or want to destroy/create a ZFS pool, I know how to do it properly. Plus, I already have a few VMs and a CT running as the "final version" ;)

Thanks in advance to anyone who can help,
 
One of these tests was to create a ZFS pool, delete it, and then create it again with the same name, "wdblue-zfs".
Did you create it on the exact same disks?


I removed / created the ZFS pool via the web UI
You probably removed it from the Datacenter->Storage panel, right? That only removes the pool from the storage configuration, which tells Proxmox VE which storages are available.
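
For reference, the entry that the Datacenter->Storage panel manages lives in /etc/pve/storage.cfg and looks something like this (the content types here are just an example):

Code:
zfspool: wdblue-zfs
        pool wdblue-zfs
        content images,rootdir

Removing the storage there only deletes this entry; the pool and the data on disk stay untouched.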

If you want to actually destroy it on disk, you will have to do it manually on the CLI with zpool destroy. And for good measure, you can run zpool labelclear /dev/... afterwards to wipe any ZFS labels from those disks. But be careful not to destroy the wrong disk. Or use "Wipe Disk" in the GUI.
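
Roughly like this; /dev/sdX1 is a placeholder, so double-check the device with lsblk before running anything:

Code:
# destroy the pool (this deletes all data in it!)
zpool destroy wdblue-zfs

# then clear the ZFS labels from each former member device
# /dev/sdX1 is a placeholder - verify with lsblk first
zpool labelclear -f /dev/sdX1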
 
Did you create it on the exact same disks?



@aaron

Thanks for the help.

Yes, I did create it on the exact same disks, with the same labels... not a good idea, I see.

For my understanding, what would be the best practice to delete/destroy a ZFS pool?

Thanks,
ER.
 
