[SOLVED] Removing a ZFS Pool from UI after disk deletion

xantonin

New Member
Jun 9, 2023
I removed the disks backing a ZFS pool before I had a chance to remove the pool from the UI, and now the UI still shows the pool under the ZFS tab.

I did the following (the commands are collected in one block after this list):

  • When I try to delete it from the UI, it says
    • command 'zpool list -vHPL ssd-raid10' failed: not a valid block device
    • (Note: removing it at the "Datacenter" level works fine; it's only still listed at the node/PVE level)
  • Tried to remove it with "pvesm remove ssd-raid10"
    • It says "delete storage failed: storage 'ssd-raid10' does not exist"
  • Tried re-adding it to /etc/pve/storage.cfg and re-running the above - same result.
  • Disabled the pool's zfs-import systemd unit
  • Ran "systemctl clean zfs-import@ssd\x2draid10.service"
  • Checked for any leftover unit files to delete (they're already gone):
    • find /etc/systemd/ -name "*x2draid10.service"
    • find /usr/lib/ -name "*x2draid10.service"
  • Ran systemctl daemon-reload
  • Ran "service pveproxy restart"
I'm about out of ideas.
 
I just noticed the pool still shows up under "zpool status".

Code:
root@pve:~# zpool status
  pool: ssd-raid10
 state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
config:

        NAME                                               STATE     READ WRITE CKSUM
        ssd-raid10                                         DEGRADED     0     0     0
          mirror-0                                         DEGRADED     0     6     0
            ata-Samsung_SSD_850_EVO_250GB_S21NNSAFC47172A  REMOVED      0     0     0
            ata-Samsung_SSD_850_EVO_250GB_S21NNSAFC67440K  ONLINE       0     6     0
          mirror-1                                         DEGRADED     0    18     0
            ata-Samsung_SSD_850_EVO_250GB_S21NNSAFC46507N  REMOVED      0     0     0
            ata-Samsung_SSD_850_EVO_250GB_S21NNSAFC21972A  ONLINE       3    18     0

What really happened here is that I kept the disks in PVE, but moved them onto the HBA for a VM and then booted the VM, which had full control of the HBA through PCI passthrough. This was all on my lab system and the disks were empty, so I wasn't worried about data loss - in production you'd want to remove the storage entry and export the zpool first (without destroying it), which is why we're here. I guess I thought something would unlink the drives/pool once the VM took over. NOPE. My bad.
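For anyone reading later, I think the safer order would have been to detach everything on the host before handing the HBA to the VM - roughly like this (names are from my setup, <vmid> is just a placeholder):

Code:
# drop the PVE storage entry so nothing on the host keeps using the pool
pvesm remove ssd-raid10

# cleanly export the pool - the data stays on the disks, the host just lets go of them
zpool export ssd-raid10

# only then start the VM that gets the HBA via PCI passthrough
qm start <vmid>
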

Anyway, since the pool still shows up, I'll try to nuke it from there. Figures I'd find something new the moment I start a thread asking for help. I'll log what I do to help others.

Code:
root@pve:~# zpool destroy -f ssd-raid10
cannot unmount '/ssd-raid10': pool or dataset is busy
could not destroy 'ssd-raid10': could not unmount datasets

root@pve:~# zpool export -f ssd-raid10
cannot unmount '/ssd-raid10': pool or dataset is busy

root@pve:~# mount
ssd-raid10 on /ssd-raid10 type zfs (rw,xattr,noacl)
Oh dear. I definitely messed something up. I'll probably have to shut down the VM or force-unmount it. I'm not sure how/why it thinks two of the four drives are ONLINE, though; they should all show as REMOVED.
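For the record, this is the cleanup I'm planning to try once the VM is off (untested as I write this; <vmid> is a placeholder):

Code:
# shut down the VM holding the HBA
qm shutdown <vmid>    # or 'qm stop <vmid>' if it won't shut down cleanly

# lazy-unmount the stuck dataset so the pool is no longer "busy"
umount -l /ssd-raid10

# then try again to get rid of the pool
zpool export -f ssd-raid10
# or, since the data doesn't matter in my case:
zpool destroy -f ssd-raid10
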
 
Oops, I put this in the wrong forum - I meant Proxmox VE: Installation and configuration. Can someone move this?

I'll try shutting down my VM once it finishes its task and update this thread if that lets me remove the pool.
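If that does the trick, I'll double-check that nothing still references the pool, roughly like this:

Code:
zpool status                              # the pool should no longer be listed
grep ssd-raid10 /etc/pve/storage.cfg      # no leftover storage entry
systemctl list-units 'zfs-import@*'       # no stale per-pool import unit
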
 
