[SOLVED] Remove disk from ZFS pool

tuxillo

Hi,

By mistake I added a disk to my pool and now I cannot remove it. Is there any way to do so?

Code:
root@pve01:~# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 0B in 0 days 03:48:05 with 0 errors on Wed Mar 24 23:54:29 2021
config:

        NAME                            STATE     READ WRITE CKSUM
        rpool                           ONLINE       0     0     0
          sda3                          ONLINE       0     0     0
          wwn-0x5000c500b00df01a-part3  ONLINE       0     0     0
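
Looking at the status, the disk ended up as a second top-level vdev striped next to sda3, so presumably I ran zpool add when I meant zpool attach. For illustration, the difference (device paths as in my setup):

Code:
# what happened: adds the disk as a new top-level vdev, striped with sda3
root@pve01:~# zpool add rpool /dev/disk/by-id/wwn-0x5000c500b00df01a-part3
# what was intended: attaches the disk to sda3, forming a mirror
root@pve01:~# zpool attach rpool sda3 /dev/disk/by-id/wwn-0x5000c500b00df01a-part3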

When I try to remove the wwn device, I get the following message:

Code:
root@pve01:~# zpool remove rpool wwn-0x5000c500b00df01a-part3
cannot remove wwn-0x5000c500b00df01a-part3: root pool can not have removed devices, because GRUB does not understand them

Thanks,
 
Have you tried zpool detach rpool wwn-0x5000c500b00df01a-part3 instead of remove?
What version of Proxmox and ZFS are you using? Maybe it is possible to evacuate all data from a vdev with the latest ZFS, I'm not sure about that.
Alternatives are creating a new pool, copying everything over, and renaming it to rpool, which is quite a bit of work, or reinstalling Proxmox and restoring from backup.
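
If you go the new-pool route, a rough sketch (assuming a spare disk, with sdX as a placeholder, and leaving out the bootloader/boot-partition work that makes this so involved):

Code:
root@pve01:~# zpool create newpool /dev/sdX
root@pve01:~# zfs snapshot -r rpool@migrate
root@pve01:~# zfs send -R rpool@migrate | zfs recv -F newpool
# after verifying the copy and retiring the old rpool, rename on import:
root@pve01:~# zpool export newpool
root@pve01:~# zpool import newpool rpool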
 
Yes, I tried that too and it didn't work:

Code:
root@pve01:~# zpool detach rpool wwn-0x5000c500b00df01a-part3
cannot detach wwn-0x5000c500b00df01a-part3: only applicable to mirror and replacing vdevs

Proxmox and ZFS versions:

Code:
root@pve01:~# zfs version
zfs-0.8.5-pve1
zfs-kmod-0.8.5-pve1
root@pve01:~# pveversion
pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve)
 
Are you using GRUB to boot? If so, I don't think you can fix this without creating a new pool, copying everything over, and renaming that pool to rpool, which is quite involved; I would suggest reinstalling Proxmox instead. If you are using systemd-boot, maybe someone knows a way to force the removal and ignore the warning about GRUB?
EDIT: If you apt-get update and dist-upgrade your Proxmox, you should get newer ZFS modules (version 2.0.4). Do you still get the same error on zpool remove?
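
Something like the following, with a reboot so the new kernel module actually gets loaded:

Code:
root@pve01:~# apt-get update && apt-get dist-upgrade
root@pve01:~# reboot
root@pve01:~# zfs version    # should now report 2.0.x for both zfs and zfs-kmod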
 
Yes, I'm using GRUB to boot.

I've upgraded Proxmox, but I'm afraid to run the 'zpool remove' now. I'll back up everything first, then give it a try.
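
For the backup I'll probably do a recursive snapshot and send it to another machine, something like this (backuphost and the target dataset are placeholders):

Code:
root@pve01:~# zfs snapshot -r rpool@pre-remove
root@pve01:~# zfs send -R rpool@pre-remove | ssh backuphost zfs recv -F backup/rpool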
 
Eventually I removed the disk from the pool and then, thanks to the remark from @avw, I could attach it as a mirror:

Code:
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 07:22:19 with 0 errors on Sun Apr 11 07:46:20 2021
remove: Removal of vdev 1 copied 415G in 1h0m, completed on Mon Apr 19 03:12:46 2021
        1.41M memory used for removed device mappings
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          sda3        ONLINE       0     0     0

For anyone reading this thread later on: the solution is to run ZFS 2.0.4 or newer, with which zpool remove works.
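
For the record, the sequence was presumably along these lines (reconstructed from the posts above rather than copied from my terminal):

Code:
root@pve01:~# zpool remove rpool wwn-0x5000c500b00df01a-part3
root@pve01:~# zpool status    # wait for the evacuation shown under "remove:" to complete
root@pve01:~# zpool attach rpool sda3 /dev/disk/by-id/wwn-0x5000c500b00df01a-part3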

Thanks!
 
