ZFS mirror issues

squeeky

New Member
Aug 27, 2020
Hi all, so I messed up adding a new disk to my ZFS pool. It was previously a mirror, but a drive failed, and something went wrong while rebuilding.

Long story short, I'm running from one disk. A new disk arrived to save me, but I added it to the pool incorrectly, and it currently looks like this. I need to figure out the command to remove that NVMe drive from rpool and attach it back as a mirror, but I can't for the life of me find out how to remove it now that it has been added.

NAME                                                    STATE     READ WRITE CKSUM
rpool                                                   ONLINE       0     0     0
  ata-SanDisk_X400_M.2_2280_256GB_163508806629-part3    ONLINE       0     0     0
  nvme-GIGABYTE_GP-GSM2NE3256GNTD_SN204608906341-part3  ONLINE       0     0     0

Any advice? Currently it looks like I expanded my rpool :( and I don't see a way to undo this.

Thanks
 
Oof... you cannot.
ZFS does not (yet) support removing devices.
You have basically created a RAID 0 (the pool is now striped across two disks), and if you remove one, half of the data will be missing.

There are two options:
Create a new pool and send the data over.
Create a RAID 10 with four disks, by attaching a mirror to each of the existing disks. Correctly this time. :)
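
If you go the RAID 10 route, the zpool attach subcommand is what turns a single-disk vdev into a mirror (zpool add, by contrast, is what created the stripe you have now). A minimal sketch, assuming two new disks; the NEW_DISK_* paths are placeholders:

# attach a new disk to each existing top-level disk, turning each into a mirror
zpool attach rpool ata-SanDisk_X400_M.2_2280_256GB_163508806629-part3 /dev/disk/by-id/NEW_DISK_1
zpool attach rpool nvme-GIGABYTE_GP-GSM2NE3256GNTD_SN204608906341-part3 /dev/disk/by-id/NEW_DISK_2
# watch the resilver finish
zpool status rpool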

When working with zpools in such a way, I suggest you create a zpool checkpoint first, which will allow you to undo such "stupid" mistakes. :-)
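
A minimal sketch of the checkpoint workflow (note that rewinding requires the pool to be exported first, so for a root pool like rpool this has to be done from a live environment):

# take a checkpoint before the risky operation
zpool checkpoint rpool
# to undo: export, then import with rewind
zpool export rpool
zpool import --rewind-to-checkpoint rpool
# once everything looks good, discard the checkpoint to free its space
zpool checkpoint -d rpool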
 
Create a new pool and send the data over.
Check out the zfs send and zfs receive commands. With those you can store the data of the old pool in a file, destroy the old pool, create a new pool, and restore the data from that file. Of course, you need another drive or a network share for that.
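
A minimal sketch of that workflow; the snapshot name and the target path are placeholders, and it assumes the external drive has enough space for the whole pool:

# snapshot the whole pool recursively
zfs snapshot -r rpool@backup
# stream everything into a file on another drive
zfs send -R rpool@backup > /mnt/external/rpool-backup.zfs
# after destroying and correctly recreating the pool:
zfs receive -F rpool < /mnt/external/rpool-backup.zfs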
 
Thanks for your help. My old failed drive is actually still alive, though at 5% remaining lifetime. I think I'm going to take a direct clone of that disk to a spare portable disk and then start from there.
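
For cloning a failing disk, GNU ddrescue (not mentioned in the thread, just a suggestion) is often preferred over plain dd, since it retries bad sectors and keeps a map file so an interrupted clone can resume. A minimal sketch; device names are placeholders:

# /dev/sdX = failing source, /dev/sdY = spare portable disk
ddrescue -f /dev/sdX /dev/sdY /root/rescue.map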
 
In principle you can remove the second top-level vdev - check out the documentation in `man zpool` - the remove subcommand should do that.
However, I would in any case make a backup of everything first - and would probably also rather recreate the pool (and zfs recv the backup).
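
A minimal sketch of what that could look like, using the device names from the status output above. Note that removing a top-level vdev evacuates its data onto the remaining disk, so the pool needs enough free space and a ZFS version with device-removal support:

# evacuate and remove the wrongly added top-level vdev
zpool remove rpool nvme-GIGABYTE_GP-GSM2NE3256GNTD_SN204608906341-part3
# watch the evacuation finish
zpool status rpool
# then attach the NVMe as a mirror of the remaining disk
zpool attach rpool ata-SanDisk_X400_M.2_2280_256GB_163508806629-part3 nvme-GIGABYTE_GP-GSM2NE3256GNTD_SN204608906341-part3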
 
I'm going to go back to the failed drive, clone it, and start writing some sensible instructions for this. That way I can do it over and over to learn the correct way.

So far I have cloned the partition table the wrong way around and now added the mirror wrong. Lol.

The original plan was so simple: move the failed drive's data to the NVMe, check all is OK, order another NVMe, and then resilver onto that one.
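
For what it's worth, the classic way to get the partition-table clone wrong is swapping source and destination. A minimal sketch of the usual order, with placeholder device names (sgdisk replicates the table of the first device onto the one given with -R):

# copy the partition table FROM the healthy disk TO the new one
sgdisk /dev/disk/by-id/HEALTHY_DISK -R /dev/disk/by-id/NEW_DISK
# randomize GUIDs on the new disk so the two tables don't collide
sgdisk -G /dev/disk/by-id/NEW_DISK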
 
