Reinstall over existing rpool (wiping all disks)

mathx

I've got a remote machine with a remote console and want the quickest way to reinstall on top of the old disks (losing all data; I've already moved all containers elsewhere).

The problem is that if I don't wipe the disks, the installer barfs on rpool already existing. I'm pretty sure I can't zpool export rpool while / is on rpool.

If I install over the top of a disk whose pool was renamed 'oldrpool' it works, but of course at first boot the old pool and the new install's pool are both named rpool.

My method was to split the mirrored pool, note the id of the half still named rpool, note the disk device name of the renamed half, and install on that disk (hoping the installer kernel numbers the disks /dev/sd[a-z] the same, of course).

As expected it barfed and dropped to a single-user shell because of the two rpools. This is where I ran zpool import -f $id oldrpool to rename the offending old pool, leaving the new rpool as the only one by that name. Re-export it (to ensure its mountpoints are not holding onto /), then just type exit; it'll boot into your new install, where you can use sgdisk to copy the partitions from the new install's disk to the oldrpool disk and reattach the right partition (sketched below).
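Roughly, that recovery looks like the sketch below. $id, $olddisk and $newdisk are placeholders for whatever your system shows, and partition 3 assumes the usual PVE layout (BIOS boot, ESP, then the ZFS partition); check yours with lsblk first.
Code:
# in the single-user shell, after first boot trips over the two rpools
zpool import                          # lists importable pools; note the numeric id of the stale one
zpool import -f $id oldrpool          # rename the stale pool out of the way
zpool export oldrpool                 # re-export so its mountpoints cannot grab /
exit                                  # continue booting into the fresh install

# once booted: replicate the partition layout onto the old disk and reattach
sgdisk -R /dev/$olddisk /dev/$newdisk      # copy $newdisk's table onto $olddisk
sgdisk -G /dev/$olddisk                    # randomize GUIDs so the tables stay unique
zpool labelclear -f /dev/${olddisk}3       # clear the stale ZFS label before attaching
zpool attach rpool /dev/${newdisk}3 /dev/${olddisk}3   # mirror it and let it resilver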

Any easier method?
 
Currently the following option might be a tad nicer:
* start the installer in debug mode
* hit Ctrl+D to exit the first debug shell
* in the next debug shell you have ZFS available
* If you're absolutely sure that you have everything you need from the disks backed up and safe, you can use that shell to clear all disks of their labels and partition tables:
Code:
modprobe zfs                     # load the ZFS kernel module in the debug shell
zpool labelclear -f /dev/$part   # clear the ZFS label from each old rpool partition
sgdisk -Z /dev/$disk             # zap the GPT and MBR structures on each affected disk
You need to replace /dev/$part with all partitions that were part of an rpool (`lsblk -o PATH,UUID,FSTYPE,PARTTYPE,LABEL` should help in identifying those partitions).
You need to replace /dev/$disk with all disks whose partition table might be problematic.
AFAIR this needs to be done for all disks you do not select to be part of the new rpool; a concrete example is sketched below.
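As a concrete (hypothetical) example, assuming the old pool was a mirror on sda and sdb with ZFS on the third partition of each:
Code:
lsblk -o PATH,UUID,FSTYPE,PARTTYPE,LABEL   # old rpool members show FSTYPE zfs_member
for part in /dev/sda3 /dev/sdb3; do        # adjust to the partitions lsblk showed
    zpool labelclear -f "$part"
done
for disk in /dev/sda /dev/sdb; do
    sgdisk -Z "$disk"                      # zap their partition tables as well
done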

Then you can either continue with the installer or reboot without debug mode.

I hope this helps!
 
Thanks for the shortcut, I'll try it next time.

In my case it was not too bad, now that I know what to do. One thing that did happen: I could not zpool export oldrpool once I had booted into the new install, but a reboot fixed that (then I could labelclear and reattach it, etc.). Could not for the life of me get it to stop reporting 'dataset busy', despite no ZFS mounts and of course nothing about rpool showing up in /proc/*/mounts.
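For reference, the checks that all came up empty looked roughly like this:
Code:
zpool export oldrpool                  # fails with the 'dataset busy' error
zfs list -o name,mounted,mountpoint    # no oldrpool dataset shown as mounted
grep -l oldrpool /proc/*/mounts        # no process holding a mount either
# only a reboot let the export (and the labelclear afterwards) succeed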

Hint: I tend to attach via /dev/disk/by-id/* instead of /dev/sd*, so that if the disks are moved to another machine with different hardware, they come up the same. (I even detached the /dev/sdc I had installed on and reattached it by its disk id, at the cost of a resilver of course.)
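That swap is just a detach followed by a reattach; /dev/sda3 here stands in for the remaining mirror member and $diskid for the by-id name of the disk:
Code:
zpool detach rpool /dev/sdc3                                 # drop the /dev/sd* path from the mirror
zpool attach rpool /dev/sda3 /dev/disk/by-id/$diskid-part3   # same disk again, stable name
zpool status rpool                                           # watch the resilver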

Don't forget to grub-install /dev/disk/by-id/whatever (or /dev/sd*) if you use either of our methods, to ensure this second disk is bootable as well.
 
