Multiple pools named rpool after clean install

dsh

Well-Known Member
Hi, I've installed Proxmox 6.1 on 2x Intel P4510 (ZFS mirror).

Upon successful installation, it reboots and gets stuck at the initramfs console because there are multiple pools named rpool.

If I manually import my pool using its ID, it boots fine.

How can I delete the other pools named rpool?
I've tried zpool destroy "ID", but it gives an error stating that a pool name should start with a letter.

Thanks
 
Do you have other disks present which used to be part of a pool named 'rpool'?

What does zpool import show as detected but not imported pools? The disks should show up as well.
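For reference (a sketch, not from your system): from the initramfs prompt, running zpool import without a pool argument only scans for and lists importable pools, together with their numeric IDs and member disks, without actually importing anything:
Code:
# list detected but not yet imported pools, with their numeric IDs and disks
zpool import
# optionally restrict the scan to the stable by-id device paths
zpool import -d /dev/disk/by-id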
 
The other pools named "rpool" show as degraded.

The top two pools' disks are just other symlinks to the same disks used in the third pool, which is healthy and the one I want to use.

[attached screenshot of the pool listing]
 
The other pools named "rpool" show as degraded.

That means that this server contains some disks which were part of an rpool sometime in the past but did not get cleared (e.g., by zpool labelclear /dev/...).

To clear this up, you could try to import the real working pool by using its numeric ID, i.e., try something like:
Code:
# import the healthy pool by its numeric ID (keeping the name rpool), without mounting
zpool import -R / -N 95048311743892303 rpool
exit

in the initramfs prompt.

If this boots your system, you can check which block devices are really used by the "real" rpool, and run the aforementioned labelclear on all the others.
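For example (a sketch only; the by-id path below is a placeholder, not one of your actual devices):
Code:
# show the full device paths backing the imported, healthy rpool
zpool status -P rpool
# clear the stale ZFS label from a device that is NOT part of the healthy pool
zpool labelclear -f /dev/disk/by-id/nvme-OLD-DISK-part3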
 
I've deleted all partitions with fdisk before installation.

It's right after a clean installation.

As you can see, the two disks of the healthy rpool's mirror-0 (nvme-eui.0xxxxxxxxxxxxxxxxxxxxx-part3) also appear in the first and second rpool.

So, if I do zpool labelclear nvme-eui.0xxxxxxxxxxxxxxxxxxxxx-part3, would they also be removed from the healthy rpool?
 
I've deleted all partitions with fdisk before installation.

That doesn't clear the ZFS labels, which aren't stored in the partition table but on the disk itself...
To clear most things, like ZFS disks, Ceph OSDs, and various other setups, the following (very destructive!!) can be done:
Code:
# wipe the GPT/MBR partition structures
sgdisk --zap-all /dev/...
# overwrite the first ~200 MiB of the disk with random data
dd if=/dev/urandom bs=1M count=200 of=/dev/...

But, as mentioned, use-case-specific commands like zpool labelclear or ceph-volume lvm zap /dev/... can be enough on their own. :)

It's right after a clean installation.

Yeah, you said that. The disks you select for the new installation get zapped, labelcleared and what not, but other disks won't be touched by the installer (there could be important data on them) - if any of the others had a ZFS rpool on them, this situation arises.

So, if I do zpool labelclear nvme-eui.0xxxxxxxxxxxxxxxxxxxxx-part3, would they also be removed from the healthy rpool?

If the exact same device is used in the healthy pool, then yes (IIRC). That can happen if it was previously in a pool together with other disks that were left untouched (a bit hard to word non-confusingly, sorry).

In that case it'd be better to import the broken pools by their ID under another name, and then destroy them.

Something like:
Code:
zpool import -f 1106..id-of-broken-pool..000 temppool
zpool destroy -f temppool

should work in that case.
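Afterwards, a quick check that no stray rpool is left over could look like this (a sketch, assuming the healthy pool is the one currently imported):
Code:
# should not list any additional importable pools named rpool anymore
zpool import
# the remaining, healthy pool should report ONLINE
zpool status rpool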
 
So, if I do zpool labelclear nvme-eui.0xxxxxxxxxxxxxxxxxxxxx-part3, would they also be removed from the healthy rpool?

Oh, before I forget: label clearing the disks that are not part of the healthy pool should also work. The healthy ones normally just show up under the broken pools because the left-over information on the non-cleared disks refers to them (from when they were in a pool together earlier).

So if you labelclear all devices that are not part of any healthy ZFS pool and reboot, you should be good too.
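If you are unsure which devices still carry stale labels, zdb can print whatever ZFS label is left on a device before you clear it (a sketch; the device path is a placeholder):
Code:
# dump any ZFS label found on the device; a stale pool name/GUID shows up here
zdb -l /dev/disk/by-id/nvme-SOME-DISK-part3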
 
Thank you so much. Silly me, I thought deleting the partitions would clear the ZFS labels.

If only I'd cleared the ZFS labels before installation, this wouldn't have happened.

But now I know, thanks to you.
 
Another thing that can be done is to Secure Erase the SSDs before starting the Proxmox installer. The Secure Erase command tells the disk to wipe all data on it.

I've noticed that on recent motherboards the BIOS has an option to issue the command to the disk.

Don't use DBAN or a plain dd command to wipe SSDs completely, as SSDs work quite differently from spinning HDDs in regard to how the data is stored.
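On NVMe drives like the P4510, a secure erase can also be issued from a Linux live system with nvme-cli (a sketch; this irreversibly destroys all data, and the device name is a placeholder):
Code:
# user-data secure erase of the whole namespace (destroys ALL data on the drive)
nvme format /dev/nvme0n1 --ses=1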
 
