PVE 9 install - ZFS pool named 'rpool' already exists

nethub

New Member
We are experiencing a similar issue as described in that post; however, the proposed solution did not resolve our case. Can someone help?


Summary:
When reinstalling Proxmox VE (PVE) 9 with ZFS on a disk that has been wiped using wipefs or a quick zero operation, the installer displays a warning about an existing ZFS pool named 'rpool', despite the disk being wiped. This warning does not appear in PVE 8 under similar conditions or when reinstalling PVE 9 without wiping the disk.


Environment:
Proxmox VE Version: 9.0
Filesystem: ZFS
Hardware: Various (issue observed across multiple systems with different disk configurations)
Disk Wipe Tools: wipefs command or quick zero (e.g., dd zeroing the first few MB of the disk)


Steps to Reproduce:
  1. Perform a fresh installation of PVE 9 with ZFS on a disk
  2. Wipe the disk using either the wipefs command or a quick zero with dd (see the commands sketched after this list)
  3. Reinstall PVE 9 with ZFS on the same disk.
  4. Installer warning (attached screenshot): "A ZFS pool named 'rpool' (id xxxxxxxx) already exists on the system. Do you want to rename the pool to 'pool-OLD-xxxxxxxx' before continuing or cancel the installation?"
  5. Click "OK" to proceed with the installation.

Expected Behavior
After wiping the disk with wipefs or a quick zero, the PVE 9 installer should not detect any residual ZFS pool metadata, as the disk is expected to be clean.

The installation should proceed without prompting about an existing 'rpool'.
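
For debugging, leftover ZFS metadata can be checked for from a shell before reinstalling (a sketch, assuming zpool/zdb are available, e.g. in the installer's debug console on Ctrl-Alt-F3, and that the old rpool lived on partition 3, the default PVE ZFS layout):

  # scan attached disks for importable pools
  zpool import
  # dump any ZFS labels still present on the old rpool partition
  zdb -l /dev/sdX3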


Actual Behavior
The PVE 9 installer detects an existing ZFS pool named 'rpool' and displays the warning message.

Clicking "OK" renames the detected pool to 'pool-OLD-xxxxxxxx', and the installation completes successfully.
The installed PVE 9 system appears to function normally after the rename.


Additional Tests
PVE 9 → Wipe → PVE 8: No warning about existing ZFS pool.
PVE 8 → Wipe → PVE 9: No warning about existing ZFS pool.
PVE 8 → Wipe → PVE 8: No warning about existing ZFS pool.
PVE 9 → Reinstall PVE 9 (no wipe): No warning about existing ZFS pool.

These tests suggest the issue is specific to PVE 9’s installer when reinstalling on a wiped disk previously used for PVE 9 with ZFS.
 

Attachments

  • pve9_zfs_warning.jpg (24 KB)
I can confirm this behaviour. Regardless of what kind of drives are used (NVME or SSD) and how the drives have been wiped, the installer comes up with this error message.

I also used various partition tools which deleted/erased everything on the disks prior, yet the message re-appeared during setup.

What confused me was the following: if you choose a mirror setup and leave the drive order in the list of available drives unchanged (for example, the first two drives are NVMe and the target drives sda and sdb sit in 3rd and 4th position), only deselecting the drives that should not be part of the mirror, the installer continues without the warning. As soon as I re-arrange the drive order in the list (moving sda and sdb to 1st and 2nd position and setting all other drives to "don't use"), the message appears.

Tested on Intel and AMD systems with either onboard S-ATA or HBAs.
 
dd'ing the first MB of the disk may not be enough. With GPT, there is also a backup copy of the partition table at the end of the disk. If possible, use blkdiscard on the block device (e.g. blkdiscard /dev/sda).
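
For reference, roughly what that looks like, plus a dd fallback for drives that do not support discard (/dev/sdX is a placeholder; these commands are destructive, so double-check the device name):

  # discard the whole block device (fast on SSD/NVMe)
  blkdiscard /dev/sdX
  # fallback: zero the first and last MiB, which covers both GPT copies
  dd if=/dev/zero of=/dev/sdX bs=1M count=1
  size=$(blockdev --getsize64 /dev/sdX)   # disk size in bytes
  dd if=/dev/zero of=/dev/sdX bs=512 seek=$(( size/512 - 2048 )) count=2048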
Is this tool available during install? I have exactly the same issue, and it's just bugging me that I wiped the disks - I even took them out and formatted them again on a Windows system, ran diskpart, found the 17 KB partitions, deleted all partitions, formatted again - and STILL this message comes up. I really want my drives completely clean before installing Proxmox - for the 30th time.
I'm a total noob, so it's no big deal breaking things and reinstalling. At least I'm learning things, but this one is really frustrating.
BTW, I'm using twin Samsung SSDs for the root ZFS installation.
 
First use 'zpool labelclear', then 'wipefs -a', both on /dev/sdX.
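
A minimal sketch of that sequence, assuming the old rpool lived on the third partition (the default layout of a PVE ZFS install); adjust device names for your system:

  # clear the ZFS labels on the partition that held the pool
  zpool labelclear -f /dev/sdX3
  # then remove remaining signatures from the whole disk
  wipefs -a /dev/sdX

Note that labelclear needs to point at the device that actually carries the label, which might be why running it against the whole /dev/sda fails in the post below.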
Thanks for the information. I tried pressing Ctrl-Alt-F3 to enter command mode and ran zpool labelclear -f /dev/sda, but it returned the error: failed to clear label for /dev/sda. The only command that works for me is blkdiscard.