ZFS complaining about phantom pool

Doug Meredith

New Member
Dec 22, 2021
Today I removed a PVE node from the cluster, and did a fresh install. I've re-added it to the cluster, have Ceph working again, and all is well. Except.... I keep getting email messages like this:

[attached screenshot of the ZFS email alert]
This is a fresh install with only three disks in the system: two brand-new disks in a ZFS mirror (rpool), plus a Ceph data disk. This system has never had a pool3 on it, although another node in the PVE cluster did in the past. Neither zpool status nor zpool import shows any sign of a pool3. Additionally, the device referenced in the email appears to be the Ceph disk, which is a full, unpartitioned disk used by Ceph. Before adding it to Ceph, it was wiped using the PVE UI.

I'm lost. Why is it talking about a pool that has never existed on this host? And why is ZFS even looking at the disk used by Ceph?

My best guess is that the Ceph disk was formerly in another host, where it was part of pool3, and that some remnant of ZFS metadata remains on the disk. I have no idea how to verify this, or what to do about it. Any help would be appreciated.
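In case it helps anyone reading, here is a sketch of how one might check for stale ZFS labels on the disk (the device path /dev/sdc below is a placeholder -- substitute the actual Ceph disk):

```shell
# Hypothetical device name -- replace with the actual Ceph data disk.
OSD_DEV=/dev/sdc

# Dump any ZFS labels still present on the device. A leftover label
# would show the old pool name (e.g. pool3) and its GUID; a clean disk
# reports "failed to unpack label" for all four label locations.
zdb -l "$OSD_DEV"

# If stale labels are found, they can be cleared. WARNING: destructive --
# only run this if the disk is out of Ceph, or you are certain the
# labels are orphaned remnants. (Commented out here for safety.)
# zpool labelclear -f "$OSD_DEV"
```

Note that ZFS writes labels at both the start and the end of a device, so a wipe that only zeroes the beginning of the disk can leave the trailing labels intact, which would be consistent with the PVE wipe not having removed them.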
 
