Today I removed a PVE node from the cluster and did a fresh install. I've re-added it to the cluster, Ceph is working again, and all is well. Except... I keep getting email messages like this:
This is a fresh install with only three disks in the system: two brand-new disks in a ZFS mirror (rpool) plus a Ceph data disk. This system has never had a pool3 on it, although another node in the PVE cluster did in the past. Neither zpool status nor zpool import show any sign of a pool3. Additionally, the device referenced in the email message appears to be the Ceph disk. That is a full, unpartitioned disk used by Ceph, and it was wiped using the PVE UI before being added to Ceph.
I'm lost. How can it possibly be talking about a pool that has never existed on this host? And why is ZFS even looking at the disk used by Ceph?
My best guess is that the Ceph disk was formerly in another host, was part of pool3 there, and that some remnant of ZFS metadata remains on the disk. I have no idea how to verify this, or what to do about it. Any help would be appreciated.
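In case it helps anyone suggest an answer: one read-only check I'm considering (assuming /dev/sdX stands in for the Ceph disk's actual device node) is dumping any ZFS labels still present on the device with zdb:

    # Read-only: dump any ZFS labels still present on the device
    # (replace /dev/sdX with the actual device node of the Ceph disk)
    zdb -l /dev/sdX

If that shows labels referencing pool3, they would presumably be leftovers from the disk's previous life. I'm assuming I should not run zpool labelclear against a disk that is currently an active Ceph OSD, since that writes over the label regions and could damage the OSD, so I'd like to confirm the right way to clean this up.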