Hi,
3-node cluster here, nodes on ZFS RAID1 (OS) and Ceph (3x2 OSDs).
Strange behaviour:
- I create two VMs (100 and 101) with disks vm-100-disk and vm-101-disk on Ceph.
- The VMs get corrupted
- I try to delete them: 101 is deleted right away, while 100 stalls but I finally manage to delete it.
- vm-100-disk remains on Ceph, still corrupted
- I delete the corrupted disk with rbd commands (roughly as sketched below)
- Trying to re-create the VMs with the same IDs and the same disk names, 101 is created in seconds, while 100 stalls during creation
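
For reference, this is roughly what I ran to get rid of the leftover image (the pool name "ceph-vm" is just a placeholder for my actual pool; the image name is as it appears in the pool):

# list the images in the pool to confirm the leftover disk is still there
rbd -p ceph-vm ls
# check whether some client/watcher is still holding the image
rbd -p ceph-vm status vm-100-disk
# remove the leftover image
rbd -p ceph-vm rm vm-100-disk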
Where can I check if some corrupted config from the previous VM 100 was left behind?
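
The only places I know to look so far are the VM config and the storage content, something like this (storage ID "ceph-vm" is a guess, mine may be named differently):

# the VM config file on the clustered filesystem (should not exist after deletion)
cat /etc/pve/qemu-server/100.conf
# the VM config as Proxmox sees it, if the VM still exists
qm config 100
# the disk images Proxmox still sees on the Ceph storage
pvesm list ceph-vm

Is there anything else, cluster-wide or on the Ceph side, that could still reference the old 100?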
Thanks,