Hello lovely people,
we're currently evaluating whether PVE is a good alternative to our old ESXi environment, and I've set up some test machines (virtualized inside ESXi) to try things out.
I've created some Ceph pools and run tests with different pool configurations (hard shutdowns of the PVE nodes to simulate failures, and so on).
I also had one Debian VM running inside PVE to see at which point the Ceph cluster stops accepting writes.
After I was done with one of the Ceph pools I wanted to delete it, and sure enough, I could.
A few moments later I discovered that my running VM still had its disk on that exact pool! PVE never showed an error along the lines of "You can't delete storages / Ceph pools that are still in use". Am I right in thinking this is a bug?
It's no big deal, since this was just a test VM and it gave me the chance to test the restore functionality too, but I'd consider this a serious bug and would hope never to hit it in a real (production) setup!
This was done on PVE 8.3.2 with Ceph 19.2 (if that's any help).