I'm working on automatically deploying VMs via the API. To update a VM I delete the old one and clone a new one from an up-to-date template. While doing this I noticed that when deleting several VMs at once, some of them leave their disk behind.
I think I first had it happen with 5 VMs, but for testing it's easier to just use a hundred or so — then most of them leave their disks behind.
I'm using a 4-node hyper-converged Ceph cluster running up-to-date no-subscription packages, and I call the HTTP API directly from Python.
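For reference, this is roughly the kind of call I make — a minimal sketch using only the Python stdlib; the host, token, and node names are placeholders, not my real setup. Since `DELETE /nodes/{node}/qemu/{vmid}` only starts a `qmdestroy` task and returns a UPID, the sketch also polls the task status before moving on:

```python
import json
import time
import urllib.parse
import urllib.request

# Placeholder connection details -- replace with your own host/token.
PVE_HOST = "https://pve1.example.com:8006"
API_TOKEN = "root@pam!automation=<token-secret>"

def delete_vm_url(node: str, vmid: int) -> str:
    """Build the DELETE URL; the extra flags ask PVE to also remove
    disks that are no longer referenced in the VM config."""
    params = urllib.parse.urlencode({
        "destroy-unreferenced-disks": 1,
        "purge": 1,  # also drop backup/replication jobs for the VMID
    })
    return f"{PVE_HOST}/api2/json/nodes/{node}/qemu/{vmid}?{params}"

def api(method: str, url: str):
    """Issue a single authenticated API call and return the 'data' field."""
    req = urllib.request.Request(
        url, method=method,
        headers={"Authorization": f"PVEAPIToken={API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]

def delete_vm(node: str, vmid: int) -> None:
    """Start the deletion task and wait for it to finish before returning."""
    upid = api("DELETE", delete_vm_url(node, vmid))
    status_url = (f"{PVE_HOST}/api2/json/nodes/{node}"
                  f"/tasks/{urllib.parse.quote(upid)}/status")
    while api("GET", status_url)["status"] == "running":
        time.sleep(1)
```

I'm mainly wondering whether deleting many VMs without waiting on each task like this could explain the leftover disks.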