Hello Proxmox team,
I have an old 2-node cluster running Proxmox 3.4-6 using LVM on iSCSI.
On my new setup (Proxmox 4.1) I use Ceph and have started to migrate a few test VMs. On the new setup I also have the iSCSI/LVM storage added to the system, but it is not actively used.
When I create a new VM on the new cluster on the Ceph storage with the same ID as a VM on the old cluster, and then destroy the image, it also removes the disk on LVM. E.g. I have rbd:vm-100-disk-1 and lvm:vm-100-disk-1.
So the qm destroy command does not check which storage is referenced in the VM config; it searches all storages and destroys every disk with a matching name, even disks that are not defined in the VM configuration.
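
For reference, roughly how I reproduce it, using the storage IDs from the example above (rbd for Ceph, lvm for the iSCSI LVM; the disk size and options are just placeholders):

    qm create 100 --virtio0 rbd:4    # new VM 100 with a disk on Ceph; lvm:vm-100-disk-1 already exists from the old cluster
    pvesm list lvm                   # old lvm:vm-100-disk-1 is still listed
    qm destroy 100                   # removes rbd:vm-100-disk-1 ...
    pvesm list lvm                   # ... and lvm:vm-100-disk-1 is gone as well, although the VM config never referenced it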
Is this a bug, or should I remove the LVM storage from the new cluster?
With kind regards,
William van de Velde