Hi,
we've probably hit a bug.
In a cluster setup with shared storage (CLVM over iSCSI), the newly created LV isn't deactivated after the automatic migration:
You can reproduce it the following way:
Let's say you have a KVM template on node1 and do a "clone to node2". The new VM is first cloned on node1 and then moved to node2. The problem is that the logical volume (LV) of the new VM (for example vm-101-disk-1) isn't deactivated on node1 after it has been moved to node2 at the end of the cloning process. If you now delete the VM on node2, node1 does not notice this, and you will run into an error if you try to create a new VM with the same ID after deleting the VM on node2.
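Just to illustrate what I mean by "still active" on node1 (the volume group name 'vmstore' below is only a placeholder, substitute your own CLVM VG):

# on node1, after the clone has been moved to node2:
lvs -o vg_name,lv_name,lv_attr vmstore/vm-101-disk-1   # the 'a' in the attr field shows the LV is still active here
dmsetup info -c | grep vm--101--disk--1                # the device-mapper node for the LV is still present on node1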
Steps to reproduce:
- create a cluster with at least 2 nodes and iSCSI storage with CLVM on top
- create a KVM VM on node1 on the CLVM storage and convert it to a template (right click -> Convert to template)
- now create a clone of the template and choose node2 as the destination
- after cloning is complete and the new VM is located on node2, you will see on node1 (via SSH and 'lvdisplay' or 'dmsetup info') that vm-101-disk-1 is still active
- delete the new VM on node2
- try to create a new VM with the ID of the VM you just removed
- you will now get an LVM error
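As a possible manual workaround until this is fixed (only tried on our setup, and 'vmstore' is again just a placeholder for the CLVM volume group name), you can deactivate the leftover LV on node1 by hand before reusing the VM ID:

# on node1: make sure the VM is not running anywhere, then deactivate the stale LV
# on this node only ('l' = local activation change, relevant with clvmd)
lvchange -aln vmstore/vm-101-disk-1

# verify that the device-mapper node is gone
dmsetup info -c | grep vm--101--disk--1 || echo "no longer active"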
Greets,
Patrick