Hi
Here's what happened:
There's a cluster with several nodes. One of them became unreachable (actually its HDD went bad, but I don't think the reason it can't be reached matters). During that node's downtime, a user created a new VM on another node, and it got the lowest free VMID, which already belonged to a VM on the unreachable node. After some time the problematic node was working again (in our case we replaced the HDD and restored the backups of those VMs). But the problem is that there are now two VMs with the same VMID. (I'd like to mention that it is also possible to create a VM with an already existing VMID on another node even when nothing is wrong.)
Is there a way to avoid this kind of situation? For example, the master could store all existing VMIDs and refuse to hand them out while one of the nodes is unreachable, and only reuse those VMIDs once the node has been deleted from the cluster. A rough sketch of what I mean is below. Or maybe there is a better way; it's just an idea.
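Just to illustrate the idea, here is a minimal sketch in Python (not Proxmox code, all names are made up): the master remembers every VMID per node and only frees a node's VMIDs when that node is explicitly removed from the cluster, never just because it is currently offline.

class VmidRegistry:
    def __init__(self, start=100):
        self.start = start
        self.by_node = {}          # node name -> set of VMIDs owned by that node

    def next_free(self, node):
        """Allocate the lowest VMID not used by ANY node, reachable or not."""
        used = set().union(*self.by_node.values()) if self.by_node else set()
        vmid = self.start
        while vmid in used:
            vmid += 1
        self.by_node.setdefault(node, set()).add(vmid)
        return vmid

    def remove_node(self, node):
        """Free a node's VMIDs only on explicit removal from the cluster."""
        self.by_node.pop(node, None)


# Usage: node2's VMIDs stay reserved even while node2 is merely unreachable.
reg = VmidRegistry()
reg.by_node["node2"] = {101, 102}      # VMs living on the (now unreachable) node
print(reg.next_free("node1"))          # -> 100, next call gives 103; never 101 or 102

With something like this, the collision in my case would not have happened, because 101/102 would stay reserved until the broken node was actually removed from the cluster.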
And if it matters, it happened with v1.3:
# pveversion -v
pve-manager: 1.3-1 (pve-manager/1.3/4023)
qemu-server: 1.0-14
pve-kernel: 2.6.24-8
pve-kvm: 86-3
pve-firmware: 1
vncterm: 0.9-2
vzctl: 3.0.23-1pve3
vzdump: 1.1-2
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1