Hi, my needs are:
Simple redundancy of storage and compute power.
I would love to have:
a) 2 nodes, Ceph as shared storage, each node running its own VMs.
b) Be able to migrate VMs between nodes
c) If a node is down, be able to manually run its VMs on the other node, without the risk that when the node comes up again it starts its VMs and destroys them.
d) Be able to grow to 3 or 4 nodes without quorum concerns (which is why I will not use DRBD).
The scenario I want to achieve with the above:
- if I have to do maintenance on a node: manually migrate its VMs, turn it off or disconnect it from the cluster, do what I need, turn it back on, and then migrate the VMs back
- if a node fails (e.g. power supply failure): start its VMs on the other node, repair the node, turn it back on (it must NOT start those VMs automatically), then migrate the VMs back
- the nodes are remote to me: if I'm called because some VMs are not working, I connect via SSH to the surviving node, start the remaining VMs there, and don't have to fear that if the dead node restarts on its own it will start its VMs and destroy them (since I already started them on the other node)
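To be concrete, this is the kind of manual handling I have in mind (just a sketch; VMID 100 and node names pve1/pve2 are made up, and as far as I understand moving the config file inside /etc/pve is the usual way to take over a VM from a dead node):

```shell
# Planned maintenance: live-migrate VM 100 from pve1 to pve2
qm migrate 100 pve2 --online

# Unplanned failure of pve1: on pve2, take ownership of the VM by
# moving its config within the clustered /etc/pve filesystem, then start it
mv /etc/pve/nodes/pve1/qemu-server/100.conf \
   /etc/pve/nodes/pve2/qemu-server/100.conf
qm start 100
```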
Is this possible, and how? As far as I understand, quorum can be forced to 1 in Proxmox, but I don't know whether the same is possible in Ceph.
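On the Proxmox side, the command I mean is this (whether Ceph's monitors have any equivalent is exactly my question):

```shell
# Tell the cluster to accept an expected quorum of 1, so /etc/pve
# becomes writable again on the single surviving node
pvecm expected 1
```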
Also important: it seems that when a node turns on, it starts its VMs without checking whether they are already running elsewhere. Is that true?
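The only related knob I have found so far is the per-VM autostart flag; I assume that disabling it would at least prevent the automatic start at boot, though it doesn't answer the "already running elsewhere" question:

```shell
# Disable "start at boot" for VM 100, so a recovering node
# does not launch it on its own
qm set 100 --onboot 0
```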
Thanks a lot