Dear All,
I have two Proxmox 3.4 clusters in different racks (at different DCs), each running 2-node HA with a third quorum disk per cluster. Both clusters also run DRBD between their two Proxmox nodes.
I want to migrate everything to Proxmox 5.2 and do away with the need for DRBD.
I've noticed a few issues with the new setup and would appreciate some inputs.
1) There does not appear to be a quorum disk / mkqdisk option in the latest Proxmox 5.2 / Corosync build. I've seen references to people using a Raspberry Pi or Intel NUCs as a third node. Is there a simpler solution?
If I really must have a fully functional third node for each cluster, can I add the nodes from the other cluster over an OpenVPN tunnel? In that case, what happens if I lose a rack housing two of the nodes? Will HA still work correctly?
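For reference, this is how I've been checking the vote/quorum state while testing the new 5.2 nodes (just a sketch; the expected-votes override is an emergency measure, not a substitute for a proper third vote):

```shell
# Run on any cluster node: show membership, expected votes
# and whether the cluster is currently quorate.
pvecm status

# Emergency only: if one node of a two-node cluster is lost and
# there is no third vote, quorum can be forced temporarily so the
# surviving node stays manageable.
pvecm expected 1
```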
2) I want to use lvm-thin storage on each node and rely on replication/HA, but in testing I've found that only the Migrate option in Proxmox can move a VM between servers when the storage is local (such as lvm-thin). The Clone option requires shared storage on the target if I want to clone a VM to another server. The workaround currently seems to be: migrate the VM to the other server in the cluster, clone it there, then migrate it back. Does anyone have a simpler solution?
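To make the workaround concrete, this is roughly what I'm doing today (VM IDs and node names are placeholders; `--with-local-disks` is needed so the lvm-thin volumes travel with the VM):

```shell
# 1) Migrate VM 100, including its local lvm-thin disks, to nodeB:
qm migrate 100 nodeB --online --with-local-disks

# 2) On nodeB, full-clone it to local storage there:
qm clone 100 101 --full

# 3) Migrate the original back to nodeA:
qm migrate 100 nodeA --online --with-local-disks
```

Three migrations' worth of disk traffic just to get one clone onto another node, which is why I'm hoping there's a better way.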
Your inputs are most gratefully received.