Moving cluster to new location

brucexx

I am moving a cluster of 4 nodes to a new location. No need to worry about the external storage, as I can fit all the VMs on local drives. My plan is to move one node first and set expected votes to 1 (pvecm expected 1) so it can work by itself in the new location, while the remaining 3 nodes keep working in the old location. Then I would slowly migrate all the VMs over the weekend when it is slow and I can oversubscribe, and finally move the remaining 3 nodes to the new location, which should automatically recreate the cluster of 4 nodes.
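For reference, this is roughly what I would run on the moved node once it is powered up alone in the new location (just a sketch, the exact output will differ):

pvecm status       # the single node will show up as not quorate
pvecm expected 1   # lower expected votes so this one node becomes quorate on its own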

Am I missing something? Has anybody done it that way before, or can somebody suggest a better solution?

Thank you
 
Do you have the same IPs at the new location, and are the two locations connected with not too much latency?
Then it's easy: just move node by node without changing anything.

If the IPs will change, and/or the latency is too high for proper cluster operation, then it's more complicated, as the corosync config will not fit the new location. In that case it is probably easier to break the nodes out of the cluster and set up a new cluster.
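Roughly, breaking out and re-creating would look like this (node name, cluster name and IP are only placeholders, and the removed node should be reinstalled before it joins the new cluster):

pvecm delnode pve-4        # run on a node that still has quorum in the old cluster
pvecm create newcluster    # run on the first (freshly installed) node at the new location
pvecm add 10.0.0.11        # run on each further node, pointing at the first node's IP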
 
We will be moving everything, so same subnets/internal IPs and relatively small latency (>10ms); only the public IPs will change. The reason we are moving is that the internet in our current datacenter is flaky and goes down more often than I would like - usually only for a very short period of time, but still. I would not risk splitting the current cluster and keeping it connected across sites, as both sites would be holding VMs in production.

What is wrong with keeping it the way I described, with expected 1 on the moved node - wouldn't that work? Any issue? Would the cluster not reconnect after two weeks of one node being "away"?
 
Hmm, I haven't tried it this way until now; I always had all the VLANs available across the locations (with very low latencies), so it was like a reboot (I just had to set noout for the Ceph OSDs to avoid Ceph rebalancing).
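In case it helps, the noout part is just (run on any node with Ceph admin rights):

ceph osd set noout     # before shutting nodes down, stops Ceph from rebalancing
ceph osd unset noout   # once everything is back up and healthy again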

But as long as the cluster IPs do not change, it sounds like your proposal could work.
 
Thanks Klaus, that is what I thought as well.

Anybody else want to chip in? Any words of wisdom or "famous last words"... before the cluster crashes :)

Thanks for any advice
 
Again, I am moving our 4 node cluster to a new location. I thought I could just move one node (node4) to the new location and slowly move the VMs to that node over a period of two weeks. Then move the remaining 3 nodes and they would resync.

I am worried about the resyncing: from the perspective of the one moved node, the remaining nodes are unavailable and the VMs, as the moved node sees them, are offline. I was planning to move all the VMs to this node. What happens when I bring the other 3 nodes to the new location? Will they just resync and agree on the configuration based on what they see at that point, or is it more tricky than that?

I would remove any new storage before I try to resync the nodes in the new location. 3 of the nodes would have the same config and would stay in sync, as they are still in production in the old location.

Can anybody elaborate on what might happen? I am open to reinstalling everything and rebuilding the cluster, but if you think it is unnecessary it would save me at least 2-3 hours.

Thank you
 
Hi Bruce,
I assume the internal networks in both locations don't see each other? (That's important - otherwise the method Klaus mentioned is better.)
Further, you don't use shared/distributed storage, only local storage? (If PVE has trouble accessing a defined storage, you can get strange effects.)

If you stop one node at the old location and start it at the new location, this node won't have quorum, because you need three running nodes in a 4-node cluster.
E.g. /etc/pve is write-protected and no VM will start.
If you know what you are doing, you can reach quorum with "pvecm expected 1" - after that, you can start VMs which use local storage on this node.
To migrate VMs you must stop the VM at the old location and transfer the VM disk(s) (dd/rsync/scp/...) to the "new" host at the new location.
After that, you can move the VM config on the "new" host from the old node's directory to the new one, like "mv /etc/pve/nodes/pve-3/qemu-server/123.conf /etc/pve/qemu-server/".
Then you can start the VM at the new location.
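A rough sketch of the whole sequence for one VM - the node name pve-3, VMID 123 and the local storage path are only examples, adapt them to your setup:

pvecm expected 1                                                      # on the moved node, to get /etc/pve writable
# on the old node: stop the VM, then copy its disk over, e.g.
rsync -av /var/lib/vz/images/123/ root@new-node:/var/lib/vz/images/123/
# on the moved node: take over the config and start the VM
mv /etc/pve/nodes/pve-3/qemu-server/123.conf /etc/pve/qemu-server/
qm start 123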

To "rejoin" the tree nodes from the old location:
Have an backup from /etc/pve from both locations.
If the three node don't held any VM (all VMs are transferred to the new location), you simply need to power on node for node.
If the first node came up (the second cluster node), they will join the cluster (which run with one vote only) and sync the content from /etc/pve - control the quorum with pvecm.
If all is fine, you can power on the second node (third cluster node) and after that, you can set the expected back to 3.
And least the last node.
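Something like this, while bringing the nodes back one after another (just a sketch):

pvecm status       # watch the vote count after each node comes up
pvecm expected 3   # once three nodes are up, raise the expected votes again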

Udo
 
hi udo,


We are using Ceph, but we are moving everything onto local hard drives so we can move the Ceph servers ahead of time as well. I already did "pvecm expected 1" and the separated node is back up and operational.


All seems to be working now in both locations. From the original location we will be moving the 3 nodes at the same time; they are currently quorate since they only need 3 votes.


I will be moving the VMs by simply backing them up to NFS, then restoring and powering them up on one node while turning them off on the other node - that part works great, I have done it before.
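For what it's worth, that is basically the following - VMID, storage names and the archive file name are placeholders here, the real dump file has a timestamp in its name:

vzdump 123 --storage nfs-backup --mode stop                                          # on the old node, backup to the NFS storage
qmrestore /mnt/pve/nfs-backup/dump/vzdump-qemu-123-<timestamp>.vma.zst 123 --storage local   # on the new node
qm start 123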


At this point I think I will reinstall Proxmox on the 3 nodes, join them into a cluster, and move the VMs over from the one node already working in the remote location, giving me a 3 node cluster; then I would reinstall that one node and add it to the cluster as the 4th node. It might be easier in the end, and safer in case we run into any issues.
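Roughly (cluster name and IP are placeholders):

pvecm create mycluster    # on the first freshly installed node
pvecm add 10.0.0.11       # on each of the other reinstalled nodes, pointing at the first node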


Thank you
 
