Migration between Proxmox Servers

rg2

Active Member
Jul 18, 2014
Hello guys,

Some time ago I bought a Dell R720 and installed Proxmox 3.1 to host VMs. Then I bought another Dell R720 and installed another Proxmox 3.1 server.

Now I bought 2 more Dell R720 and build a Proxmox 3.2 Cluster. The VMs there are running with HA.

I want to migrate all VMs from the first two servers to the new cluster. What is the best way to do that? How would you do it with the least downtime possible?

Thanks for any tips!

Rafael
 
The best way would be to upgrade your whole cluster to 3.2 and join the new Proxmox host to your old cluster. After joining the new Proxmox host to the old cluster, you can migrate all VMs from the old hosts to the new hosts. When all VMs are migrated to the new hosts, you simply remove the old hosts from the cluster.
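Assuming the stock 3.x command-line tools, the join-and-migrate flow would look roughly like this (the IP, node name and VMID are placeholders, not values from this thread):

```shell
# On the host you want to add (run as root; the IP is any
# existing member of the cluster you are joining):
pvecm add 192.168.1.10

# Check that the node joined and the cluster has quorum:
pvecm status

# Then move a VM to another node. --online keeps it running,
# which in 3.x requires shared storage; with only local disks
# you would migrate it offline instead.
qm migrate 101 newnode1 --online
```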
 
Thank you for your reply.

The best way would be to upgrade your whole cluster to 3.2 and join the new Proxmox host to your old cluster.

Done.

After joining the new Proxmox host to the old cluster, you can migrate all VMs from the old hosts to the new hosts.

It doesn't let me add a host that already has VMs running to another cluster.
 
So my problem is still unsolved... What is the best way to migrate my VMs from a single server (Proxmox 3.2) to another cluster (Proxmox 3.2) when both have VMs running?

Backup and restore? From my experience, that takes a looooong time...
 
Backup/Restore is one viable option, yes. Since you have VMs on both, it won't let you add the node to the cluster because the VMIDs and such could be the same, I'd guess.

If you absolutely had to, you could SCP the disk files from the old machine to the new machines and then make a new VM with those disks attached as its only disks, but that seems dirty to me.
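That SCP route might look like the sketch below. The paths assume the default 'local' directory storage, and the VMIDs, VM name and hardware options are made up; renaming the image to match the new VMID keeps Proxmox's naming convention intact:

```shell
# Copy the disk image from the old host into the new node's
# local image directory for the new VMID.
scp /var/lib/vz/images/101/vm-101-disk-1.qcow2 \
    root@newnode1:/var/lib/vz/images/201/vm-201-disk-1.qcow2

# On the new node: create a shell VM with matching hardware,
# then attach the copied image as its boot disk.
qm create 201 --name oldvm-copy --memory 2048 --net0 e1000,bridge=vmbr0
qm set 201 --virtio0 local:201/vm-201-disk-1.qcow2
qm set 201 --boot c --bootdisk virtio0
```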

Backup/restore would be my bet. It will take a while, but so would having to build them all again.
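For the backup/restore route, a minimal sketch with the 3.x tools (backup path and VMIDs are placeholders; snapshot mode keeps downtime low, stop mode is safest for consistency):

```shell
# On the old host: dump the VM to a directory both machines can reach.
vzdump 101 --mode snapshot --compress lzo --dumpdir /mnt/backup

# Copy the archive over if the directory is not shared
# (the real filename contains a timestamp).
scp /mnt/backup/vzdump-qemu-101-<timestamp>.vma.lzo root@newnode1:/mnt/backup/

# On the new cluster node: restore it under a free VMID.
qmrestore /mnt/backup/vzdump-qemu-101-<timestamp>.vma.lzo 201 --storage local
```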
 
It could depend a lot on the setup of the VMs on the joining node, I suppose, but it's not supported, and quite risky if you are not very careful.
It's not without downtime, but the downtime is very minimal. I've never tried it, though, not in this specific way, just something similar.

If the VMs on the joining node have only network disks on an NFS storage (or you live-move their disks to this setup), you could:
- create corresponding similar "target VMs" on the target cluster, with "dummy network disks" (even 1 GB); keep them stopped until later.
- link the same NFS storage on the target cluster as well.
- manually edit the configs of the "target VMs" to load their disks from the same NFS storage instead of from the "dummy network disks"; keep them stopped until later.
- stop the original VMs on the joining node.
- start the "target VMs" on the target cluster. Their disks should be reachable and the VMIDs are the same, so it should work.
- remove the NFS storage from the joining node.
- remove the original VM configs on the joining node (manually is perhaps better, since their storage is now gone from the node).
- remove the "dummy network disks", now unused, if still there (i.e. if you didn't replace them with the VMs' real disks using the same file names).
- join the node, now empty, to the target cluster.
- live-migrate the VMs back to the newly joined node, and live-move their disks to LVM (or other storage) if needed, if you wish.
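The steps above could be sketched as commands like these (the storage name, NFS server, export path and VMID are all placeholders; the config edit is the "manual" part of the trick):

```shell
# Make the same NFS export visible to the target cluster
# (run on a target cluster node):
pvesm add nfs sharednfs --server 192.168.1.20 --export /srv/vmstore --content images

# The "target VM" with the same VMID is created stopped; its
# config is then pointed at the real disk by editing
#   /etc/pve/qemu-server/101.conf
# e.g. changing the disk line to:
#   virtio0: sharednfs:101/vm-101-disk-1.qcow2

qm stop 101      # on the old node: stop the original
qm start 101     # on the target cluster: start the copy

# Clean up the old node, then join it once it is empty:
pvesm remove sharednfs
rm /etc/pve/qemu-server/101.conf
pvecm add 192.168.1.10
```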

It should work, IMHO, but please review the process and test it first with expendable sample VMs.

Marco
 

I actually changed the VMIDs, but Proxmox still won't let me add the node to the cluster. I read somewhere that if I forced it I would have problems.

Copying the disks might be faster than backup/restore... I'll have to check that.

My problem is that all these VMs are production servers. For some of them I can afford a few hours of downtime, but for some (like e-mail servers) I can't do that.


Thank you for your time Marco, but I am not using shared storage in this configuration.

I have an idea. Tell me what you think...

I will restore a backup on the new cluster while still using the original VM on its current host.

After the restore is complete, I will stop any services running on them and run robocopy (they are Windows servers) to copy over any files changed during the restore, so the restored VM's data will be in sync with the original VM's (being careful with both VMs' IP addresses, of course).

After robocopy updates all the files, I resume services on the new VM in the cluster.
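That sync step could be a single mirror run from inside the Windows guests (the server and share names here are made up for illustration). Note that /MIR also deletes files on the target that have vanished on the source, so point it only at the data trees you actually mean to synchronize:

```shell
:: Run from the restored VM (or any machine reaching both),
:: after stopping the services on both sides:
robocopy \\OLDSRV\D$\Data \\NEWSRV\D$\Data /MIR /COPYALL /R:2 /W:5 /LOG:C:\robocopy.log
```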
 

You will have two ***identical*** Windows servers on the network... SID, role, hostname, IP address, MAC address, everything... I think you could run into trouble with this... or you could change them, but I feel there would be some impact on Windows resources...

Just my 2c. I would go down my suggested "path". You need external storage, but I feel it's simpler and faster overall.

Marco