Hi There,
We're currently running a VMware cluster with vSAN storage, hosting about 500 production VMs. We have a new Proxmox cluster with Ceph storage that we want to migrate to. I've interconnected both clusters with a 10Gbit network.
I am looking for a procedure to migrate these VMs to Proxmox with as little downtime as possible.
I've tried the ovftool (Virtual-to-Virtual, V2V) method described here:
https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE?ref=blog.galt.me
Here I run ovftool on a Proxmox node, which connects to a VMware node to pull the data; after that I run the qm importovf command to import the VM into Proxmox. I've automated these steps with Ansible, so I can run a single playbook, enter the source VM name, and it will automatically stop the source VM, migrate it over to Proxmox, attach the networks again and boot it up.
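For reference, the core commands the playbook wraps look roughly like this (the vCenter path, VM ID 120 and the "ceph-pool" storage name are placeholders, not our real values):

```bash
# Pull the VM from vCenter/ESXi as an OVF onto the Proxmox node.
ovftool --noSSLVerify \
  "vi://administrator@vcenter.example.local/Datacenter/vm/my-source-vm" \
  /mnt/migration/

# Import the resulting OVF into Proxmox, writing the disks to Ceph
# (RBD only stores raw images).
qm importovf 120 /mnt/migration/my-source-vm/my-source-vm.ovf ceph-pool --format raw

# Re-attach the network and boot the VM.
qm set 120 --net0 virtio,bridge=vmbr0
qm start 120
```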
This is working perfectly, but...
The only downside is that the whole process takes a long time during which the VM has to be powered off. I've tested this on several of our VMs, varying in disk size (from 40GB to 1TB), and the overall speed I'm seeing for the ovftool export is around 2GB per minute (over the 10Gbit network between a Proxmox and a VMware node; that's only about 34MB/s, nowhere near line rate). So a 1TB VM takes up to 8.5 hours for the ovftool export alone. After that, the qm importovf command has to run to import it into Proxmox/Ceph, which also takes a few hours, so the total downtime ends up at 10+ hours.
We're running a hosting business and not all VMs are in an HA cluster, so this kind of downtime is unacceptable.
Maybe we can optimize the above speeds/times with faster storage or a faster network, but I think it would still take hours of downtime in all cases.
The total used space on the VMware cluster is around 40TB, so at 2GB/minute that adds up to roughly 340 hours of transfer time.
Another approach I've thought about is:
- Set up a NAS server with fast storage that can run qemu-img convert, and mount it on both VMware and Proxmox via NFS
- Live-migrate the VM's storage on VMware to this NFS datastore (Storage vMotion)
- Shut down the VM on VMware
- On Proxmox: run ovftool with the --noDisks flag, so it only exports the VM's configuration
- On the NAS server: run qemu-img convert to turn the VMDK files into raw/qcow2 (this runs locally on the NAS, since the VMDKs are already there, so it should be fast; see the sketch below this list)
- On Proxmox: import the OVF file and attach the disks from the NFS share (already converted, still sitting on the NAS)
- Boot up the VM on Proxmox
- When everything works properly, use the "Migrate Storage" disk action to move the disk from the NFS share to Ceph.
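To make the idea concrete, here's an untested sketch of the convert/attach steps (all paths, VM ID 120 and the storage names "nfs-migration" and "ceph-pool" are example values, and this assumes the NFS export is also added to Proxmox as a directory-style storage):

```bash
# On the NAS: convert the VMDK to raw locally, no network copy involved.
# Writing it straight into the export's images/<vmid>/ directory lets
# Proxmox pick it up as a volume of the "nfs-migration" storage.
qemu-img convert -p -f vmdk -O raw \
  /export/migration/my-source-vm/my-source-vm.vmdk \
  /export/migration/images/120/vm-120-disk-0.raw

# On Proxmox: import only the configuration (OVF exported with --noDisks),
# attach the pre-converted image that already lives on the NFS storage,
# and boot the VM.
qm importovf 120 /mnt/pve/nfs-migration/my-source-vm/my-source-vm.ovf nfs-migration
qm set 120 --scsi0 nfs-migration:120/vm-120-disk-0.raw
qm start 120

# Once the VM checks out, move the disk to Ceph while it keeps running.
qm move-disk 120 scsi0 ceph-pool --delete
```

If that works, the only downtime window would be the Storage vMotion cutover plus the local qemu-img conversion, instead of a full network export and import.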
I haven't tried the above approach yet, but I think it can work and should be automatable with Ansible as well (as far as I can tell right now).
Maybe there is another (fast) solution for this? Does anyone have experience with this kind of migration?
Thanks!