ESXi VM migration method

Pepega597

New Member
May 14, 2025
Hello everyone, I am a beginner with PVE, so apologies if I get anything wrong.
I want to migrate our VMs from the original ESXi to PVE, using the same NFS storage (NetApp).
In this situation, is there any migration method that can minimize downtime?
Since our current network connections are all 1Gbps, backup and restore methods are ruled out for now.
I've tried the built-in ESXi import tool in PVE, but the speed is very slow — a 100GB VM takes almost an hour.
I have also tried using qm importdisk to convert the disks into PVE, but that is also very slow.
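For reference, the import command I ran looked roughly like this (the VM ID, VMDK path, and storage name are placeholders, not my real values):

```
# Import the source VMDK as a new (unused) disk on VM 100, converting to qcow2.
# VM ID, VMDK path, and target storage name are placeholders.
qm importdisk 100 /mnt/pve/netapp-nfs/esxi/myvm/myvm.vmdk netapp-nfs --format qcow2
```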
Maybe my understanding or some settings are incorrect, but copying internally within the same storage should be very fast, shouldn't it?
Thanks in advance for any help.
 
Hey,

did you take a look at [1]? It covers quite a lot of options. Generally, with 1 Gbit/s networking, just transferring 100 GB of data will take about 15 minutes (100 GB ≈ 800 Gbit, and 800 Gbit ÷ 1 Gbit/s ≈ 800 s, i.e. roughly 13–14 minutes), and that is only if all of the available bandwidth is used. If you don't want to be limited to 1 Gbit/s, there is no alternative to physically moving the disks.

[1] https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE
 
A few other things to keep in mind:
  • The disk isn't simply being transferred - it's being transcoded from VMDK to QCOW2 during the process, which adds overhead.
  • Are you storing the resulting image on the same storage as the source? If so, simultaneous reads and writes are hitting the same storage and network link, potentially saturating it (see the sketch after this list for pointing the import at a different target).
  • Your network is only 1 Gbit, but also consider the type of disks involved. Are both sides of the same generation - e.g., both HDDs? If so, expect higher latency, especially under concurrent I/O load.
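If source and destination do share the same storage, one mitigation is simply to point the import at a different target storage. A minimal sketch, with hypothetical names and paths:

```
# Write the converted image to local directory storage ("local") instead of the
# NFS source, so reads and writes don't contend on the same NAS and link.
# VM ID, VMDK path, and storage names are hypothetical.
qm importdisk 100 /mnt/pve/netapp-nfs/esxi/myvm/myvm.vmdk local --format qcow2
```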


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you for the information and the wiki link. I have tried the method described there: Attach Disk & Move Disk (minimal downtime).
I can currently attach the VMDK directly to a VM in PVE and use it, but I have noticed that the VM's performance is poor.
Would you still recommend converting the disk format to raw or qcow2?
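For context, the attach step I used looks roughly like this; the VM ID, storage name, and volume path are placeholders for my setup:

```
# Attach the existing VMDK (still on the shared NFS datastore) as scsi0.
# "netapp-nfs" is the PVE storage entry for the same NFS export; 120 is the VM ID.
qm set 120 --scsi0 netapp-nfs:120/vm-120-disk-0.vmdk
```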
 
Thank you for the reminders about the conversion overhead and storage contention.
Currently, all our VMs on ESXi are stored in a NetApp NFS datastore.
My current plan is to mount the same NFS storage to PVE, which is why I’m wondering if there’s a way to quickly migrate or mount the VMs for use.
It seems that directly attaching a VMDK to PVE can cause performance issues, while converting the disk format with qm importdisk would require a long downtime.
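For reference, mounting the same datastore on the PVE side is just an NFS storage entry; a rough sketch with a placeholder server address, export path, and storage name:

```
# Add the existing NetApp NFS export as a PVE storage for disk images.
# Server IP, export path, and storage name are placeholders.
pvesm add nfs netapp-nfs --server 192.0.2.10 --export /vol/vmstore --content images
```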
 
VMDK is a proprietary format. The open-source community has made efforts to enable migration from VMDK, but there's no incentive to make it perform as efficiently in non-ESXi environments, so it's likely not possible to match native ESXi performance.

I understand that importdisk performance feels slow in your case, but if you're constrained by storage throughput or CPU resources, there's little that can be done to speed it up without addressing those limits. You could try other tools; Veeam comes to mind. Both Veeam and the Proxmox ESXi importer may be able to perform online imports.

While it's reasonable to expect faster import performance, it ultimately comes down to identifying where the bottleneck lies and applying more resources there, whether that's faster storage, more CPU, or improved network capacity.
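A quick sanity check for the storage/network side is to measure raw sequential read throughput from the NFS mount on the PVE host; a rough sketch with a placeholder path:

```
# Read 4 GiB from the NFS-mounted flat file and discard it, printing throughput.
# Sustained rates near ~110 MB/s mean the 1 Gbit link is the ceiling; much lower
# rates point at the NAS or concurrent I/O instead. The path is a placeholder.
dd if=/mnt/pve/netapp-nfs/esxi/myvm/myvm-flat.vmdk of=/dev/null bs=1M count=4096 status=progress
```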


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you for taking the time to reply to my question.

The method I'm currently using is: mount the NFS storage used by ESXi to PVE, create an empty VM, and add an empty virtual disk. Then I copy the VM's VMDK descriptor file over the newly created disk, modifying the path inside it that points to the flat.vmdk so it matches the original disk location on the ESXi datastore (roughly as in the sketch below). Using this approach, I can start the VM in PVE within a very short time.
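Concretely, the steps look roughly like this; every ID, name, and path is a placeholder for our environment, and the exact extent line in the descriptor may differ:

```
# 1) Create an empty VM with a small placeholder VMDK on the shared NFS storage.
qm create 120 --name legacy-vm --memory 8192 --scsihw virtio-scsi-pci
qm set 120 --scsi0 netapp-nfs:1,format=vmdk

# 2) Overwrite the placeholder descriptor with the original ESXi descriptor.
cp /mnt/pve/netapp-nfs/esxi/myvm/myvm.vmdk \
   /mnt/pve/netapp-nfs/images/120/vm-120-disk-0.vmdk

# 3) Edit the extent line in the copied descriptor so it points at the original
#    flat file on the datastore, e.g.:
#      RW 209715200 VMFS "/mnt/pve/netapp-nfs/esxi/myvm/myvm-flat.vmdk"
# 4) Boot the VM on PVE.
qm start 120
```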

After that, I use PVE's disk migration feature to move the original disk to a new storage location while also converting the format to QCOW2. This can be done without shutting down the VM, although performance degrades while the migration runs.
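The move itself is a single command; a sketch assuming a hypothetical target storage named "local":

```
# Live-move the disk off the NFS datastore, converting to qcow2 on the fly.
# The VM keeps running; expect degraded I/O while the copy runs.
# "--delete 1" removes the source volume once the move completes.
qm disk move 120 scsi0 local --format qcow2 --delete 1
```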

This method is very practical for our environment, where there aren't many VMs but services cannot be interrupted for long. However, I'm not sure whether it might cause issues later on.