New Import Wizard Available for Migrating VMware ESXi Based Virtual Machines

I would look into using VMware's own ovftool, which lets you export the VMs directly out of VMware; you can then use qemu-img convert to convert them over. I've done this a few times, well before the native tool was added to Proxmox.

You run the tool directly on PVE, and it connects to the VMware host. You can also run it on any Debian-based VM. Just make sure you have plenty of storage to hold the exported VMware VMs until they are converted.

https://developer.broadcom.com/tools/open-virtualization-format-ovf-tool/latest
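The workflow looks roughly like this (hostname, VM name, VM ID, and paths below are placeholders, so adapt them; check ovftool --help for the exact source locator syntax):
Code:
# export the VM straight off the ESXi host into a local directory
ovftool vi://root@esxi.example.com/MyVM /var/tmp/export/

# convert the exported disk; point qemu-img at the VMDK descriptor file
qemu-img convert -p -f vmdk -O raw \
    /var/tmp/export/MyVM/MyVM-disk1.vmdk /var/tmp/MyVM-disk1.raw

# or let Proxmox convert and attach it to an existing VM in one step
qm importdisk 100 /var/tmp/export/MyVM/MyVM-disk1.vmdk local-lvm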
Nice.
Other than convenience, do you know if there is an advantage to one versus the other? I am assuming not, and that it's likely the same or similar code under the hood.
 
Note that you can live-import VMs. This means you can stop the VM on the ESXi source and immediately start it on the Proxmox VE target, with the disk data required for booting fetched on demand. The feature works similarly to the live-restore we have provided for PBS backups for a while now.

Live-import naturally requires a very stable and fast network between the PVE and ESXi hosts, and some care with device driver/model selection, but it can reduce the downtime of an import significantly.
Hi,

I am currently trying this feature, but I have not managed to get a satisfactory result so far.

The VMware source VM I am trying to migrate has two disks: 50 GB for the OS (Ubuntu) and 2.5 TB for data.

On my first live-import attempt, I had both disks selected.

As expected, cloning started on both disks in parallel.

But when the OS disk was 100% cloned, nothing special happened; the data disk kept cloning, with both source and destination VMs still powered off.

So I cancelled this task and started over with only the OS disk selected.

This time, after a full clone of this single disk, the new VM started automatically.

Terms like "streaming" are used, but all I have seen is a nicely working cloning procedure, just like the regular one (and yes, I double-checked that live import was selected), including the related downtime.

Am I missing something? Any requirements or caveats?

Regards
 
Terms like "streaming" are used, but all I have seen is a nicely working cloning procedure, just like the regular one (and yes, I double-checked that live import was selected), including the related downtime.

Am I missing something? Any requirements or caveats?
Due to the session timeouts with ESXi, I would not even bother with a live import.

Migrate the VM. Sort out any driver needs. Set it to start on boot or add it to HA. Set up your backups. Then move on to the next VM.

Added - Seriously, do your cleanup right after migration. Install the VirtIO drivers for disks and NICs. Reboot as needed to migrate to those drivers (using IDE/SATA first if needed), fix your static IPs on the new NICs, then uninstall VMware Tools. Live migration is nice in theory, but you should tune the VMs for optimal performance before putting them back into service. If time allows, get a first cold backup done too.
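A minimal sketch of the disk side from the PVE shell (VM ID 100, local-lvm, and the volume name are assumptions; adapt to your setup):
Code:
# attach a small temporary VirtIO disk so Windows installs the driver on the next boot
qm set 100 --virtio1 local-lvm:1

# once the driver is active: detach the IDE boot disk (it shows up as "unused0")
qm set 100 --delete ide0

# reattach it on the VirtIO SCSI controller and boot from it
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 local-lvm:vm-100-disk-0
qm set 100 --boot order=scsi0

# finally detach the temporary disk (and remove the leftover unused volume)
qm set 100 --delete virtio1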
 
In fact, I have realized my browser was a bit messed up and I could not see the console of the destination VM.

After a restart, it's much better!

So the VM is indeed powered on and running, with the disks being streamed :)

Most of the Linux VMs we have to migrate are recent enough to ship with stock VirtIO support, and changing the model in the migration wizard works fine.

The other important point I had first forgotten: NIC devices have different names under KVM, so configuration files need to be tweaked accordingly.
Maybe this could be a new feature the wizard implements too.

But of course, let's see if there are any timeout problems during the coming hours!
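For Ubuntu guests the tweak is typically something like this (the interface names are just examples; check the real ones with ip link):
Code:
# inside the migrated VM: list the new interface name (virtio NICs often show up as ens18 or similar)
ip -br link

# replace the old VMware-era name (e.g. ens192) in the Netplan config and apply it
sed -i 's/ens192/ens18/g' /etc/netplan/*.yaml
netplan apply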
 
The other important point I had first forgotten: NIC devices have different names under KVM, so configuration files need to be tweaked accordingly.
Maybe this could be a new feature the wizard implements too.
I simply solve this by renaming the network interface in advance.
Here is my example:
Code:
# pin the interface name to the NIC's MAC address via a systemd .link file
nano /etc/systemd/network/10-persistent-net.link

[Match]
MACAddress=01:23:45:67:89:ab

[Link]
Name=lan0
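Note that MACAddress has to match the MAC of the VM's virtual NIC in Proxmox, and the new name takes effect at the next boot.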
 
Hello,
I'm importing from a 3-host VMware ESXi 7.0.3 cluster to 3 Proxmox hosts with Ceph.
The import works, but it is very slow even though both the source and destination hosts have 10G networking.

I've checked the network speed with iperf and it is fine.
[screenshot: iperf.png]

The import speed is around 100/120 Mb/s.
[screenshot: import_speed.PNG]
Is that an expected import speed on a 10G network?

The CPUs on the Proxmox hosts are Intel(R) Xeon(R) Gold 6326 @ 2.90GHz (2 sockets), and on the VMware side Intel(R) Xeon(R) Silver 4216 @ 2.10GHz (2 sockets).

Thanks for your help.
 
Hello,
I'm importing from a 3-host VMware ESXi 7.0.3 cluster to 3 Proxmox hosts with Ceph.
The import works, but it is very slow even though both the source and destination hosts have 10G networking.

I've checked the network speed with iperf and it is fine.
[screenshot: iperf.png]

The import speed is around 100/120 Mb/s.
[screenshot: import_speed.PNG]
Is that an expected import speed on a 10G network?

The CPUs on the Proxmox hosts are Intel(R) Xeon(R) Gold 6326 @ 2.90GHz (2 sockets), and on the VMware side Intel(R) Xeon(R) Silver 4216 @ 2.10GHz (2 sockets).

Thanks for your help.
Make sure you import from the host, not vCenter.
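That is, add the ESXi host itself as the import source, something like this (storage ID, address, and credentials are placeholders; the GUI does the same under Datacenter -> Storage -> Add -> ESXi):
Code:
# connect Proxmox VE directly to the ESXi host, not to vCenter
pvesm add esxi esxi-source --server 192.0.2.10 --username root --password 'secret'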
 
Also make sure you don't have any snapshots on the VM. That said, I believe I've seen these speeds before as well; for us it was workable (and for the two servers where that speed wouldn't do, we had other methods).
 
Also make sure you don't have any snapshots on the VM. That said, I believe I've seen these speeds before as well; for us it was workable (and for the two servers where that speed wouldn't do, we had other methods).
Yes, there's no snapshot on the VM. Previously, with one snapshot, the speed was 70/80 Mb/s.
 
Hello,
I'm importing from a 3-host VMware ESXi 7.0.3 cluster to 3 Proxmox hosts with Ceph.
The import works, but it is very slow even though both the source and destination hosts have 10G networking.

I've checked the network speed with iperf and it is fine.
[screenshot: iperf.png]

The import speed is around 100/120 Mb/s.
[screenshot: import_speed.PNG]
Is that an expected import speed on a 10G network?

The CPUs on the Proxmox hosts are Intel(R) Xeon(R) Gold 6326 @ 2.90GHz (2 sockets), and on the VMware side Intel(R) Xeon(R) Silver 4216 @ 2.10GHz (2 sockets).

Thanks for your help.
What does your Ceph network and disk setup look like?
It may be due to the Ceph sizing.

Also, never migrate VMs with existing snapshots. I have seen inconsistent migrations a few times, and with snapshots, crashes are much more frequent.
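To check from the ESXi shell before importing (a sketch; the VM ID comes from the first command):
Code:
# list the VMs and their IDs on the ESXi host
vim-cmd vmsvc/getallvms

# show any snapshots for a given VM ID
vim-cmd vmsvc/snapshot.get 42

# remove all snapshots of that VM before migrating
vim-cmd vmsvc/snapshot.removeall 42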
 
What does your Ceph network and disk setup look like?
It may be due to the Ceph sizing.

Also, never migrate VMs with existing snapshots. I have seen inconsistent migrations a few times, and with snapshots, crashes are much more frequent.
For the Ceph network I'm using 2 Mellanox 100G network cards.
A single pool with 12 x 6 TB NVMe drives per server, so 36 disks in total.

If I run 2 conversions at the same time, network usage is exactly doubled. It seems the conversion binary is single-threaded... :confused:
 
For the Ceph network I'm using 2 Mellanox 100G network cards.
A single pool with 12 x 6 TB NVMe drives per server, so 36 disks in total.

If I run 2 conversions at the same time, network usage is exactly doubled. It seems the conversion binary is single-threaded... :confused:
The conversion uses the VMware VDDK, which is intended for backups directly via the management interface of a host.
The traffic is always limited to about 70% of the management interface's throughput, and it is single-threaded, so as not to affect the ESXi management interface too much.
These limitations are intentional on VMware's part, which is why I have been migrating via the NFS datastore method for years. Unfortunately there is no wizard that suggests the VM configuration, but I migrate with minimal downtime and, of course, with significantly more throughput.
 
The conversion uses the VMware VDDK, which is intended for backups directly via the management interface of a host.
The traffic is always limited to about 70% of the management interface's throughput, and it is single-threaded, so as not to affect the ESXi management interface too much.
These limitations are intentional on VMware's part, which is why I have been migrating via the NFS datastore method for years. Unfortunately there is no wizard that suggests the VM configuration, but I migrate with minimal downtime and, of course, with significantly more throughput.
Thank you for the explanation.
So, using the GUI wizard, the conversion speed is very limited.

In your experience, will using qemu-img to convert VMDK to raw from an NFS datastore use the full 10G network speed, or is it limited in some way?
 
With this method, the limiting factor is usually the network or the disks backing the NFS share. Converting the VMDKs to raw is possible during operation, and here too the disks or the network are the limiting factor.
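A rough sketch of that method (mount point, VM name/ID, and target storage are placeholders):
Code:
# on the PVE host: mount the NFS export that ESXi uses as a datastore
mount -t nfs nfs.example.com:/export/esxi-ds /mnt/esxi-ds

# convert via the VMDK descriptor file (not the -flat file) ...
qemu-img convert -p -f vmdk -O raw \
    /mnt/esxi-ds/MyVM/MyVM.vmdk /var/lib/vz/images/100/vm-100-disk-0.raw

# ... or import and attach the disk to VM 100 in one step
qm importdisk 100 /mnt/esxi-ds/MyVM/MyVM.vmdk local-lvm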
 
VMware ESXi 7.0.3 to Proxmox 8.2.2 worked after confirming the snapshots were deleted and setting
Config.HostAgent.vmacore.soap.maxSessionCount = 0
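(For anyone else: that setting can be changed under the host's Advanced Settings, or from the ESXi shell; the esxcli option path below is derived from the setting name, so double-check it on your host.)
Code:
esxcli system settings advanced set -o /Config/HostAgent/vmacore/soap/maxSessionCount -i 0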

Thanks!
 
Hi, just my 2 cents.

Importing 3 VMs from ESXi 6.5 (Windows XP, Windows 7, and Linux with an encrypted disk): no problems, everything worked as expected (mergeide was needed for XP, but that's not an issue with this feature).

Thanks a lot for this feature. I'll move from ESXi 6.5 to Proxmox 8.2 soon for my own test server.
 
Has anyone tried vSAN or had experience with it? Is this a feature that might come in future releases?

I know the documented limitation states: "Importing a VM with disks backed by a VMware vSAN storage does not work."

Thanks, BR
 
Has anyone tried vSAN or had experience with it? Is this a feature that might come in future releases?

I know the documented limitation states: "Importing a VM with disks backed by a VMware vSAN storage does not work."

Thanks, BR
vSAN and storages with vVols will never be supported. Neither system stores virtual disks as files, while the VMware API that is used can only provide disk files.
It is best to migrate the VM to a VMFS or NFS storage first; then the import will work.
 
