New Import Wizard Available for Migrating VMware ESXi Based Virtual Machines

Thanks for this tool. And thanks to mram for the ESXi 6.5 SOAP setting workaround that got it working in my vSphere 6.7U3 (ESXi 6.5U3) environment.

Here are my observations mixed with questions:
- My vmdk files on the ESXi hosts are thin-provisioned disks. Even if I fstrim inside the guest and use vmkfstools -K to "punch holes" in the .vmdk file before running this import wizard, the importer seems to copy and consume the full provisioned space, which makes the import take much longer. Could something be devised to skip the holes? Veeam Backup talks to vCenter and is able to transfer only the consumed size of thin-provisioned volumes, so maybe this could be an enhancement to this import wizard? (The trim/hole-punch commands I used are sketched after this list.)
- The resulting "disk" (on Ceph) did not have the discard option set. I double-checked on my next import (running now) and when preparing the import there did not seem to be an option to set that on the disk in the import UI. Would be a nice touch but not can always be set after import so not needed unless connected with my question above.
- My first import was slow. A 500GB thin-disk VM took the importer 3.5 hours to complete across a 2x10Gbps bonded link (EDIT: I just realized that the admin interface is actually only 2x1Gbps). Some visibility into bottlenecks would be nice; the speed reports I have seen in this thread are all over the map. (EDIT: Maybe I'm not the only one facepalming right now.)
- The VM came up without networking because the guest uses "predictable network interface device naming" and I had a fixed IP configured for an interface that had a different name after migration. A recovery boot and an edit to /etc/network/interfaces solved it. Heads up for anyone encountering this.
- I'm pretty new to Proxmox, so while I did see in the docs that I should remove open-vm-tools from my VM before moving it, I didn't immediately remember to install *and enable* qemu-guest-agent after import (second sketch below). Heads-up on this too.
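
For reference, here is the trim/hole-punch sequence I'm referring to in the first point, just as a sketch; the datastore and VM names below are placeholders for your own paths.
Code:
# inside the guest, before shutting it down: release unused blocks
fstrim -av

# on the ESXi host, with the VM powered off: punch holes in the thin .vmdk
# (DATASTORE/MYVM are placeholders)
vmkfstools -K /vmfs/volumes/DATASTORE/MYVM/MYVM.vmdk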

Anyway, thank you! I have several dozen more VMs to move across and this wizard is very timely for me. That said, I'm also hoping the devs might comment on some of these pain points because if there are plans for improvements to thin disk migration in the near future, I'll just hold off for a bit to help test. :)
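
And for anyone hitting the same guest-agent gap from my last point, this is the post-import sequence I ended up with; it assumes a Debian/Ubuntu guest, and the VM ID 100 is a placeholder.
Code:
# inside the imported guest (Debian/Ubuntu assumed)
apt update && apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent

# on the PVE node: enable the agent option for the VM, then do a full
# stop/start so the agent device is added
qm set 100 --agent enabled=1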
 
Hi,

Can someone help me?

There is no ESXi option :(
I updated the node to the latest version, 8.1.10, and checked whether the tool was installed:
"apt install pve-esxi-import-tools " ---> "pve-esxi-import-tools is already the newest version (0.6.0)."

But when I look under Storage and click the Add button, there is no ESXi option.
Did I miss anything?

Edit: I also did the same steps on my cluster; same thing, no ESXi option.
I have the same problem. Did you ever get a solution?
 
I have the same problem. Did you ever get a solution?
I have seen many posts in this thread suggesting that after installing the new tool, reloading the admin GUI in the browser, or restarting the browser altogether, made the GUI load with the new option available.
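
If a plain reload doesn't help, this is a generic checklist I would try (only a sketch, not an official fix):
Code:
# confirm the package is actually installed and which version
apt policy pve-esxi-import-tools
pveversion

# restart the web/API services, then hard-refresh the browser (Ctrl+Shift+R)
systemctl restart pveproxy pvedaemon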
 
What needs to be done when you have whitespace in your VMware VM names? The import tool does not like it very much.
 
Today I added new management interfaces to the vSwitch with 2x10Gbps uplinks on my ESXi hosts, updated the PVE ESXi storage to point at the new ESXi management IPs, and tried another couple of VM imports. Sadly, the transfer was still much slower than I would expect and barely any faster than before: it took 3 hours and 20 minutes to transfer one 500GB VM.

My PVE cluster is all SATA SSD. The Ceph performance screen shows very intermittent, very slow writes during the process, as if almost nothing is happening most of the time, so I'm a bit stumped.

For comparison I tried using rsync to copy a 500GB .vmdk file from one of the same ESXi hosts to a CephFS volume on one of the same PVE nodes. It took approximately 38 minutes.

Any thoughts? Ideas of things to try or places to look?
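
For context, the back-of-the-envelope throughput behind "barely any faster" (decimal GB, so only rough):
Code:
# importer: 500 GB / 12,000 s (3h20m)  ->  ~42 MB/s  (~0.33 Gbit/s)
# rsync:    500 GB /  2,280 s (38 min) ->  ~220 MB/s (~1.75 Gbit/s)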
 
This is a great step for those looking to migrate away from VMware. The big thing that is still missing is the ability to create new VMs from a disk image (qcow2/OVF). There are a LOT of virtual appliances distributed in these formats rather than as ISO installers. There is an open feature request here: https://bugzilla.proxmox.com/show_bug.cgi?id=2424 (a sketch of the current CLI workaround follows below)
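
Until something like that lands in the GUI, the CLI already covers most of it; here is a rough sketch (VM IDs, file names, and storage names are placeholders):
Code:
# OVF appliance: creates the VM and imports its disks in one step
qm importovf 200 ./appliance.ovf local-lvm

# bare qcow2/raw/vmdk image: create an empty VM first, then import the disk
qm create 201 --name appliance --memory 2048 --net0 virtio,bridge=vmbr0
qm disk import 201 ./appliance.qcow2 local-lvm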

Thanks for all the hard work!
 
Today I added new management interfaces to the vSwitch with 2x10Gbps uplinks on my ESXi hosts, updated the PVE ESXi storage to point at the new ESXi management IPs, and tried another couple of VM imports. Sadly, the transfer was still much slower than I would expect and barely any faster than before: it took 3 hours and 20 minutes to transfer one 500GB VM.

My PVE cluster is all SATA SSD. The Ceph performance screen shows very intermittent, very slow writes during the process, as if almost nothing is happening most of the time, so I'm a bit stumped.

For comparison I tried using rsync to copy a 500GB .vmdk file from one of the same ESXi hosts to a CephFS volume on one of the same PVE nodes. It took approximately 38 minutes.

Any thoughts? Ideas of things to try or places to look?
Make sure there are no snapshots on the VM you are trying to import. That can easily take 4x or more as long, even if it's a tiny delta.

You could also rsync it over and then import the disks manually instead of using the tool, and see how long that takes for comparison. That would help verify whether the delay is in the transfer or in the disk conversion.
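
If you want to double-check the snapshot situation on the ESXi side over SSH, something like this should do it (the numeric VM ID comes from the first command; treat the rest as a sketch):
Code:
# list VMs with their ESXi-side IDs
vim-cmd vmsvc/getallvms

# show snapshot state for one VM (42 is a placeholder ID)
vim-cmd vmsvc/get.snapshotinfo 42

# consolidate/remove all snapshots before importing
vim-cmd vmsvc/snapshot.removeall 42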
 
Make sure there are no snapshots on the VM you are trying to import. That can easily take 4x or more as long, even if it's a tiny delta.

You could also rsync it over and then import the disks manually instead of using the tool, and see how long that takes for comparison. That would help verify whether the delay is in the transfer or in the disk conversion.
Thanks for your thoughts. None of the VMs I'm migrating have snapshots, so it's not that. FYI, VMware does not have a lot of confidence in their own snapshots: "Running a virtual machine on a snapshot for extended periods of time can cause instability and data loss." I am looking forward to being able to take snapshots and count on them with Proxmox + Ceph.

I have not tried the manual migration process yet. The rsync test suggests the bottleneck is neither the network nor the disks on either end, so I'm suspicious of the API side of things on the ESXi host. If there aren't dev comments on performance or a new release of the tool to try in the next day or two, I will take some time to try the manual route for comparison; it's what I thought I would have to do before this tool came along. It may still be a better use of my time to use the tool even if it is slower. My perpetually licensed vSphere environment is still working fine, so I'm not in a desperate rush: I can migrate old stuff slowly while putting new stuff straight onto the new PVE environment.

Thanks again!
 
Thanks for your thoughts. None of the VMs I'm migrating have snapshots, so it's not that. FYI, VMware does not have a lot of confidence in their own snapshots: "Running a virtual machine on a snapshot for extended periods of time can cause instability and data loss." I am looking forward to being able to take snapshots and count on them with Proxmox + Ceph.

I have not tried the manual migration process yet. The rsync test suggests the bottleneck is neither the network nor the disks on either end, so I'm suspicious of the API side of things on the ESXi host. If there aren't dev comments on performance or a new release of the tool to try in the next day or two, I will take some time to try the manual route for comparison; it's what I thought I would have to do before this tool came along. It may still be a better use of my time to use the tool even if it is slower. My perpetually licensed vSphere environment is still working fine, so I'm not in a desperate rush: I can migrate old stuff slowly while putting new stuff straight onto the new PVE environment.

Thanks again!
I had good success importing the disks over sshfs. Not many steps to take:
- Enable SSH on the VMware host
- Install sshfs on the Proxmox side => apt update && apt install sshfs
- Mount the storage => mkdir /mnt/sshfs && sshfs -o allow_other,default_permissions root@[IP-OF-VMHOST]:/vmfs/volumes/[STORAGENAME] /mnt/sshfs/
- Import the disk => qm disk import [PROXMOX-VM-ID] [GUEST].vmdk [PROXMOX-DATASTORE]
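
One more note: in my runs the imported volume shows up as an unused disk in the VM config, so you still have to attach it and set the boot order afterwards; a sketch with placeholder IDs and names (the volume name differs per storage type):
Code:
# import the descriptor .vmdk (it references the -flat file automatically)
qm disk import 100 /mnt/sshfs/MYVM/MYVM.vmdk local-lvm

# attach the resulting unused disk and make it bootable
qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0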
 
I had good success importing the disks over sshfs. Not many steps to take:
- Enable SSH on the VMware host
- Install sshfs on the Proxmox side => apt update && apt install sshfs
- Mount the storage => mkdir /mnt/sshfs && sshfs -o allow_other,default_permissions root@[IP-OF-VMHOST]:/vmfs/volumes/[STORAGENAME] /mnt/sshfs/
- Import the disk => qm disk import [PROXMOX-VM-ID] [GUEST].vmdk [PROXMOX-DATASTORE]
Thanks for sharing! I will test it in my lab.
 
I had good success importing the disks over sshfs. Not many steps to take:
- Enable SSH on the VMware host
- Install sshfs on the Proxmox side => apt update && apt install sshfs
- Mount the storage => mkdir /mnt/sshfs && sshfs -o allow_other,default_permissions root@[IP-OF-VMHOST]:/vmfs/volumes/[STORAGENAME] /mnt/sshfs/
- Import the disk => qm disk import [PROXMOX-VM-ID] [GUEST].vmdk [PROXMOX-DATASTORE]
The sshfs mount works well, and for larger disks it can be worth doing a live migration to minimize downtime.
After mounting, edit /etc/pve/storage.cfg and add (don't give it the same name as other storage, so sshfsVOL or something):
dir: sshfsVOL
path /mnt/sshfs
content iso,vztmpl,images
Create a VM with 1 GB disks on sshfsVOL to import into, power down the source VM, then cp over and edit the small descriptor .vmdk as per the wiki to point to the relative path (the raw disks stay where they are).
Power on the VM in Proxmox, make sure things are correct, adjust drivers if needed, etc., and then you can migrate the disk(s) live.
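
For the final step, a sketch with a placeholder VM ID and storage name; on current PVE the subcommand is qm disk move (older releases call it qm move_disk):
Code:
# once the VM boots fine from the sshfs-backed vmdk, move the disk to the
# real storage while the VM keeps running; --delete 1 drops the old copy
qm disk move 100 scsi0 ceph-pool --delete 1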
 
I am using this live-import feature right now.
Awesome.

But there is an issue:
restore-scsi0: transferred 8.8 GiB of 16.0 GiB (54.69%) in 12m 40s
restore-scsi0: stream-job finished
restore-drive jobs finished successfully, removing all tracking block devices
An error occurred during live-restore: VM 109 qmp command 'blockdev-del' failed - Node 'drive-scsi0-restore' is busy: node is used as backing hd of '#block296'

TASK ERROR: live-restore failed
I am seeing the same thing, so I can't accomplish a live import from any of my ESXi servers. If I stop the VM first I have no issues. Are we missing something?
 
I am seeing the same thing, so I can't accomplish a live import from any of my ESXi servers. If I stop the VM first I have no issues. Are we missing something?

I think you still have to stop the source VM first, even for a live import. The difference is that the live import automatically powers on the new VM soon after the transfer starts, so you don't have to wait for it to finish. So it's mostly live, but not 100%...
 
Check out post #16 here. In order to avoid the I/O errors, we had to:

Before editing your config file, I would save a copy. Just run cp /etc/vmware/hostd/config.xml /etc/vmware/hostd/config.xml.old

We added <soap> below the initial comments in the XML file (the comments are designated by <!-- comment -->):
  • SSH into your VMware host
  • Run vi /etc/vmware/hostd/config.xml
  • Scroll to where you want to add the soap section
  • Press 'i' to enter insert mode in vi
  • Add the code below to the XML file:
  • XML:
    <soap>
        <sessionTimeout>0</sessionTimeout>
        <maxSessionCount>0</maxSessionCount>
    </soap>
  • Press Esc to exit insert mode in vi
  • Save the file and exit back to the terminal by typing :wq! and pressing Enter
  • Restart hostd by running /etc/init.d/hostd restart
All of this is compiled from other users' comments. I just put it into a step-by-step format.

This works for VMware ESXi 6.5.
Still not working for me after adding the code to the config.xml file and restarting the hostd service. ESXi 6.5.0 Update 3 (Build 13932383). As usual, it fails after ~8 minutes.

Does it matter where the code is added in the XML file?

Error:
Code:
transferred 10.8 GiB of 60.0 GiB (18.05%)
qemu-img: error while reading at byte 11911820800: Input/output error
TASK ERROR: unable to create VM 100 - cannot import from 'esxi:ha-datacenter/datastore/esxi/nvm.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -t none -f vmdk -O raw /run/pve/import/esxi/esxi-host/mnt/ha-datacenter/datastore/vm.vmdk zeroinit:/dev/zvol/rpool/data/vm-100-disk-0' failed: exit code 1
 
Awesome! Exactly what a friend of mine was looking for a few days ago, but he had to go through the manual CLI process instead.
 
Still not working for me after adding the code to the config.xml file and restarting the hostd service. ESXi 6.5.0 Update 3 (Build 13932383). As usual, it fails after ~8 minutes.

Does it matter where the code is added in the XML file?

Error:
Code:
transferred 10.8 GiB of 60.0 GiB (18.05%)
qemu-img: error while reading at byte 11911820800: Input/output error
TASK ERROR: unable to create VM 100 - cannot import from 'esxi:ha-datacenter/datastore/esxi/nvm.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -t none -f vmdk -O raw /run/pve/import/esxi/esxi-host/mnt/ha-datacenter/datastore/vm.vmdk zeroinit:/dev/zvol/rpool/data/vm-100-disk-0' failed: exit code 1
Reboot the ESXi host. I had better luck rebooting vs just restarting the services.
 
Still not working for me after adding the code to the config.xml file and restarting the hostd service. ESXi 6.5.0 Update 3 (Build 13932383). As usual, it fails after ~8 minutes.

Does it matter where the code is added in the XML file?

Error:
Code:
transferred 10.8 GiB of 60.0 GiB (18.05%)
qemu-img: error while reading at byte 11911820800: Input/output error
TASK ERROR: unable to create VM 100 - cannot import from 'esxi:ha-datacenter/datastore/esxi/nvm.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -t none -f vmdk -O raw /run/pve/import/esxi/esxi-host/mnt/ha-datacenter/datastore/vm.vmdk zeroinit:/dev/zvol/rpool/data/vm-100-disk-0' failed: exit code 1
I wouldn't nest <soap> inside anything. We put it above the <log> section, but I think anywhere directly under the root <config> element should be fine.

For example:
XML:
<config>
<!--  Host agent configuration file for ESX/ESXi -->


   <!-- the version of this config file -->
   <version>6.5.0.0</version>


   <!-- working directory  -->
   <workingDir>/var/log/vmware/</workingDir>


   <!-- location to examine for configuration files that are needed -->
   <defaultConfigPath> /etc/vmware/ </defaultConfigPath>


   <!-- stdout for hostd process  -->
   <!-- <stdoutFile>/var/log/vmware/hostd-stdout.txt</stdoutFile> --> 


   <!-- stderr for hostd process  -->
   <!-- <stderrFile>/var/log/vmware/hostd-stderr.txt</stderrFile> --> 


   <!-- Memory death point for hostd -->
   <hostdStopMemInMB> 316 </hostdStopMemInMB>


   <!-- Memory watermark for hostd -->
   <hostdWarnMemInMB> 256 </hostdWarnMemInMB>


   <!-- hostd memory per VM estimate -->
   <hostdPerVMMemInKB> 1024 </hostdPerVMMemInKB>


   <!-- hostd base memory estimate -->
   <hostdBaseMemInMB> 217 </hostdBaseMemInMB>


   <!-- hostd min num of fds -->
   <!-- Override by vmacore/threadPool/MaxFdsPerThread  -->
   <hostdMinFds> 3072 </hostdMinFds>


   <!-- hostd absolute max num of fds -->
   <hostdMaxFds> 4096 </hostdMaxFds>


   <!-- hostd mmap threshold in kilo bytes -->
   <hostdMmapThreshold> 32 </hostdMmapThreshold>


   <!-- Maximum number of hostd threads -->
   <hostdMaxThreads> 128 </hostdMaxThreads>


   <!-- Mode in which hostd runs: defines product type -->
   <hostdMode> esx </hostdMode>


   <!-- Frequency of memory checker -->
   <!-- Disabled pending resolution of stack size issue -->
   <!-- memoryCheckerTimeInSecs> 0 </memoryCheckerTimeInSecs -->


   <!-- Time duration for which sync primitive is allowed to be locked
        before logging a warning -->
   <!-- lockDurationTresholdInMillis> 0 </lockDurationTresholdInMillis -->


   <!-- Code coverage feature -->
   <!--  <coverage>
           <enabled>false</enabled>
           <dataDir>/tmp</dataDir>
         </coverage> -->


   <!-- Added SOAP for migration to Proxmox -->
   <soap>
      <sessionTimeout>0</sessionTimeout>
      <maxSessionCount>0</maxSessionCount>
   </soap>


   <log>
      <!-- controls where rolling log files are stored -->
      <directory>/var/log/vmware/</directory>
 
