New Import Wizard Available for Migrating VMware ESXi Based Virtual Machines

I started importing with the default import config and the process was so slow that I had to cancel it.
I was at 5% of 64 GB after 2 hours!


What can I do? Is there anything more to do before migration?
Make sure there are no snapshots on the VM you are trying to import (surprisingly, this is extremely important for performance).
Make sure the management interfaces on both the VMware server and the Proxmox server are fast (i.e. not 1 Gb unless everything else is 1 Gb anyway).
Put the management interfaces on the same subnet so no routers slow down the transfer.
Test network connectivity from the Proxmox host:
Code:
ping -s 15000 -c 100 -q -A vmwarehost


1 Gb to 1 Gb on my homelab looks like:
PING 10.254.201.23 (10.254.201.23) 15000(15028) bytes of data.

--- 10.254.201.23 ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99ms
rtt min/avg/max/mdev = 0.527/0.589/0.910/0.070 ms, ipg/ewma 1.001/0.575 ms

A 10 Gb link looks more like:
PING vm8 (10.0.2.208) 15000(15028) bytes of data.

--- vm8.ces.cvnt.net ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 141ms
rtt min/avg/max/mdev = 0.098/0.155/0.237/0.030 ms, ipg/ewma 1.424/0.152 ms

The ping specifically uses 15,000 bytes so that it also verifies things like MTU are set correctly on both ends: it's bigger than both normal and typical jumbo frames, so it forces fragmentation.
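If you also want to verify that jumbo frames make it through end to end without fragmentation, a quick extra check (just a sketch, assuming a 9000-byte MTU; adjust the size and hostname to your setup) would be:
Code:
# 8972 = 9000 MTU - 20-byte IP header - 8-byte ICMP header; -M do forbids fragmentation
ping -M do -s 8972 -c 10 vmwarehost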
 
Thanks a lot for the reply. The snapshot seemed to be the problem. After deleting it, the migration went through without problems!
 
This is a very nice tool, but it only works in about 50% of cases. The latest error I got from VMware 6.5.0 Update 3 (Build 18678235) is "cannot stat vmdk file", and it happens immediately at the beginning of the import process. I have imported other VM guests from the same host, so it is a question of the particular guest.

The second error concerns VMDK and guest names with spaces; the ESXi import tool does not like them and fails with an error.
 
Is there any way to increase the timeout when adding an ESXi via vCenter?

I know that vCenter isn't recommended, but the ESXi hosts are part of an OVH HPC cluster and we can't get access to them.

When adding it with the web UI it times out at 30 seconds. Adding the storage via the CLI with pvesm add esxi asd --server vcenterHost --username USERNAME --password times out at 60 seconds. Due to the number of VMs in vCenter, it takes around 4 minutes to return the full list of VMs; tcpdump does show traffic for that long.
 
Same problem here. We have 4 ESXi hosts (7.0.3) hosting around 150 VMs. It is impossible to connect to the vSphere instance because of a timeout, but it works when accessing the hypervisors directly.

But the import process fails immediately; I am investigating that.
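For reference, adding one hypervisor directly from the Proxmox CLI looks like this (hypothetical host name and storage ID, same syntax as the pvesm command quoted earlier in the thread):
Code:
pvesm add esxi esx01 --server esx01.example.com --username root --password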
 
Have you set "Config.HostAgent.vmacore.soap.maxSessionCount" to "0" on the host?
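If it helps, one way to set it from the ESXi shell would be something like this (a sketch, assuming SSH access; the value can also be changed in the host UI under Advanced Settings):
Code:
# set the advanced option Config.HostAgent.vmacore.soap.maxSessionCount to 0, as suggested above
esxcli system settings advanced set -o /Config/HostAgent/vmacore/soap/maxSessionCount -i 0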
 
Thanks for your reply.

Not yet, as I don't think I have any timeout problem so far.

I can get the list of VMs, and when I try to import one I get an error like this within half a second (so a default 30-minute timeout should not be a concern):

Create full clone of drive (migration-esx-A:ha-datacenter/i:SC5020-C201-1:SCVOL-VM-04/BORE1/BORE1.vmdk)
Formatting '/var/lib/vz/images/102/vm-102-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=161061273600 lazy_refcounts=off refcount_bits=16
transferred 0.0 B of 150.0 GiB (0.00%)
qemu-img: error while reading at byte 0: Input/output error
TASK ERROR: unable to create VM 102 - cannot import from 'migration-esx-A:ha-datacenter/i:SC5020-C201-1:SCVOL-VM-04/BORE1/BORE1.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O qcow2 /run/pve/import/esxi/migration-esx-A/mnt/ha-datacenter/i:SC5020-C201-1:SCVOL-VM-04/BORE1/BORE1.vmdk zeroinit:/var/lib/vz/images/102/vm-102-disk-0.qcow2' failed: exit code 1

And it's the same scenario whether I choose a CEPH destination, a local one, a RAW disk or whatever.

But I don't know how to investigate further; the command does not seem to allow more verbosity, and the syslog contains the very same information.

So maybe there is something useful on the ESXi side, but I have not been able to find it so far.
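Maybe one more thing to try (just a sketch, reusing the path from the error above) would be to read the vmdk through the FUSE import mount directly, to see whether the I/O error already shows up there:
Code:
# can the descriptor and its extents be opened through the import mount at all?
qemu-img info '/run/pve/import/esxi/migration-esx-A/mnt/ha-datacenter/i:SC5020-C201-1:SCVOL-VM-04/BORE1/BORE1.vmdk'
# read the disk and throw the data away, just to see where the read error appears
qemu-img convert -p -n -f vmdk -O raw '/run/pve/import/esxi/migration-esx-A/mnt/ha-datacenter/i:SC5020-C201-1:SCVOL-VM-04/BORE1/BORE1.vmdk' /dev/null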

Regards
 
Please set this option; the problem is the number of connections.
Did you delete all snapshots and wait for the source VM to power down?
 
In fact I had not noticed that the VM had been powered on again. As the error message does not give any clue, I would have missed this hypothesis! Thanks
 
I'm testing migrations to a 3-host PVE 8.2 cluster from a 3-host ESXi 6.7 cluster with an FC-connected Pure storage array.
I'm finding I can't migrate systems whose disks are on the Pure's datastore:
qemu-img: Could not open '/run/pve/import/esxi/lsgprodesx03/mnt/ha-datacenter/Pure-vvol/rfc4122.1c03000d-3acc-4651-9190-1b12ace04994/lsgnetbox.lstaff.com.vmdk': Could not open '/run/pve/import/esxi/lsgprodesx03/mnt/ha-datacenter/Pure-vvol/rfc4122.1c03000d-3acc-4651-9190-1b12ace04994/vvol://b69dd815831433fb-a62aeb101bea0b52/rfc4122.b77d81f3-3fa3-4945-aef8-9a9cc60b0633': No such file or directory
TASK ERROR: unable to create VM 104 - cannot import from 'lsgprodesx03:ha-datacenter/Pure-vvol/rfc4122.1c03000d-3acc-4651-9190-1b12ace04994/lsgnetbox.lstaff.com.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O qcow2 /run/pve/import/esxi/lsgprodesx03/mnt/ha-datacenter/Pure-vvol/rfc4122.1c03000d-3acc-4651-9190-1b12ace04994/lsgnetbox.lstaff.com.vmdk zeroinit:/mnt/pve/datastore/images/104/vm-104-disk-0.qcow2' failed: exit code 1

If I migrate the storage from the Pure array to the local datastore, I can migrate successfully. Is this a known issue, or am I doing something wrong?
 
No, you're not doing anything wrong, but you don't seem to have studied the storage matter in depth (this is not an accusation).

You seem to be using VMware VVOLs, which is a different type of VM storage.
Classically, you have a datastore formatted with VMFS and VMDK files on it; with VVOLs, the VVOLs (LUNs) are passed directly to the VM as virtual disks.
For this reason, the importer naturally cannot find a file to import.

If you migrate the VM to a normal datastore (which can also be a LUN on the Pure), the virtual disk is converted back into a VMDK and can be processed by the importer.
 
ahhh... that makes perfect sense.
You are correct, there was a gap in my knowledge on the functionality of VVOLs. Thank you.
 
Hi everyone!

I was struggling with this issue as well - not being able to connect to an old ESXi 6.5 server. Adding the SOAP setting didn't help; I was always getting a "Connection Timeout" error.

Running journalctl -xe revealed a bit more, including a "rate limited" error and, crucially, which datastore was causing the issue. Upon inspecting the ESXi datastores, I discovered that the problem was with the datastore itself: an NFS mount that wasn't working due to a changed path on the connected NAS. Unfortunately, ESXi didn't provide any helpful information or errors about this. Identifying which VMs were connected to this datastore was a challenge (I had to go through 50 VMs manually). After removing all mounts to the problematic datastore and deleting it from the system, Proxmox was finally able to connect and migrate as usual.

Just a heads up: make sure all datastores are accessible from your ESXi before troubleshooting further.
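A quick way to check that from the ESXi shell (assuming SSH is enabled; these are standard esxcli commands) might be:
Code:
# list all mounted filesystems/datastores
esxcli storage filesystem list
# list NFS datastores and whether they are currently accessible
esxcli storage nfs list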
 
Hello everyone.

For me, adding the SOAP options doesn't work either, and I always get the Timeout message.

Unfortunately, my scenario is a bit worse, because my ESXi is 6.0 (neither 6.5 nor 6.7). Should this SOAP trick work for ESXi 6.0 too?

I have the following message in the logs on my pve01:
Code:
Sep 02 13:12:34 pve01 systemd[1]: Started session-247.scope - Session 247 of User root.
Sep 02 13:12:34 pve01 login[2728615]: ROOT LOGIN  on '/dev/pts/0'
Sep 02 13:13:00 pve01 pvedaemon[2695659]: command '/bin/umount /run/pve/import/esxi/teste/mnt' failed: exit code 32
Sep 02 13:15:24 pve01 systemd[1]: session-247.scope: Deactivated successfully.
Sep 02 13:15:24 pve01 systemd[1]: session-247.scope: Consumed 1.919s CPU time.
Sep 02 13:15:24 pve01 systemd-logind[666]: Session 247 logged out. Waiting for processes to exit.
Sep 02 13:15:24 pve01 systemd-logind[666]: Removed session 247.

Thanks for the help.
 
I haven't tried 6.0 yet. Is an in-place upgrade to 6.5 or 6.7 an option for you?
 
Hi.
Unfortunately not. With the purchase of VMware by Broadcom, everything became very nebulous.

Here is the content of my config.xml and vpxa.cfg files, so you can check that the SOAP options are inserted in the correct places.

hostd/config.xml
Code:
     <rootPasswdExpiration>false</rootPasswdExpiration>              
                                                       
      <soap>          
        <sessionTimeout>0</sessionTimeout>
        <maxSessionCount>0</maxSessionCount>                        
      </soap>              
                                   
      <ssl>                                                        
          <doVersionCheck> false </doVersionCheck>
          <useCompression>true</useCompression>
          <libraryPath>/lib/</libraryPath>          
      </ssl>
                                             
      <vmdb>                                          
         <!-- maximum number of VMDB connections allowed -->
         <!-- <maxConnectionCount>100</maxConnectionCount> -->
      </vmdb>

vpxa/vpxa.cfg
Code:
 <vmacore>                          
    <http>                  
      <defaultClientPoolConnectionsPerServer>300</defaultClientPoolConnectionsPerServer>
    </http>
    <soap>                                          
      <maxSessionCount>6000</maxSessionCount>
      <sessionTimeout>1440</sessionTimeout>                
    </soap>
    <ssl>
      <doVersionCheck>false</doVersionCheck>
    </ssl>                    
    <threadPool>
 
After you set the option, did you also either reboot or at least restart the management agents? https://knowledge.broadcom.com/external/article/320280/restarting-the-management-agents-in-esxi.html
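From the ESXi shell that is roughly the following (per the KB above; it can also be done from the DCUI):
Code:
# restart the host agent and the vCenter agent
/etc/init.d/hostd restart
/etc/init.d/vpxa restart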

Also, one other option you have NOW [1] would be to use Veeam to back up and restore. Veeam 12 supports both ESXi 6.x and Proxmox [2] (even the community edition, which is free for up to 10 nodes, and even that you can "cheat" somewhat by revoking licenses from servers already migrated), which MIGHT let you migrate between the two. You would just need some temporary storage for each of your servers, or if you already have Veeam 12, you could try it from there anyway.

[1] https://forum.proxmox.com/threads/v...ry-supports-proxmox-as-of-28-aug-2024.153571/
[2] https://www.veeam.com/products/veeam-data-platform/system-requirements.html

(EDIT: Of course I know it would be a longer process and not as neat, but I know from experience that, if you run with outdated software and want to upgrade, sometimes "neat" is not an option)
 
I assume you have rebooted the server. Don't just restart the services. Reboot.
Do what you need to get those VMs moved.
There are 30-day ESXi trial licenses; install a 6.7 or 7.0 trial to a different boot drive. Worst case, just scp the disk images and run qemu-img convert, or use StarWind V2V: https://www.starwindsoftware.com/starwind-v2v-converter
There are lots of ways to get this done.
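The fully manual route is roughly this (a sketch with hypothetical paths and VM IDs, assuming the source VM is powered off):
Code:
# copy the descriptor and the -flat data file from the ESXi datastore to the PVE host
scp "root@esxihost:/vmfs/volumes/datastore1/myvm/myvm*.vmdk" /var/lib/vz/import-tmp/
# convert to qcow2 as the disk of an already-created, empty VM 100 (adjust target storage and format to taste)
qemu-img convert -p -f vmdk -O qcow2 /var/lib/vz/import-tmp/myvm.vmdk /var/lib/vz/images/100/vm-100-disk-0.qcow2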
 
I would look into using VMware's own OVF Tool, which lets you export the VMs directly out of VMware; then you can use qemu-img convert to convert them over. I've done this a few times, well before the native tool was added to Proxmox.

You run the tool directly on PVE, and it connects to the VMware host. You can run it on any Debian-based VM as well. Just make sure you have plenty of storage to hold the VMware VMs until they are converted over.

https://developer.broadcom.com/tools/open-virtualization-format-ovf-tool/latest
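A rough sketch of that workflow (hypothetical names; the exported disk file name may differ):
Code:
# export the VM from the ESXi host into ./export/ as an OVF package (prompts for credentials)
ovftool vi://root@esxihost/myvm ./export/
# convert the exported VMDK to qcow2 for Proxmox
qemu-img convert -p -f vmdk -O qcow2 ./export/myvm/myvm-disk1.vmdk vm-100-disk-0.qcow2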
 
