New Import Wizard Available for Migrating VMware ESXi Based Virtual Machines

I am trying to connect to an ESXi host, version 8.0.3, that is part of a VMware cluster. I am connecting directly to the host, but when I try to import from it, I get the following error:

create storage failed: Skipping vCLS agent VM: vCLS-4c4c4544-0058-5410-8057-c3c04f485132 'NoneType' object has no attribute 'files' (500)

I am running pve-esxi-import-tools 7.2.4. Looking at the Python script, it has already been updated to skip the vCLS items, based on this:

Code:
with connect_to_esxi_host(connection_args) as connection:
    data = {}
    for vm in list_vms(connection):
        # drop vCLS machines
        if is_vcls_agent_vm(vm):
            print(f"Skipping vCLS agent VM: {vm.name}", file=sys.stderr)
            continue
        # drop vms with empty datastore
        if is_diskless_vm(vm):
            print(f"Skipping diskless VM: {vm.name}", file=sys.stderr)
            continue
        try:
            fetch_and_update_vm_data(vm, data)
        except Exception as err:
            print(
                f"Failed to get info for VM {vm.name}: {err}",
                file=sys.stderr,
            )

json.dump(data, sys.stdout, indent=2, default=json_dump_helper)

I am working in a production environment and cannot put the cluster into Retreat Mode (to remove the vCLS VMs) at this time. Is that the only option I have right now? Proxmox cannot directly access the datastore on the VMware host, as it cannot read the LUN.
 
I had that error until I ran `apt install --reinstall pve-esxi-import-tools=0.7.2` to roll it back to an older version.
 
Hi sdigeso,

I assume you are running Proxmox VE 8.x -- is this correct?

The fixes for the behaviour you describe have been applied in v 1.0.1 of pve-esxi-import-tools in Proxmox VE 9.x, but have not been backported to Proxmox VE 8.x yet:
Diff:
@@ -142,7 +142,7 @@ def json_dump_helper(obj: Any) -> Any:
     Raises:
         TypeError: If the conversion of the object is not supported.
     """
-    if dataclasses.is_dataclass(obj):
+    if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
         return dataclasses.asdict(obj)
 
     raise TypeError(
@@ -279,14 +279,23 @@ def main():
     with connect_to_esxi_host(connection_args) as connection:
         data = {}
         for vm in list_vms(connection):
-            # drop vCLS machines
-            if is_vcls_agent_vm(vm):
-                print(f"Skipping vCLS agent VM: {vm.name}", file=sys.stderr)
-                continue
-            # drop vms with empty datastore
-            if is_diskless_vm(vm):
-                print(f"Skipping diskless VM: {vm.name}", file=sys.stderr)
+            # If figuring out any of this fails, we just skip...
+            try:
+                # drop vCLS machines
+                if is_vcls_agent_vm(vm):
+                    print(f"Skipping vCLS agent VM: {vm.name}", file=sys.stderr)
+                    continue
+                # drop vms with empty datastore
+                if is_diskless_vm(vm):
+                    print(f"Skipping diskless VM: {vm.name}", file=sys.stderr)
+                    continue
+            except Exception as err:
+                print(
+                    f"Unexpected error trying to look at VM {vm.name}: {err}",
+                    file=sys.stderr,
+                )
                 continue
+
             try:
                 fetch_and_update_vm_data(vm, data)
             except Exception as err:

You could try applying the changes to the script at `/usr/libexec/pve-esxi-import-tools/listvms.py` yourself by moving the checks for vCLS VMs and for VMs with an empty datastore into the new `try:` block. You'll most likely not need the first change mentioned in the diff, but it won't hurt if you apply it too.
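To make the manual edit easier to follow, this is roughly what the loop in `listvms.py` should look like after the change -- it is simply the '+' side of the second hunk above reassembled (indentation relative to the surrounding function omitted):
Code:
with connect_to_esxi_host(connection_args) as connection:
    data = {}
    for vm in list_vms(connection):
        # If figuring out any of this fails, we just skip...
        try:
            # drop vCLS machines
            if is_vcls_agent_vm(vm):
                print(f"Skipping vCLS agent VM: {vm.name}", file=sys.stderr)
                continue
            # drop vms with empty datastore
            if is_diskless_vm(vm):
                print(f"Skipping diskless VM: {vm.name}", file=sys.stderr)
                continue
        except Exception as err:
            print(
                f"Unexpected error trying to look at VM {vm.name}: {err}",
                file=sys.stderr,
            )
            continue

        try:
            fetch_and_update_vm_data(vm, data)
        except Exception as err:
            print(
                f"Failed to get info for VM {vm.name}: {err}",
                file=sys.stderr,
            )

(For reference, the first hunk only guards `json_dump_helper` against being handed the dataclass *class* itself rather than an instance: `dataclasses.is_dataclass()` returns True for both, while `dataclasses.asdict()` only accepts instances.)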

If it -- for some reason -- does not help, you can easily revert the changes by running `apt install --reinstall pve-esxi-import-tools`.

Downgrading to 0.7.2 will most likely not help in your case, as 0.7.2 cannot handle vCLS machines from newer ESXi 8 clusters (their disks are no longer stored on datastores, but on the local ESXi filesystem).

I hope this helps.
 
Daniel,
You are correct. I was running Proxmox 8.4.11. I will update to 9 and test again. Thank you for your response.
 
Thanks for this great feature -- importing VMs from ESXi to PVE.

Background: I have a VM on ESXi 6.7 with a 1 TB thin-provisioned hard disk. The actual VMDK file of the VM is only 70 GB.
Question: When I import the VM via the PVE 8.4 web GUI, it seems to transfer the full 1 TB from ESXi to PVE, which takes a long time. Is this the expected behaviour? Is there any way to import only the actual 70 GB?
 
Does your PVE storage support thin provisioning?
Did you check the size of the disk after import?

For our migrations we had long migration times, but the end result was always correctly thin provisioned. I don't know whether this is down to the ESXi API or to the handling on PVE's side, but it works -- as long as your PVE storage supports thin provisioning.

You can use Clonezilla to speed up the migrations.
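One way to check the second point after the import (a minimal sketch, assuming a file-based storage with qcow2 images; the path is only an example) is to compare the provisioned size with the actual allocation reported by `qemu-img`:
Code:
#!/usr/bin/env python3
# Compare provisioned vs. actually allocated size of an imported qcow2 disk.
# The image path is an example -- adjust it to your storage layout.
import json
import subprocess

path = "/mnt/pve/proxmoxHost/images/113/vm-113-disk-0.qcow2"

info = json.loads(
    subprocess.run(
        ["qemu-img", "info", "--output=json", path],
        capture_output=True, text=True, check=True,
    ).stdout
)

print(f"provisioned: {info['virtual-size'] / 1024**3:.1f} GiB")
print(f"allocated:   {info['actual-size'] / 1024**3:.1f} GiB")

If the storage is thin provisioned, the allocated number should be close to the real data size (your 70 GB), even though the provisioned size shows the full 1 TB.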
 
Yes.
 
Migrated the VMs (20 of them) from my old two-node VMware Essentials Plus setup to a new three-node Proxmox 9 + Ceph cluster.
Obviously, I had to do it on a day when I could shut down the VMs, but apart from that, everything worked perfectly.
Thank you, thank you, thank you!!! :D
 
Hi,

After applying the workaround from this (link) Proxmox forum post, I was able to start an import with "Live Import" while the VM was powered OFF on the VMware side. However, when I try to power that VM ON on the VMware side, the import process is interrupted after a few seconds/minutes with the output below.

Code:
Formatting '/mnt/pve/proxmoxHost/images/113/vm-113-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=161061273600 lazy_refcounts=off refcount_bits=16
scsi0: successfully created disk 'proxmoxHost:113/vm-113-disk-0.qcow2,size=150G'
pinning machine type to 'pc-i440fx-10.0+pve1' for Windows guest OS
restore-scsi0: transferred 0.0 B of 150.0 GiB (0.00%) in 0s
restore-scsi0: transferred 96.0 MiB of 150.0 GiB (0.06%) in 1s
restore-scsi0: transferred 192.0 MiB of 150.0 GiB (0.12%) in 2s
restore-scsi0: transferred 288.0 MiB of 150.0 GiB (0.19%) in 3s
restore-scsi0: transferred 352.0 MiB of 150.0 GiB (0.23%) in 4s
restore-scsi0: transferred 368.0 MiB of 150.0 GiB (0.24%) in 5s
(...)
restore-scsi0: transferred 1.2 GiB of 150.0 GiB (0.77%) in 29s
restore-scsi0: transferred 1.2 GiB of 150.0 GiB (0.78%) in 30s
restore-scsi0: transferred 1.2 GiB of 150.0 GiB (0.80%) in 31s
restore-scsi0: Cancelling block job
restore-scsi0: Done.
An error occurred during live-restore: block job (stream) error: restore-scsi0: Input/output error (io-status: ok)

TASK ERROR: live-restore failed

I'm using NFS storage with qcow2 as the target format.

Has anyone faced a similar problem?
 
I believe the point of the live import is that the VM then runs on the PVE side only. Basically, PVE boots the VM from the disks that still sit on the ESXi host, and PVE is now the host for the VM. There is no need to turn it on on the ESXi side any more. In my experience it's usually too slow to even boot that way and is generally slower than the offline import.
 
Well... that makes sense.

If we want to keep the VM fully up during the import process, how do we make sure it stays reachable on the network if we uninstall VMware Tools? Without those, vmxnet3 will go down. We could switch to e1000 or something before the migration, but then, in order to configure VirtIO later, we would again have some downtime.
 

Are you talking about Windows or Linux VMs? For Linux VMs I preinstall the VirtIO drivers before shutdown. Not sure about Windows, as that's less than 5% of our VMs (and we generally just take the downtime hit of moving them powered off).
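For the Linux case, a rough pre-flight check could look like the sketch below (assumptions: a Debian/Ubuntu guest where `lsinitramfs` exists; dracut-based distributions would use `lsinitrd` instead). It just confirms the VirtIO drivers are built into the kernel or packed into the initramfs before the final shutdown:
Code:
#!/usr/bin/env python3
# Pre-migration sanity check: are the VirtIO drivers available to the guest?
# Sketch for Debian/Ubuntu guests (initramfs-tools); adjust for dracut distros.
import platform
import subprocess

kernel = platform.release()
initrd = f"/boot/initrd.img-{kernel}"

# Modules needed for disk and network once the VM runs on VirtIO hardware.
wanted = ("virtio_pci", "virtio_blk", "virtio_scsi", "virtio_net")

initramfs_files = subprocess.run(
    ["lsinitramfs", initrd], capture_output=True, text=True, check=True
).stdout
builtin_modules = open(f"/lib/modules/{kernel}/modules.builtin").read()

for mod in wanted:
    ok = mod in initramfs_files or mod in builtin_modules
    print(f"{mod}: {'ok' if ok else 'MISSING'}")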
 
@piotrpierzchala

The process I do for Windows is as follows:
1. Add the VirtIO ISO to the VM in VMware
2. Uninstall the VMware Tools
3. Install the VirtIO Guest Tools/Drivers package
4. Shut down the VM
5. Import to PVE (make sure to tick the "prepare for VirtIO SCSI" box)
6. Make sure the boot disk is set as SATA, and change the NIC from VMXNET3 (or whatever it was) to paravirtualized (VirtIO)
7. If there is only one virtual disk on the system, add a second one, but as SCSI (this will make Windows install and activate the SCSI driver)
8. Boot up and log in, change the NIC to the IP you need it to be and verify the SCSI disk is showing in Disk Management
9. Shut down the VM, detach the boot disk and reattach it as SCSI (see the sketch after this list)
10. Boot up and verify everything is good to go
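For step 9 (and the final boot in step 10), here is a rough sketch of how the detach/re-attach could be scripted from the PVE node. The VMID, storage name and volume ID are example values taken from an earlier post in this thread; treat this as an illustration rather than a tested tool:
Code:
#!/usr/bin/env python3
# Hypothetical helper for step 9: detach the SATA boot disk, re-attach it as
# SCSI, point the boot order at it and start the VM (step 10).
import subprocess

VMID = "113"                                    # example VMID
VOLUME = "proxmoxHost:113/vm-113-disk-0.qcow2"  # example volume ID from the import

def qm(*args: str) -> None:
    """Run a qm command on the PVE node and stop on the first error."""
    subprocess.run(["qm", *args], check=True)

qm("set", VMID, "--delete", "sata0")      # detach the boot disk (it becomes "unused")
qm("set", VMID, "--scsi0", VOLUME)        # re-attach the same volume as SCSI
qm("set", VMID, "--boot", "order=scsi0")  # boot from the SCSI disk
qm("start", VMID)                         # step 10: boot up and verify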
 
There are a lot of alternative ways to migrate; maybe one of them is more suitable for you. The following, for example, is good if you want minimal downtime during the migration, and I remember that @Falk R. mentioned in the German forum that, combined with a batch file and some other scripting, it can be automated:
https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Attach_Disk_&_Move_Disk_(minimal_downtime)

Here is a German post where he published a modified VirtIO ISO with a batch file that installs the VirtIO driver BEFORE the migration, reducing the downtime even more: