New Import Wizard Available for Migrating VMware ESXi Based Virtual Machines

Another option is to use an NFS share between Proxmox and VMware: first migrate the VMware VM to NFS with live Storage vMotion, stop the VM, start it on the Proxmox side, then do a live storage migration from the NFS VMDK to the final storage.
I migrated around 500 VMs last year with this method, and it works really well, with only a few seconds of downtime (just a stop/start of the VM).
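As a rough sketch, the PVE side of this method could look like the following; the storage name nfs-migrate, the server IP, VMID 100, the target storage local-zfs, and the disk filename are all placeholders, and the Storage vMotion onto the share happens on the vSphere side:

```shell
# Add the NFS share that both hypervisors can reach
pvesm add nfs nfs-migrate --server 192.0.2.10 --export /export/migrate --content images

# With the source VM stopped and its VMDK on the share,
# attach the disk to a pre-created PVE VM and boot it
qm set 100 --scsi0 nfs-migrate:100/vm-100-disk-0.vmdk
qm start 100

# Live storage migration from the NFS VMDK to the final storage
# (on older PVE releases the command is spelled "qm move-disk")
qm disk move 100 scsi0 local-zfs --delete 1
```

The final `qm disk move` runs while the VM stays online, which is what keeps the downtime to a single stop/start.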
Do you leave them in VMDK format? Can it cope with the flat-file variants and such? I've been doing something close to this, but still importing the disks to raw or qcow2.
 
Do you leave them in VMDK format? Can it cope with the flat-file variants and such? I've been doing something close to this, but still importing the disks to raw or qcow2.
When you migrate to the final datastore, you can choose the format, unless the datastore only supports raw (e.g. a ZFS pool or Ceph).
 
I got an error when importing from ESXi 6.5 to Proxmox.

Code:
create full clone of drive (lab1:ha-datacenter/datastore-local/testa/testa.vmdk)
Formatting '/var/lib/vz/images/753/vm-753-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=214748364800 lazy_refcounts=off refcount_bits=16
transferred 0.0 B of 200.0 GiB (0.00%)
transferred 2.0 GiB of 200.0 GiB (1.00%)
transferred 4.0 GiB of 200.0 GiB (2.00%)
transferred 6.0 GiB of 200.0 GiB (3.00%)
transferred 8.0 GiB of 200.0 GiB (4.00%)
transferred 10.0 GiB of 200.0 GiB (5.00%)
transferred 12.0 GiB of 200.0 GiB (6.01%)
transferred 14.0 GiB of 200.0 GiB (7.01%)
qemu-img: error while reading at byte 16508776960: Input/output error
TASK ERROR: unable to create VM 753 - cannot import from 'lab1:ha-datacenter/datastore-local/testa/testa.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O qcow2 /run/pve/import/esxi/lab1/mnt/ha-datacenter/datastore-local/testa/testa.vmdk zeroinit:/var/lib/vz/images/753/vm-753-disk-0.qcow2' failed: exit code 1

What should I do to fix it?

Thanks
 
Code:
qemu-img: error while reading at byte 16508776960: Input/output error

What should I do to fix it?
One thing worth a try is updating pve-esxi-import-tools to version 0.6.1, which was uploaded yesterday evening. After the update you need to disable the ESXi storage and then re-enable it to ensure the new version is used.

That version includes another improvement we found that should dramatically reduce the number of sessions used, hopefully also with those older ESXi versions; this error often comes from ESXi running out of sessions.
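On the PVE host, the suggested update and storage re-enable could look roughly like this (the storage ID lab1 is taken from the task log above; substitute your own):

```shell
# Pull in the updated import tools
apt update && apt install pve-esxi-import-tools

# Disable and re-enable the ESXi storage so the new helper is picked up
pvesm set lab1 --disable 1
pvesm set lab1 --disable 0
```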
 
One thing worth a try is updating pve-esxi-import-tools to version 0.6.1, which was uploaded yesterday evening. After the update you need to disable the ESXi storage and then re-enable it to ensure the new version is used.

That version includes another improvement we found that should dramatically reduce the number of sessions used, hopefully also with those older ESXi versions; this error often comes from ESXi running out of sessions.

I already use version 0.6.1, but the result is still the same.

Code:
root@pve2:~# apt search pve-esxi-import-tools
Sorting... Done
Full Text Search... Done
pve-esxi-import-tools/stable,now 0.6.1 amd64 [installed,automatic]
  Tools to allow importing VMs from ESXi hosts

pve-esxi-import-tools-dbgsym/stable 0.6.1 amd64
  debug symbols for pve-esxi-import-tools

Code:
create full clone of drive (lab1:ha-datacenter/datastore-local/Middleware/Middleware_0.vmdk)
  Logical volume "vm-753-disk-0" created.
transferred 0.0 B of 16.0 GiB (0.00%)
transferred 163.8 MiB of 16.0 GiB (1.00%)
[... repetitive progress lines (2% – 95%) trimmed ...]
transferred 15.4 GiB of 16.0 GiB (96.09%)
qemu-img: error while reading at byte 16609440256: Input/output error
  Logical volume "vm-753-disk-0" successfully removed.
TASK ERROR: unable to create VM 753 - cannot import from 'lab1:ha-datacenter/datastore-local/Middleware/Middleware_0.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O raw /run/pve/import/esxi/lab1/mnt/ha-datacenter/datastore-local/Middleware/Middleware_0.vmdk /dev/Nas-TrueNas01/vm-753-disk-0' failed: exit code 1

Note:
1. I tried both local storage and the NAS; the result is the same: qemu-img: error while reading at byte 16609440256: Input/output error
2. The NAS connection uses iSCSI over a 2x10G bonded interface
 
When you migrate to the final datastore, you can choose the format, unless the datastore only supports raw (e.g. a ZFS pool or Ceph).
Thanks, but you said downtime of a few seconds. Can you fire up the Proxmox VM using the VMDK disks before you move them back to the final datastore?
 
I got an error when importing from ESXi 6.5 to Proxmox.

Code:
create full clone of drive (lab1:ha-datacenter/datastore-local/testa/testa.vmdk)
Formatting '/var/lib/vz/images/753/vm-753-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=214748364800 lazy_refcounts=off refcount_bits=16
transferred 0.0 B of 200.0 GiB (0.00%)
transferred 2.0 GiB of 200.0 GiB (1.00%)
transferred 4.0 GiB of 200.0 GiB (2.00%)
transferred 6.0 GiB of 200.0 GiB (3.00%)
transferred 8.0 GiB of 200.0 GiB (4.00%)
transferred 10.0 GiB of 200.0 GiB (5.00%)
transferred 12.0 GiB of 200.0 GiB (6.01%)
transferred 14.0 GiB of 200.0 GiB (7.01%)
qemu-img: error while reading at byte 16508776960: Input/output error
TASK ERROR: unable to create VM 753 - cannot import from 'lab1:ha-datacenter/datastore-local/testa/testa.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O qcow2 /run/pve/import/esxi/lab1/mnt/ha-datacenter/datastore-local/testa/testa.vmdk zeroinit:/var/lib/vz/images/753/vm-753-disk-0.qcow2' failed: exit code 1

What should I do to fix it?

Thanks

What version of ESXi do you have? Have you adjusted any of the timeout settings, etc., on the VMware side? Also, make sure you don't have any snapshots, as that increases the problems (or at least lengthens the process by 4x to over 10x) when importing.
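One way to check for (and remove) snapshots is from the ESXi shell; vim-cmd ships with ESXi, and the Vmid 12 below is just an example value you would read from the first command's output:

```shell
# List VMs to find the Vmid of the source VM
vim-cmd vmsvc/getallvms

# Show the snapshot tree for that Vmid, then consolidate
# all snapshots before starting the import
vim-cmd vmsvc/snapshot.get 12
vim-cmd vmsvc/snapshot.removeall 12
```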
 
I got an error when importing from ESXi 6.5 to Proxmox.

Code:
create full clone of drive (lab1:ha-datacenter/datastore-local/testa/testa.vmdk)
Formatting '/var/lib/vz/images/753/vm-753-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=214748364800 lazy_refcounts=off refcount_bits=16
transferred 0.0 B of 200.0 GiB (0.00%)
transferred 2.0 GiB of 200.0 GiB (1.00%)
transferred 4.0 GiB of 200.0 GiB (2.00%)
transferred 6.0 GiB of 200.0 GiB (3.00%)
transferred 8.0 GiB of 200.0 GiB (4.00%)
transferred 10.0 GiB of 200.0 GiB (5.00%)
transferred 12.0 GiB of 200.0 GiB (6.01%)
transferred 14.0 GiB of 200.0 GiB (7.01%)
qemu-img: error while reading at byte 16508776960: Input/output error
TASK ERROR: unable to create VM 753 - cannot import from 'lab1:ha-datacenter/datastore-local/testa/testa.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O qcow2 /run/pve/import/esxi/lab1/mnt/ha-datacenter/datastore-local/testa/testa.vmdk zeroinit:/var/lib/vz/images/753/vm-753-disk-0.qcow2' failed: exit code 1

What should I do to fix it?

Thanks
Check out post #16 here. In order to avoid the I/O errors, we had to:

Before editing your config file, I would save a copy. Just run cp /etc/vmware/hostd/config.xml /etc/vmware/hostd/config.xml.old

We added a <soap> block below the initial comments in the XML file (designated by <!-- comment -->):
  • SSH into your VMware host
  • Run vi /etc/vmware/hostd/config.xml
  • Scroll to where you want to add the <soap> block
  • Press 'i' to enter insert mode in vi
  • Add the code below to the XML file:
  • XML:
    <soap>
        <sessionTimeout>0</sessionTimeout>
        <maxSessionCount>0</maxSessionCount>
    </soap>
  • Press Esc to leave insert mode
  • Save the file and exit back to the terminal by typing :wq! and pressing Enter
  • Restart hostd by running /etc/init.d/hostd restart
All of this is compiled from other users' comments; I just put it into a step-by-step format.

This works for VMware 6.5.
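The same edit can be scripted instead of done interactively in vi. The sketch below works on a temporary copy so it can be run anywhere; on the ESXi host you would point it at /etc/vmware/hostd/config.xml (after backing it up) and then restart hostd. It assumes the file has a single opening <config> tag, as on ESXi 6.5:

```shell
# Work on a throwaway copy; on ESXi the real file is /etc/vmware/hostd/config.xml
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<config>
  <log><level>info</level></log>
</config>
EOF

# Insert the <soap> block right after the opening <config> tag
sed -i 's|<config>|<config>\n  <soap>\n    <sessionTimeout>0</sessionTimeout>\n    <maxSessionCount>0</maxSessionCount>\n  </soap>|' "$cfg"

# Verify the block landed in the file
grep '<sessionTimeout>0</sessionTimeout>' "$cfg"
```

On the real host, finish with `/etc/init.d/hostd restart` as in the steps above.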
 
latest pve-esxi-import-tools to version 0.6.1 which got uploaded
This solved a lot. I was able to do two full offline imports after this update. My ESXi is 6.7, just for reference. Before this update I never had a successful import at all.

I still can't get the live import to work. Is there an extra step? I don't see it taking a snapshot, if it's supposed to. (Edit: I didn't RTFM; let me work on this some more and see how it goes.)
 
This solved a lot. I was able to do two full offline imports after this update. My ESXi is 6.7, just for reference. Before this update I never had a successful import at all.

I still can't get the live import to work. Is there an extra step? I don't see it taking a snapshot, if it's supposed to.

I don't think it takes a snapshot. From what I can tell, you have to manually power off the source VM before clicking the import button. Shortly after, it powers on the new VM while it's transferring. So it's live during the transfer, but you have to start with it powered off.
 
I don't think it takes a snapshot. From what I can tell, you have to manually power off the source VM before clicking the import button. Shortly after, it powers on the new VM while it's transferring. So it's live during the transfer, but you have to start with it powered off.
You are correct. Just tell me to RTFM. ;p
 
The only migrations that work for me are Linux VMs. I tried three; those work.

All 'non-basic' Linux VM migrations fail, most of them with '0.0 B transferred'. My experience with everything else so far:
  • The 'soap' fix mentioned here doesn't work.
  • There is no /etc/vmware/hostd/config.xml file on any of the 6.7U3 hosts with that soap syntax (
    Code:
    <sessionTimeout>1440</sessionTimeout>
    ) as mentioned in the other posts.
  • Though I did find a VMware KB article that pointed me towards /etc/vmware/vpxa/vpxa.cfg. I have no idea if this is the equivalent, or a red herring. The entry was in this file and was modified with the 0 timeout value.
  • The hosts are all on the same LAN, all 10G, no firewall in the way.

Any Windows VM, firewall or switch VM, Cisco WLC VMs, and 'misc' VMs (such as VMs for UPS management, etc.) all fail. They either transfer nothing, as above ('0.0 B transferred'), or they might move the files across (as for the WLC) but it won't boot.

I'm pretty keen to get off 6.7U3, but so far I don't know nearly enough about the inner workings of any of this to advance further.
 
The only migrations that work for me are Linux VMs. I tried three; those work.

All 'non-basic' Linux VM migrations fail, most of them with '0.0 B transferred'. My experience with everything else so far:
  • The 'soap' fix mentioned here doesn't work.
  • There is no /etc/vmware/hostd/config.xml file on any of the 6.7U3 hosts with that soap syntax (
    Code:
    <sessionTimeout>1440</sessionTimeout>
    ) as mentioned in the other posts.
  • Though I did find a VMware KB article that pointed me towards /etc/vmware/vpxa/vpxa.cfg. I have no idea if this is the equivalent, or a red herring. The entry was in this file and was modified with the 0 timeout value.
  • The hosts are all on the same LAN, all 10G, no firewall in the way.

Any Windows VM, firewall or switch VM, Cisco WLC VMs, and 'misc' VMs (such as VMs for UPS management, etc.) all fail. They either transfer nothing, as above ('0.0 B transferred'), or they might move the files across (as for the WLC) but it won't boot.

I'm pretty keen to get off 6.7U3, but so far I don't know nearly enough about the inner workings of any of this to advance further.
It might be better to make another post about the specialized VMs that will not boot; their failing to transfer is probably on-topic for this thread. That said, what are the current settings for the drive controller and network adapters in VMware? What error do you get when they boot? Can you give more specifics about the VMs: what OS and version are they running, etc.?
 
It might be better to make another post about the specialized VMs that will not boot; their failing to transfer is probably on-topic for this thread. That said, what are the current settings for the drive controller and network adapters in VMware? What error do you get when they boot? Can you give more specifics about the VMs: what OS and version are they running, etc.?
Generally, they simply sit at 0 bytes transferred and won't move further.

There doesn't seem to be an easy way to get logs from Proxmox without sending them to a syslog server? I am going to try to set that up today. Happy for any advice on logging. Without the logs I am flying blind a bit.

The WLC image moved across, but won't boot from the disk image, and just 'turns on' like it's a new install. I have migrated it twice:

  • These are standard Cisco virtual WLC images. The images are about 8 GB. I'm not entirely sure what they are running 'under the hood'.
  • Not using the live migration option.
  • Tried leaving the network at default. It won't boot properly, so I don't think this bit matters yet. NICs are e1000.
  • I have tried with the qcow2 format, which seems to be the default, and tried with VMDK. I am assuming it converts the destination format on the fly during the migration?
  • The native disk is of course .vmdk on the ESXi side.
  • The SCSI controller option is 'LSI 53C895A'. I am reading up on this now to see if it might be related.
  • I removed the CD-ROM from the migration, just trying to minimize any elements that might have an impact.
I am looking into Cisco forums to see if it's just a Cisco-specific thing with this image. But firewalls etc. won't move either.

Happy to provide more specifics. And I appreciate the help; I didn't really expect anyone to reach out. I was more just flagging issues I am seeing in the thread.
 
Check out post #16 here. In order to avoid the I/O errors, we had to:

Before editing your config file, I would save a copy. Just run cp /etc/vmware/hostd/config.xml /etc/vmware/hostd/config.xml.old

We added a <soap> block below the initial comments in the XML file (designated by <!-- comment -->):
  • SSH into your VMware host
  • Run vi /etc/vmware/hostd/config.xml
  • Scroll to where you want to add the <soap> block
  • Press 'i' to enter insert mode in vi
  • Add the code below to the XML file:
  • XML:
    <soap>
        <sessionTimeout>0</sessionTimeout>
        <maxSessionCount>0</maxSessionCount>
    </soap>
  • Press Esc to leave insert mode
  • Save the file and exit back to the terminal by typing :wq! and pressing Enter
  • Restart hostd by running /etc/init.d/hostd restart
All of this is compiled from other users' comments; I just put it into a step-by-step format.

This works for VMware 6.5.

I already did that, but I still got the same error.

Code:
create full clone of drive (lab1:ha-datacenter/datastore-local/Middleware/Middleware_0.vmdk)
Formatting '/var/lib/vz/images/753/vm-753-disk-0.vmdk', fmt=vmdk size=17179869184 compat6=off hwversion=undefined
transferred 0.0 B of 16.0 GiB (0.00%)
transferred 163.8 MiB of 16.0 GiB (1.00%)
[... repetitive progress lines (2% – 95%) trimmed ...]
transferred 15.4 GiB of 16.0 GiB (96.09%)
qemu-img: error while reading at byte 16575885824: Input/output error
TASK ERROR: unable to create VM 753 - cannot import from 'lab1:ha-datacenter/datastore-local/Middleware/Middleware_0.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O vmdk /run/pve/import/esxi/lab1/mnt/ha-datacenter/datastore-local/Middleware/Middleware_0.vmdk zeroinit:/var/lib/vz/images/753/vm-753-disk-0.vmdk' failed: exit code 1

I tried a live restore and got an error too, but the VM was created on PVE with a state lock. After unlocking it and powering it on, it runs. But a VM that uses 2 disks gets stuck in maintenance mode. I will try to troubleshoot again.

Code:
restore-scsi0: stream-job finished
restore-drive jobs finished successfully, removing all tracking block devices
An error occurred during live-restore: VM 753 qmp command 'blockdev-del' failed - Node 'drive-scsi0-restore' is busy: node is used as backing hd of '#block295'

TASK ERROR: live-restore failed
 
Generally, they simply sit at 0 bytes transferred and won't move further.
However, this is not a problem for the respective guest OS.
There doesn't seem to be an easy way to get logs from Proxmox without sending them to a syslog server? I am going to try to set that up today. Happy for any advice on logging. Without the logs I am flying blind a bit.

The WLC image moved across, but won't boot from the disk image, and just 'turns on' like it's a new install. I have migrated it twice:

  • These are standard Cisco virtual WLC images. The images are about 8 GB. I'm not entirely sure what they are running 'under the hood'.
  • Not using the live migration option.
  • Tried leaving the network at default. It won't boot properly, so I don't think this bit matters yet. NICs are e1000.
  • I have tried with the qcow2 format, which seems to be the default, and tried with VMDK. I am assuming it converts the destination format on the fly during the migration?
This is the target format of the disk and has no impact on the migration.
  • The native disk is of course .vmdk on the ESXi side.
  • The SCSI controller option is 'LSI 53C895A'. I am reading up on this now to see if it might be related.
I have no good experience with the emulated LSI controllers.
Since these are Linux or BSD images, it is best to use the VirtIO controller; Linux and BSD already include the driver.
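Switching an imported VM to VirtIO SCSI can be done with qm; the VMID 100 here is an assumption:

```shell
# Use the VirtIO SCSI controller for the whole VM
qm set 100 --scsihw virtio-scsi-pci
```

Note that the guest needs the virtio drivers available at boot; as mentioned above, Linux and BSD already include them.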

Before you spend too much time troubleshooting, simply use the tried-and-tested method:
- migrate the vSphere VM to an NFS share (Storage vMotion)
- mount the same NFS share on PVE
- create a PVE VM and attach the VMDK to the new VM
- shut down on vSphere
- power on on PVE
https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Attach_Disk_.26_Move_Disk_.28minimal_downtime.29
 
