New Import Wizard Available for Migrating VMware ESXi Based Virtual Machines

Just missed the VLAN field in addition to the bridge configuration field. I can't try live migration without this, I think.
That sounded sensible, added in pve-manager version 8.1.10 that just got uploaded.


Note that there are also some other updates that just got pushed out, most importantly some bug fixes for the UI (e.g., changing things after having selected the "Resulting Config" tab once was rather broken), the option to override the port used to connect to ESXi, and an attempt to use HTTP/2 to reduce the number of streams; as only ESXi 8 seems to support HTTP/2, only that version benefits from the latter.
We're still looking into further improvements regarding the sometimes low rate limiting of ESXi, especially for older ESXi versions. FWIW, for ESXi 7.0 there is a Config.HostAgent.vmacore.soap.maxSessionCount option (under Host -> Manage -> System -> Advanced Settings) that one could increase as a workaround.
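For those doing this over SSH instead of the host UI, a rough sketch (the slash-path mapping of that dotted option name is an assumption on my part, and as reported further down in this thread, 6.x builds may not expose the option at all):

Code:
# check the current value (the dotted setting name maps to a slash-separated path)
esxcli system settings advanced list -o /Config/HostAgent/vmacore/soap/maxSessionCount
# raise the limit, or set 0 to disable it
esxcli system settings advanced set -o /Config/HostAgent/vmacore/soap/maxSessionCount -i 0

If the change does not seem to take effect, restarting hostd (/etc/init.d/hostd restart) on the ESXi host may help, at the cost of a brief management disconnect.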

After the update you currently need to disable the storage, wait a few seconds and enable it again to ensure that the new code runs.
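If you prefer doing that from the PVE shell, something like this should work (the storage ID "esxi-import" is just a placeholder for whatever you named the ESXi storage; see /etc/pve/storage.cfg):

Code:
# disable the ESXi import storage, wait a moment, then re-enable it
pvesm set esxi-import --disable 1
sleep 5
pvesm set esxi-import --disable 0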
 
I have tried twice and both times got rate limited and failed after about 5 GB. Is there any way to slow it down so it won't trigger vSphere's limit, or to disable the limit entirely on the vSphere side? I'm running VMware ESXi 6.7.0, build 17700523.

From a standalone Proxmox server:

Code:
transferred 0.0 B of 128.0 KiB (0.00%)
transferred 128.0 KiB of 128.0 KiB (100.00%)
transferred 128.0 KiB of 128.0 KiB (100.00%)
efidisk0: successfully created disk 'local-fast:vm-106-disk-0,size=1M'
create full clone of drive (vSphere:ha-datacenter/SSD2/Kali 2024/Kali 2024.vmdk)
transferred 0.0 B of 120.0 GiB (0.00%)
transferred 1.2 GiB of 120.0 GiB (1.00%)
transferred 2.4 GiB of 120.0 GiB (2.00%)
transferred 3.6 GiB of 120.0 GiB (3.00%)
transferred 4.8 GiB of 120.0 GiB (4.00%)
qemu-img: error while reading at byte 5402262528: Function not implemented
TASK ERROR: unable to create VM 106 - cannot import from 'vSphere:ha-datacenter/SSD2/Kali 2024/Kali 2024.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -t none -f vmdk -O raw '/run/pve/import/esxi/vSphere/mnt/ha-datacenter/SSD2/Kali 2024/Kali 2024.vmdk' zeroinit:/dev/zvol/local-fast/vm-106-disk-1' failed: exit code 1

From a Proxmox server running inside the vSphere I am targeting:
Code:
transferred 0.0 B of 128.0 KiB (0.00%)
transferred 128.0 KiB of 128.0 KiB (100.00%)
transferred 128.0 KiB of 128.0 KiB (100.00%)
efidisk0: successfully created disk 'local-lvm:vm-100-disk-0,size=4M'
create full clone of drive (vSphere:ha-datacenter/SSD2/Kali 2024/Kali 2024.vmdk)
  Logical volume "vm-100-disk-1" created.
transferred 0.0 B of 120.0 GiB (0.00%)
transferred 1.2 GiB of 120.0 GiB (1.00%)
transferred 2.4 GiB of 120.0 GiB (2.00%)
transferred 3.6 GiB of 120.0 GiB (3.00%)
transferred 4.8 GiB of 120.0 GiB (4.00%)
qemu-img: error while reading at byte 5570034688: Input/output error
  Logical volume "vm-100-disk-0" successfully removed.
  Logical volume "vm-100-disk-1" successfully removed.
TASK ERROR: unable to create VM 100 - cannot import from 'vSphere:ha-datacenter/SSD2/Kali 2024/Kali 2024.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O raw '/run/pve/import/esxi/vSphere/mnt/ha-datacenter/SSD2/Kali 2024/Kali 2024.vmdk' zeroinit:/dev/pve/vm-100-disk-1' failed: exit code 1
 
Thanks! Setting "Config.HostAgent.vmacore.soap.maxSessionCount" on the ESXi-Host to "0" solved the issue for me. ESXi 7.0.3.
 
Bingo! This fixed it for me too, also on ESXi 7.0.3. Interestingly, I looked for that setting on one of my ESXi 8 hosts and it does not exist.
Log directly onto the ESXi host | Manage | Advanced - changed from a default of 500 to 0.
 
Not seeing Config.HostAgent.vmacore.soap.maxSessionCount on 6.7 Update 2 either. But if that 500 is in seconds (8.3 minutes) then that is exactly how long mine runs until it times out.

Mar 28 15:55:22 - Mar 28 16:03:14  node-a  root@pam  VM 100 - Create  Error: unable to create VM 1...

Added: I don't believe that is a seconds counter.
 
Did a test import of a Win10 VM and had no major issues. I did have to change the network from vmxnet3 to VirtIO to get networking (after installing the drivers).

Way quicker than exporting/importing the vmdk and using the CLI to convert.
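For reference, the manual route this replaces looked roughly like this (VMID, paths and storage name are placeholders):

Code:
# old workflow, sketched: export the .vmdk, copy it to the PVE host, then attach it
qm importdisk 106 /tmp/export/win10.vmdk local-lvm --format raw
# or convert it by hand first with qemu-img
qemu-img convert -p -f vmdk -O qcow2 /tmp/export/win10.vmdk /var/lib/vz/images/106/vm-106-disk-0.qcow2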
 
Same on 6.5.0, no such option found.
 
Same here on VMware ESXi 6.7.0. I've tried for hours with no import getting further than 15.4 GB before failing.

Code:
restore-scsi0: transferred 15.4 GiB of 60.0 GiB (25.68%) in 3m 54s
restore-scsi0: transferred 15.4 GiB of 60.0 GiB (25.68%) in 3m 55s
restore-scsi0: transferred 15.4 GiB of 60.0 GiB (25.68%) in 3m 56s
restore-scsi0: transferred 15.4 GiB of 60.0 GiB (25.68%) in 3m 57s
restore-scsi0: stream-job finished
restore-drive jobs finished successfully, removing all tracking block devices
An error occurred during live-restore: VM 7724 qmp command 'blockdev-del' failed - Node 'drive-scsi0-restore' is busy: node is used as backing hd of '#block293'

TASK ERROR: live-restore failed

When trying to migrate without live-restore I get this error:
Code:
transferred 11.4 GiB of 60.0 GiB (19.05%)
transferred 12.0 GiB of 60.0 GiB (20.05%)
transferred 12.6 GiB of 60.0 GiB (21.05%)
qemu-img: error while reading at byte 13723759616: Function not implemented
TASK ERROR: unable to create VM 7724 - cannot import from 'esxi-import01:ha-datacenter/ESXI2/testvm/testvm.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O qcow2 /run/pve/import/esxi/esxi-import01/mnt/ha-datacenter/ESXI2/testvm/testvm.vmdk zeroinit:/mnt/pve/ssd-vms/images/7724/vm-7724-disk-0.qcow2' failed: exit code 1


Most of the time I can't even access the storage, as it shows "vim.fault.HostConnectFault) { (500)".

This was tested on a stable 10 Gbps network. I've tried changing the "Config.HostAgent.vmacore.soap.maxSessionCount" parameter on ESXi, but it seems like 6.7.0 doesn't have it. Any workaround?
 
OK, new update on this: it seems like a migration from ESX (Intel) to Proxmox (AMD) produces trashy results. I tried to migrate some VMs from Intel ESX to Intel Proxmox and every one of them worked fine. I don't know why one of the Intel ESX VMs worked fine on the AMD Proxmox. Crazy things going on here :D
I have migrated in my test environment from Intel to AMD without issues.
 
Thank you u/t.lamprecht!

Setting Config.HostAgent.vmacore.soap.maxSessionCount to 50000 from the default of 500 in 7.0 U3 fixed it for me. Since I am decommissioning all of these ESXi servers and retasking them as part of a PVE cluster, I don't care if that's overloading the API. I just want a point-and-click method of migrating a bunch of Linux VMs.
 
I believe I have the fix for the 6.5/6.7 users. I am now able to successfully import on 6.7.
SSH into your ESXi server and run this:

Code:
sed -i 's,</soap>,  <maxSessionCount>0</maxSessionCount>\n    </soap>,' /etc/vmware/hostd/config.xml && /etc/init.d/hostd restart

You can check the hostd config file before and after with this:
Code:
grep -wns '<soap>' /etc/vmware/hostd/config.xml -A 4

Added: It might also need an entry in /etc/vmware/vpxa/vpxa.cfg. Still testing.
 
Do you mind pasting your soap entries from /etc/vmware/hostd/config.xml? My 6.7.0 server doesn't have any soap XML entries in it. Your grep returned nothing on my box.
 
I have two settings I am testing now. After a reboot, the hostd file change alone was not enough. I will post again after another reboot and import test in about 30 minutes.

Added: Here is what I am testing

Code:
[root@localhost:~] grep -wns '<soap>' /etc/vmware/hostd/config.xml -A 4
323:    <soap>
324-      <sessionTimeout>0</sessionTimeout>
325-      <maxSessionCount>0</maxSessionCount>
326-    </soap>
327-    <ssl>
[root@localhost:~] grep -wns '<soap>' /etc/vmware/vpxa/vpxa.cfg -A 4
51:    <soap>
52-      <sessionTimeout>1440</sessionTimeout>
53-      <maxSessionCount>6000</maxSessionCount>
54-    </soap>
55-    <ssl>
[root@localhost:~]

[root@localhost:~] /etc/init.d/hostd restart
watchdog-hostd: Terminating watchdog process with PID 2098535
hostd stopped.
hostd started.
[root@localhost:~] /etc/init.d/vpxa restart
watchdog-vpxa: Terminating watchdog process with PID 2099056
vpxa stopped.
vpxa started.
[root@localhost:~]


Added #2: With both of these files adjusted like above imports are working on ESXi 6.7. More testing continues.

Added #3: /etc/vmware/vpxa/vpxa.cfg with <maxSessionCount>0</maxSessionCount> still fails for me, just at a different percentage. So the above with 0 in hostd config.xml and 6000 in vpxa.cfg is what is working best for me at this time. I turn this over to the community for further testing.
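If you want to script the vpxa.cfg part the same way as the hostd one-liner earlier in the thread, a rough sketch (this assumes the <soap> block does not already contain a maxSessionCount entry; check with the grep first and edit the existing value instead if it does):

Code:
# check whether <soap> already carries a maxSessionCount entry
grep -wns '<soap>' /etc/vmware/vpxa/vpxa.cfg -A 4
# if not, make the file writable, insert the entry before the closing tag, and restart vpxa
chmod 666 /etc/vmware/vpxa/vpxa.cfg
sed -i 's,</soap>,  <maxSessionCount>6000</maxSessionCount>\n    </soap>,' /etc/vmware/vpxa/vpxa.cfg
/etc/init.d/vpxa restart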
 
Attempting to do a live migration, and I get the following error after clicking the Import button on the Resulting Config screen:

Parameter verification failed. (400)

archive: missing property - 'live-restore' requires this property


Did an apt-get upgrade on all nodes (apt-get update alone wasn't enough) and rebooted all nodes, which got me further. Will give more comments later.
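For anyone hitting the same thing, a minimal sketch of pulling the new packages onto each node (Proxmox generally wants a full/dist upgrade rather than a plain upgrade, which can hold new packages back):

Code:
apt update
apt full-upgrade    # or: apt dist-upgrade; a plain 'apt upgrade' may hold back new dependencies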
 

Confirmed working with the hostd/vpxa changes above.

Tested 3 x Linux VM moves from ESXi: 6.7.0 Update 3 (Build 19195723) to Proxmox VE 8.1.8.

Many thanks mram!!
 
I got a similar issue, using PVE 8.1.10:

Code:
transferred 0.0 B of 128.0 KiB (0.00%)
transferred 128.0 KiB of 128.0 KiB (100.00%)
transferred 128.0 KiB of 128.0 KiB (100.00%)
efidisk0: successfully created disk 'datastore:vm-101-disk-0,size=1M'
create full clone of drive (esx68:ha-datacenter/vnx_dstore_31_bastion_perf/HAGENBERG/HAGENBERG_1.vmdk)
transferred 0.0 B of 80.0 GiB (0.00%)
transferred 819.2 MiB of 80.0 GiB (1.00%)
transferred 1.6 GiB of 80.0 GiB (2.00%)
transferred 2.4 GiB of 80.0 GiB (3.00%)
transferred 3.2 GiB of 80.0 GiB (4.00%)
transferred 4.0 GiB of 80.0 GiB (5.00%)
transferred 4.8 GiB of 80.0 GiB (6.01%)
transferred 5.6 GiB of 80.0 GiB (7.01%)
transferred 6.4 GiB of 80.0 GiB (8.01%)
transferred 7.2 GiB of 80.0 GiB (9.01%)
transferred 8.0 GiB of 80.0 GiB (10.01%)
transferred 8.8 GiB of 80.0 GiB (11.01%)
transferred 9.6 GiB of 80.0 GiB (12.01%)
transferred 10.4 GiB of 80.0 GiB (13.01%)
transferred 11.2 GiB of 80.0 GiB (14.01%)
transferred 12.0 GiB of 80.0 GiB (15.01%)
transferred 12.8 GiB of 80.0 GiB (16.02%)
transferred 13.6 GiB of 80.0 GiB (17.02%)
qemu-img: error while reading at byte 15133045248: Input/output error
TASK ERROR: unable to create VM 101 - cannot import from 'esx68:ha-datacenter/vnx_dstore_31_bastion_perf/HAGENBERG/HAGENBERG_1.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -t none -f vmdk -O raw /run/pve/import/esxi/esx68/mnt/ha-datacenter/vnx_dstore_31_bastion_perf/HAGENBERG/HAGENBERG_1.vmdk zeroinit:/dev/zvol/datastore/vm-101-disk-1' failed: exit code 1
 

I can also confirm the hostd/vpxa changes above work with ESXi 6.7.0 U3, many thanks!

Had some trouble editing /etc/vmware/vpxa/vpxa.cfg because it was read-only.
Solved it with:
Code:
chmod 666 /etc/vmware/vpxa/vpxa.cfg
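If you go that route, it may be worth noting the original mode and restoring it afterwards; a sketch, where the 444 at the end is an assumption about the original mode:

Code:
ls -l /etc/vmware/vpxa/vpxa.cfg      # note the current (read-only) mode first
chmod 666 /etc/vmware/vpxa/vpxa.cfg
vi /etc/vmware/vpxa/vpxa.cfg         # make the maxSessionCount change
chmod 444 /etc/vmware/vpxa/vpxa.cfg  # restore read-only (assuming it was 444 to begin with)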
 

The config.xml/vpxa.cfg changes above looked promising, but I still get the rate-limit error on ESXi 6.7 U2 with pve-esxi-import-tools 0.6.0.
 
This is an absolutely fantastic feature. I just imported a Win2019 VM and the transfer rate was 800 Mbit/s vs 120 Mbit/s with ovftool or the VMware GUI export, both of which have built-in rate throttling.

I am not yet convinced that the on-the-fly exchange of the VirtIO disk driver can be made to work reliably, but it did work on my first test migration.
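If the automatic switch does not work out, the usual manual fallback (not part of the wizard, just the common trick; VMID and storage are placeholders) is to let Windows load the VirtIO driver from a throwaway disk before moving the system disk over:

Code:
# attach a temporary 1 GiB disk; assumes the VM uses the VirtIO SCSI controller,
# otherwise use --virtio1 local-lvm:1 instead
qm set 101 --scsi1 local-lvm:1
# boot Windows once so it installs the driver, shut down, then detach the helper disk
qm set 101 --delete scsi1

The detached volume stays around as an unused disk and can be removed from the VM's Hardware tab afterwards.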

Congratulations to the programmers, this is a huge step forward!!
 
Hi,

Can someone help me?

There is no ESXi option :(
I updated the node to the latest version, 8.1.10, and checked that the tool was installed:
"apt install pve-esxi-import-tools" ---> "pve-esxi-import-tools is already the newest version (0.6.0)".

But when I look under Storage and click the Add button, there is no ESXi option.
Did I miss anything?

Edit: I also did the steps on my cluster; same thing, no ESXi option.
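Two things worth checking, just as a sketch of the usual suspects rather than a definitive answer: the wizard's GUI bits ship with pve-manager, so confirm that package really is at 8.1.10 on the node serving the web UI, and force-reload the browser afterwards so the new JavaScript gets picked up.

Code:
pveversion -v | grep pve-manager
dpkg -l pve-esxi-import-tools
# if pve-manager shows >= 8.1.10, force-reload the web UI (Ctrl+Shift+R) and retry Datacenter -> Storage -> Add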
 