Feedback on esxi-import tool

jsterr

Hello Proxmox,

here's my feedback on the new tool (thanks for working on it):

Code:
scsi0: successfully created disk 'vm_nvme:vm-107-disk-0,size=16G'
kvm: -drive file.filename=rbd:vm_nvme/vm-107-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/vm_nvme.keyring,if=none,id=drive-scsi0,format=alloc-track,file.driver=rbd,cache=none,aio=io_uring,file.detect-zeroes=on,backing=drive-scsi0-restore,auto-remove=on: warning: RBD options encoded in the filename as keyvalue pairs is deprecated
restore-scsi0: transferred 0.0 B of 16.0 GiB (0.00%) in 0s
restore-scsi0: stream-job finished
restore-drive jobs finished successfully, removing all tracking block devices
An error occurred during live-restore: VM 107 qmp command 'blockdev-del' failed - Node 'drive-scsi0-restore' is busy: node is used as backing hd of '#block289'

TASK ERROR: live-restore failed

What does this mean? I used the defaults plus the live-restore checkbox. The VM is running on ESXi. It seems the error does not happen when the VM on the ESXi side is shut down. Is this how it should be? If yes, then I might be misunderstanding "live-restore"? I read Thomas' explanation in the news post, but I still get the error above.

An offline migration is currently running right now:

Code:
transferred 15.2 GiB of 16.0 GiB (95.09%)
transferred 15.4 GiB of 16.0 GiB (96.09%)
transferred 15.5 GiB of 16.0 GiB (97.09%)
transferred 15.7 GiB of 16.0 GiB (98.10%)
transferred 15.9 GiB of 16.0 GiB (99.10%)
transferred 16.0 GiB of 16.0 GiB (100.00%)
transferred 16.0 GiB of 16.0 GiB (100.00%)
scsi0: successfully created disk 'vm_nvme:vm-107-disk-0,size=16G'
TASK OK

Thanks, Jonas
 
Quoting @t.lamprecht from the other thread:
Note that you can live-import VMs. This means that you can stop the VM on the ESXi source and then immediately start it on the Proxmox VE target with the disk data required for booting then being fetched on demand. The feature works similarly to live-restore that we provide for PBS backups since a while.

So yes, the source VM needs to be powered down but will be powered on as soon as the import is started. :)
 

Ah ok, I did not see this, as the qm lock icon appeared on the VM (which is correct), but I didn't notice that it was automatically powered on.
So what's the use case, will it automatically fetch all data that is requested, no matter what? In production environments, what would happen to a database server that I want to "live-migrate"? Will only the base OS be loaded (and if yes, what's the use case for that)?

Sorry for the questions, but I have also always wondered about this with PBS (live restore).

Edit: That's how it looks on my side (Debian 12 test VM, 16 GB HDD), RAM and live-restore:

(screenshot attached)

Edit 2: I did a reboot of the VM after that (while still importing), and now I'm at a prompt:

(screenshot attached)
 
I'm logged in, but there's no network, etc. What's the use case for the live-restore? I would really like to know! Thanks!
(still importing)
 
Please check out the guide and especially the post-migration steps: https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Post_Migration

You should be able to adapt these settings and issue reboots from within the guest while the import is ongoing. Sometimes it is also necessary to adapt the UEFI settings, if the guest OS did not place the bootloader in the default path. All those topics should be covered and linked in the migration guide and in the warnings that might be shown in the import window.

If you have a VM with huge disks, this is an option to keep the downtime as low as possible. Otherwise, in an offline import, you need to wait until the disks are fully transferred before you can power the VM up again, which might take many hours for large VMs.
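To put a rough number on that: the disk size and usable bandwidth below are just assumptions for illustration, not measurements from this thread.

Code:
# Back-of-the-envelope downtime estimate for an offline import: the VM stays
# powered off until the whole disk has been copied. All numbers are assumptions.
disk_size_gib = 2048                      # e.g. a 2 TiB disk
usable_gbit_per_s = 1                     # a single 1 Gbit link, ignoring overhead
gib_per_s = usable_gbit_per_s / 8         # ~0.125 GiB/s

hours = disk_size_gib / gib_per_s / 3600
print(f"~{hours:.1f} hours of downtime")  # ~4.6 hours for this example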

So what's the use case, will it automatically fetch all data that is requested, no matter what? In production environments, what would happen to a database server that I want to "live-migrate"? Will only the base OS be loaded (and if yes, what's the use case for that)?

As with the live restore from a PBS, the VM is powered on and QEMU transfers data from the source disk image (now the FUSE file system instead of a mapped backup from PBS) to the target storage. New write operations go directly to the new target disk image. All disk images are imported at the same time (check the task log) on a live import.
Depending on the load the VM and its services produce, you might see slower operation, especially at the beginning, when most of the data the VM still wants needs to be fetched from the source.
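For anyone wondering how that behaves conceptually, here is a minimal sketch of the on-demand data flow at a block level. It is a simplified model for illustration only, not Proxmox's actual alloc-track implementation, and the class and method names are made up:

Code:
# Simplified model of the live-import data flow (illustration only, not
# Proxmox's real implementation; names are made up).
class LiveImportDisk:
    def __init__(self, source_read, num_blocks):
        self.source_read = source_read          # callable that fetches one block from the ESXi side
        self.local = {}                         # blocks already present on the target storage
        self.pending = set(range(num_blocks))   # blocks the background stream job still has to copy

    def read(self, block):
        if block not in self.local:             # not copied yet -> fetched on demand (slow at first)
            self.local[block] = self.source_read(block)
            self.pending.discard(block)
        return self.local[block]

    def write(self, block, data):
        self.local[block] = data                # new writes go directly to the target image only
        self.pending.discard(block)             # ...so the stream job must not overwrite them later

    def background_copy_step(self):
        if self.pending:                        # the stream job copying the remaining blocks
            block = self.pending.pop()
            self.local[block] = self.source_read(block)

# If the import aborts, only the source image stays intact; everything written
# to the target since the start (self.local here) would be lost.
disk = LiveImportDisk(source_read=lambda b: b"\0" * 512, num_blocks=4)
disk.write(0, b"new data")                      # guest write lands on the target
disk.read(1)                                    # guest read triggers an on-demand fetch
while disk.pending:
    disk.background_copy_step()                 # background job finishes the import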

Also keep in mind the warning: if the import aborts for some reason (an unstable network, for example), the newly written data will be lost. Hence the note in the migration guide:
Note: should the import fail, all data written since the start of the import will be lost. That's why we recommend testing this mechanism on a test VM, and to avoid using it in networks with low bandwidth and/or high error rates.
 
I see that during a live import, Windows VM startup is really, really slow (after 16 GiB of 80 GiB total, I can only ping the VM, but no login screen is present, only the boot circle spinning).
 
You need a high-bandwidth network between the ESXi and the PVE host. Live import is also risky to use, as an import failure might lead to loss of the data newly written while importing.
 
I have 4x 1 Gbit ports (both on the VMware side and on a single PVE host). I know that it is risky, but the data doesn't change during use (it is only an application server).
 
That usually means each TCP connection can only use up to 1 Gbit. Depending on how many connections the live import will use, and whether connections between the two hosts will use different links (LACP hashing doesn't always include the source and/or destination port), you might only be getting 1 Gbit speeds between VMware and PVE in a setup like that.
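A small sketch of why a single flow is pinned to one link under LACP-style hashing; the hash function and field selection here are simplified stand-ins, not the exact kernel bonding algorithm:

Code:
# Simplified illustration of LACP-style transmit hashing (layer3+4 policy):
# every packet of a given TCP connection hashes to the same value, so one
# connection is pinned to a single 1 Gbit member of the 4-port bond.
import hashlib

def pick_link(src_ip, dst_ip, src_port, dst_port, num_links=4):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()       # stand-in for the real bonding hash
    return int.from_bytes(digest[:4], "big") % num_links

# The same import connection always lands on the same member link:
print(pick_link("192.0.2.10", "192.0.2.20", 43210, 443))
# A layer2 or layer2+3 policy ignores the ports entirely, so *all* traffic
# between the same two hosts would share one link, no matter how many
# connections are open.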
 
