G'day there,
With the help of the Proxmox community and @Dominic we were able to migrate our ESXi VMs across to PVE - thank you!
https://forum.proxmox.com/threads/migrating-vms-from-esxi-to-pve-ovftool.80655 (for anybody interested)
The migrated (ex-ESXi) VMs are now part of a 3-node PVE cluster, though, it being the New Year holidays, there had to be some trouble!
Strangely, we're unable to move these imported VMs to other nodes in the newly-made PVE cluster. All imported VMs are currently on node #2, as nodes #1 and #3 had to be reclaimed from ESXi, reinstalled with PVE, and joined to the cluster (they had to be carrying zero guests to join).
The VMs are all operational on the PVE node that they were imported to, and boot/reboot without issue.
Our problem is isolated to attempting to migrate them.
The problem we're seeing is:
2020-12-21 00:58:27 starting migration of VM 222 to node 'pve1' (x.y.x.y)
2020-12-21 00:58:27 found local disk 'local-lvm:vm-222-disk-0' (in current VM config)
2020-12-21 00:58:27 copying local disk images
2020-12-21 00:58:27 starting VM 222 on remote node 'pve1'
2020-12-21 00:58:29 [pve1] lvcreate 'pve/vm-222-disk-0' error: Run `lvcreate --help' for more information.
2020-12-21 00:58:29 ERROR: online migrate failure - remote command failed with exit code 255
2020-12-21 00:58:29 aborting phase 2 - cleanup resources
2020-12-21 00:58:29 migrate_cancel
2020-12-21 00:58:30 ERROR: migration finished with problems (duration 00:00:03)
TASK ERROR: migration problems
Error 255 & attempts to migrate to the other host:
Sadly, the task log only captures what appears to be the final line of lvcreate's error output ("Run `lvcreate --help' for more information") rather than a more useful message. Looking through other Proxmox Forum threads, exit code 255 seems to cover a few different situations, so we're unclear on exactly what's gone wrong.
The error flow above is identical if we attempt it with any of the 4x imported VMs, and also if we send them to the other spare host. Does that point to something introduced by the import from ESXi? Whether it's a setting, an incompatibility or something else is unclear.
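In case it helps anyone suggest next steps, these are the sorts of basic checks we can run on the target node - just our guess at what's relevant to lvcreate failing there, nothing conclusive from our side:

# On the target node (pve1): sanity-check the LVM storage that lvcreate would use
vgs            # does the 'pve' volume group exist, and does it have free space?
lvs            # is the thin pool present and not out of data/metadata space?
pvesm status   # is the local-lvm storage active and online on this node?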
The only peculiarity we can find:
All 4x of the imported VMs have a disk attached whose size Proxmox doesn't seem to know. Each VM has only 1x disk, carried over via ovftool from ESXi. I'm not sure whether the missing size is what's causing lvcreate on the target node/s to fail - see the sketch after the examples below.
EXAMPLE - Imported from ESXi to PVE:
Hard Disk (scsi0) - local-lvm:vm-222-disk-0
EXAMPLE - Created on PVE, never migrated:
Hard Disk (scsi0) - local-lvm:vm-106-disk-0,backup=0,size=800G,ssd=1
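If the missing size= attribute really is the problem, our (unverified) guess is that re-reading the disk sizes from storage before migrating might repair the config. qm rescan is a standard PVE command that updates disk sizes in VM configs; whether it resolves this particular case is an assumption on our part:

# On the node currently holding the imported VMs (node #2):
qm config 222          # confirm scsi0 currently has no size= attribute
qm rescan --vmid 222   # rescan storages and update disk sizes in the VM config
qm config 222          # check whether size= now appears on scsi0

If that works for VM 222, we'd repeat it for the other three imported VMs and retry the migration.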
Has anyone here run into this? Based on our experience, we've made some suggestions in the other thread (linked at the top of this post) about the ovftool instructions in the PVE wiki; the ESXi/ovftool part of that page looks to have been added quite recently.
I can add other logs/files/etc. - I'm not overly sure where else to look, as searching the logs for the job ID didn't give us much additional info.
Hopefully someone is kind enough to shed some light on this for us! Thank you so much, and Happy Holidays!
Cheers,
LinuxOz