Thanks for the reply. However, this is not a problem for the respective guest OS.
This is the target format of the disk and has no impact on the migration.
I have not had good experiences with the emulated LSI controllers.
Since these are Linux or BSD images, it is best to use the VirtIO controller. Linux and BSD already include the driver.
Before you spend too much time troubleshooting, simply use the tried and tested method (there is a rough sketch of the PVE-side commands after the wiki link below):
- Migrate the vSphere VM to an NFS share (Storage vMotion)
- Mount the same NFS share on PVE
- Create a PVE VM and attach the VMDK to the new VM
- Shut the VM down on vSphere
- Power it on on PVE
https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Attach_Disk_.26_Move_Disk_.28minimal_downtime.29
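In case it helps, here is a rough sketch of what the PVE-side steps could look like, wrapped in a small Python script that just calls the stock pvesm/qm tools. The NFS server, export path, storage name, VM ID and disk filename are all placeholders for your own values, and the exact volume ID for the VMDK depends on where the file ends up on the share, so treat this as a starting point rather than something to paste blindly:

```python
#!/usr/bin/env python3
"""Sketch of the PVE side of the attach-disk migration.
All names, IDs and paths below are placeholders -- adapt them to your setup."""
import subprocess

NFS_SERVER = "192.168.1.50"          # placeholder: NFS server that vSphere migrated the VM to
NFS_EXPORT = "/export/vm-migration"  # placeholder: export path on that server
STORAGE_ID = "migration-nfs"         # name this storage will get inside PVE
VMID = "120"                         # placeholder VM ID for the new PVE VM

def run(cmd):
    """Print and execute a command, aborting if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Mount the NFS share that already holds the vSphere VM's files as PVE storage.
run(["pvesm", "add", "nfs", STORAGE_ID,
     "--server", NFS_SERVER,
     "--export", NFS_EXPORT,
     "--content", "images"])

# 2. Create the empty "shell" VM with a VirtIO SCSI controller and a VirtIO NIC.
run(["qm", "create", VMID,
     "--name", "migrated-linux-vm",
     "--memory", "4096",
     "--cores", "2",
     "--net0", "virtio,bridge=vmbr0",
     "--scsihw", "virtio-scsi-pci"])

# 3. Attach the existing VMDK and make it the boot disk. This assumes the file
#    was moved/renamed to match PVE's <storage>:<vmid>/<filename> layout on the
#    share (i.e. it sits under images/120/ in this example).
run(["qm", "set", VMID, "--scsi0", f"{STORAGE_ID}:{VMID}/vm-{VMID}-disk-0.vmdk"])
run(["qm", "set", VMID, "--boot", "order=scsi0"])

# 4. Shut the VM down on vSphere first, then start it on PVE:
# run(["qm", "start", VMID])
```

If qm set refuses the volume ID, running qm disk rescan --vmid 120 (qm rescan on older releases) should pick the VMDK up as an unused disk that you can then attach from the GUI, and once the VM boots on PVE you can use Move Disk to get it off the NFS share onto local storage.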
The Linux VMs move across just fine; this isn't a networking issue. And I don't understand why I need to make NFS shares/partitions. However, as I type this I think I get what you are driving at: make a 'shell' of the VM in PVE, move the disk across with NFS, and map the newly moved ESXi disk to the new PVE VM?
Even if this did work, it's a lot of effort to move that many VMs across. What the devs are trying to do is awesome, but it clearly still needs some work.
And thanks again for the reply. Appreciate the effort, and it certainly helps new users.
EDIT - Also, when you say "Since these are Linux or BSD images, it is best to use the VirtIO controller. Linux and BSD already include the driver." - this is the default controller in all four installs of PVE I have. Why would it be the default if it causes headaches?