Migrate VM from vSphere to PVE

marchidaniele

Member
Feb 1, 2023
Hi all,
I am migrating a VMware vSphere environment to PVE (latest version) with 3 nodes and an ME4024 as iSCSI storage with multipath.
I have some problems with several Ubuntu machines.
If I try to import them with the import tool from the GUI, the VM is imported correctly, but on startup it crashes like this:

(screenshot of the boot crash)
dmesg:
(screenshot of the dmesg output)


The VM is an Ubuntu 20.04 with the qemu-guest-agent package installed and a standard ext4 filesystem. From VMware it works without problems (obviously). I have also tried copying the VM's vmdk disk to my laptop running Debian 12, converting it to qcow2 (qemu-img convert -p -f vmdk -O qcow2 disk.vmdk disk.qcow2) and using it in a machine under virt-manager, and it boots correctly. But on PVE it doesn't boot.
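(For completeness, the CLI equivalent of the import would be roughly this; just a sketch, with VM ID 128 and the storage name big-LVM from my setup:)
Code:
# convert the VMware disk to qcow2 (done on my Debian laptop)
qemu-img convert -p -f vmdk -O qcow2 disk.vmdk disk.qcow2

# import the converted disk into an existing PVE VM (ID 128) on the iSCSI-backed LVM storage
qm importdisk 128 disk.qcow2 big-LVM

# the disk shows up as "unused0"; attach it as scsi0 and make it bootable
qm set 128 --scsi0 big-LVM:vm-128-disk-0
qm set 128 --boot order=scsi0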

What could it be?
 
What could it be?
I assume the virtual disk controller. Ubuntu, as well as RHEL, uses a trimmed-down boot image in which unnecessary drivers have been stripped, and maybe the driver for the controller was stripped as well. This isn't new, and others have reported that changing the disk controller helps.

Please post the config of the VM and/or change the disk controller type and maybe the disk type too.
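A quick way to check whether the virtio drivers are actually missing from the initramfs (a sketch, run inside the guest or from a rescue shell; the path is the Ubuntu default):
Code:
# list the modules bundled in the current initramfs and look for the virtio drivers
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i virtio
# you would expect entries like virtio_scsi, virtio_blk and virtio_pci here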
 
I tried all kinds of controllers (default, VirtIO SCSI, VirtIO SCSI single, etc.) and all kinds of interfaces (IDE, SATA, SCSI, ...), always with the same result.

The VM's config with IDE and the VirtIO SCSI controller:
Code:
boot:
cores: 2
cpu: host
ide2: none,media=cdrom
memory: 4096
meta: creation-qemu=9.2.0,ctime=1752487213
name: xxxxxxxxx
net0: virtio=BC:24:11:B7:58:F9,bridge=vmbr2
numa: 0
ostype: l26
scsi0: big-LVM:vm-128-disk-0,size=20G
scsihw: virtio-scsi-pci
smbios1: uuid=f024e549-1bd0-4050-9364-f945324dfa08
sockets: 1
vmgenid: 922ec9aa-9899-4aeb-a406-765a0578ba5c
 
I tried all kinds of controllers (default, VirtIO SCSI, VirtIO SCSI single, etc.) and all kinds of interfaces (IDE, SATA, SCSI, ...), always with the same result.
Ah, good to know, yet you haven't written that you checked with the VMware default SCSI controller, have you?

Try booting from a recovery medium (e.g. an install ISO) and regenerating the initramfs there; that should work.
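A rough outline of those steps from an Ubuntu live/install ISO (just a sketch; I'm assuming the root filesystem is on /dev/sda1, adjust to whatever lsblk shows):
Code:
# mount the VM's root filesystem and chroot into it
mount /dev/sda1 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt

# rebuild the initramfs for all installed kernels and update grub
update-initramfs -u -k all
update-grub
exit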
 
It looks like the initramfs doesn't have the virtio-scsi module. (I have seen this recently with some AlmaLinux VMs coming from VMware too.)

Maybe check this doc, it should work on Proxmox too: https://documentation.commvault.com..._vms_for_conversion_to_alibaba_cloud_ecs.html
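I haven't checked that doc against Ubuntu step by step, but the Debian/Ubuntu way to force the virtio modules into the initramfs would be roughly this (a sketch):
Code:
# make sure the virtio drivers are always included in the initramfs
cat >> /etc/initramfs-tools/modules << 'EOF'
virtio_pci
virtio_scsi
virtio_blk
virtio_net
EOF

# rebuild the initramfs for all installed kernels
update-initramfs -u -k all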

I tried following the doc and updating the initramfs. I reloaded the image, imported it into the virtual machine, and tried both the VirtIO and default LSI controllers, but the system won't boot and always drops to BusyBox.
As storage I am using LVM over iSCSI. Is there any limitation?
 
I did another test:
if I reimport the disk and use local-lvm as the VM's storage ... it boots and starts!!!
Obviously I cannot keep machines with this problem on local-lvm.

How is this possible?
 
The fact is that if I don't solve this problem I'll have to cancel the migration project, especially since I'm just starting out and have a lot of Ubuntu VMs to migrate.
 
I did another test:
if I reimport the disk and use local-lvm as the VM's storage ... it boots and starts!!!
Obviously I cannot keep machines with this problem on local-lvm.

How is this possible?
It is not: the storage used on the PVE host doesn't matter at the VM OS level. And if it did, it would be the first time I've seen something like that... You probably did something different with that VM besides storing it on local-lvm.

If that VM boots correctly, move it to your iSCSI storage now: shut it down, go to the VM's Hardware tab, click on the disk(s) -> Disk Actions -> Move Storage.
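Or from the CLI, something like this (assuming VM 128 and the disk/storage names from your config; --delete 1 removes the source volume once the move succeeds):
Code:
qm move-disk 128 scsi0 big-LVM --delete 1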

As mentioned by others, this is typically caused by the initramfs not having the proper driver for the disk controller, so the kernel can't find the disks to mount root. It happened to me with RedHat and derivatives (Alma, Oracle), but never with Debian/Ubuntu. When it happened, booting into a recovery environment allowed me to check that the kernel does in fact see the disks, and updating the initramfs did the trick:
Code:
dracut --regenerate-all -f && grub2-mkconfig -o /boot/grub2/grub.cfg
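(That's the RHEL-family command; on Ubuntu the same step would presumably be:)
Code:
update-initramfs -u -k all && update-grub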

scsi0: big-LVM:vm-128-disk-0,size=20G
You've probably tried this already, but besides changing the VM disk controller model, you have to connect the disk to it if you want to use it. In this example, the VM has a virtio-scsi-pci controller and the disk is connected to it. Try detaching the disk, editing it, and connecting it to the IDE bus, as any kernel should have support for that.
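For example via the CLI, a sketch with your VM ID and volume name (double-check them against your actual config):
Code:
# detach the disk from the VirtIO SCSI controller (it becomes "unused0")
qm set 128 --delete scsi0

# reattach the same volume on the IDE bus and boot from it
qm set 128 --ide0 big-LVM:vm-128-disk-0
qm set 128 --boot order=ide0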
 
Sorry man, I see an MDADM configuration; that will not work without your data on it.
You must first change the /etc/fstab config for the data space and then..
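If that's what is happening, a purely illustrative /etc/fstab example (the /dev/md0 device and mount point are made up):
Code:
# add "nofail" (or comment the line out) so a missing MD array
# doesn't stop the boot and drop you into the busybox shell
/dev/md0  /dataspace  ext4  defaults,nofail  0  2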
 
Hi there,

Did you manage to get this issue sorted out in the end? If you’re still having trouble, you might consider trying Vinchin’s migration solution—our tool handles VM migrations from VMware to Proxmox very smoothly. I should mention that I’m part of the Vinchin team, so feel free to reach out if you’d like any help or a trial—no pressure at all if you decide it isn’t the right fit.

Hope you find a good path forward!

Here is our video tutorial: https://www.youtube.com/watch?v=7v21nBySuf0?s=s3j16v81or
Here is our blog on migrating from VMware to Proxmox: https://www.vinchin.com/vm-migration/migrate-vmware-to-proxmox.html?s=mopu2de8r4