Proxmox (VMware vSphere to Proxmox migration)

sachinh

New Member
Oct 7, 2025

Hello,

We are in the process of migrating our existing VMware vSphere VMs to a Proxmox environment. All our VM disks are placed on common storage shared across the ESXi hosts.
As per the documentation, we have already done these steps:

No vCenter is available.

Shut down the VM (SuSE Linux 10).
Converted the .vmdk disk file to a Proxmox-compatible .qcow2 image: qemu-img convert -p -f vmdk -O qcow2 "<Name>.vmdk" "<Name>.qcow2" (see the sketch below).
Attached this .qcow2 image to a newly created VM.
Attached the disk as scsi0.
Set the boot order to scsi0.
Tried to boot.
The GRUB menu is displayed and the VM starts booting, but it fails because it cannot find /dev/sda2, the root disk.
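
For reference, a rough sketch of the convert/import on the Proxmox host; the VM ID (120) and storage name (local-lvm) are placeholders for the real values:

    # Convert the VMware disk to qcow2 (the command from the steps above):
    qemu-img convert -p -f vmdk -O qcow2 "<Name>.vmdk" "<Name>.qcow2"
    # Alternatively, qm importdisk converts and imports in one step;
    # the disk then appears as an unused disk on the VM:
    qm importdisk 120 "<Name>.vmdk" local-lvm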

The following has already been tried:

Tried different disk controllers: LSI 53C895A, VirtIO SCSI, VirtIO SCSI Single.
But none of them seem to make the guest detect sda.
One observation: with LSI 53C895A and the disk added as ide0, the system fails to detect sda, but if I search under ls /dev/ I can see the disk is recognised as hda2, hda3, etc. That's it.
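
For completeness, the kernel's own view of the disk can be checked from the guest's emergency shell with standard commands (a sketch, nothing SLES-specific assumed):

    cat /proc/partitions             # partition names the kernel assigned
    dmesg | grep -i -E 'hd|sd|scsi'  # which driver claimed the disk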

What could be the reason, or how do I resolve this issue with Proxmox? For historical reasons we have to continue running the old SLES10 operating system in our environment.

Update: Other SLES15 systems faced a similar problem, but after adding the disk as ide0 instead of scsi, the system could boot successfully. With an older OS like SLES10, however, this method does not seem to work. I have attached the error I get after booting, and also the .vmx file of the VM from the vSphere environment.

Thanks.
Regards.
 
This sounds like a problem in the guest OS. You will probably need to get into the guest OS and look at how it is configured to figure out exactly what the problem is. There are two likely candidates I can think of. First, the fstab (and the grub config) might specify partitions with hardware-specific naming like /dev/sda2. This is generally not a good idea because changes to the hardware (or in this case, the virtual hardware) could cause what used to be sda2 to become sdb2, or hda2, or vda2, etc. It is likely that this is why changing your SLES15 VM's disk to IDE helped, because with scsi the disk was called /dev/sda and with IDE it was /dev/hda, and that is what the OS was configured to look for.
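
A quick way to confirm this from a rescue shell; the paths below are the SLES 10 defaults, so treat them as assumptions to verify:

    # Which device names did the OS expect at install time?
    grep -v '^#' /etc/fstab
    # SLES 10 uses GRUB legacy; the root= kernel parameter lives here:
    grep 'root=' /boot/grub/menu.lst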

Generally it is rare to find systems configured this way nowadays, but SLES 10 is over 15 years old and can't really be expected to do things the modern way. So if you can get to an emergency shell on that VM, see what it is calling the drives, and then edit the fstab and grub config to match, it should work.
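
As a minimal sketch of that edit, assuming the disk really does show up as /dev/hda (confirm first; the -i.bak keeps backup copies of both files):

    sed -i.bak 's|/dev/sda|/dev/hda|g' /etc/fstab
    sed -i.bak 's|/dev/sda|/dev/hda|g' /boot/grub/menu.lst

GRUB legacy re-reads menu.lst at every boot, so no bootloader reinstall is needed after this change.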

The other possibility is that the guest OS kernel might not have the drivers for your virtual hardware. It would be a bit unusual, but if someone were trying to slim down that VM to use the fewest resources possible, they could have decided to leave out things like the virtio_scsi module when building the kernel. Someone might do this if they were sure those wouldn't be needed (for example, if this VM was built as a VM appliance designed for a specific deployment process that would only ever support ESX). This would be much harder to work around, especially since SLES 10 is long past the vendor's end of life and it will be hard to even find any packages for it. And considering how ancient SLES 10 is, it could be a combination of the two: you might need to use IDE to get virtual hardware that the old kernel has a working driver for, and then modify the fstab and grub config to work with that.
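
If it does turn out to be a missing driver, SLES builds its initrd from the module list in /etc/sysconfig/kernel, so rebuilding it from inside the guest (or chrooted into it from a rescue system) may help. The module names below (piix and ide-disk, for QEMU's emulated IDE controller) are my assumption; check what your kernel actually provides:

    # Which storage modules is the initrd built with?
    grep INITRD_MODULES /etc/sysconfig/kernel
    # Add the needed driver(s) to that line, e.g.:
    #   INITRD_MODULES="piix ide-disk"
    # then regenerate the initrd:
    mkinitrd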

You might need the help of an experienced Linux system administrator (or just an old one :) ). If you do find one to help, be sure to also get them to propose a plan to move whatever work this VM does onto a Linux version that isn't EOL, and a plan for continued updating to remain on supported versions in the future.
 