Gen1 VM migrate from Hyper-V 2019 to ProxmoxVE9

Hammer72

New Member
Feb 4, 2026
Hi

It was recommended to post this question in this group rather than the other group I had it in.

I am having some difficulty migrating a RHEL9 VM from Hyper-V 2019 to Proxmox VE 9. Each time I attempt to power on the migrated VM I get errors that /dev/sda2 is missing.
I have run through the migration steps, but was wondering if there is a well-hidden secret to getting this working.

Steps taken thus far:
Export the Gen1 VM on Hyper-V (VM is powered off and no checkpoints present)
Copy the vhdx file to Proxmox at /var/lib/vz/images
Convert the image with <qemu-img convert -p -f vhdx -O raw Test1_hdisk0.vhdx Test1.raw>
Create a VM shell with ID 512101
Import the raw disk image into the new VM shell with <qm importdisk 512101 Test1.raw VMstore>. This completes successfully.
Log onto the PVE GUI, select the unused disk and edit the device, accepting all the defaults to scsi0
Change the boot order under VM_Name > Options > Boot Order > set scsi0 as the first boot device
Start the VM; by default it gets a kernel exception error. Reboot into recovery mode and run
<dracut -f --add-drivers "virtio virtio_blk virtio_scsi virtio_pci">, followed by
<grub2-mkconfig -o /boot/grub2/grub.cfg>; this gives an error about /dev/sda2 not being found.
If I reboot I get the same issue.
Reboot into recovery mode and edit /etc/fstab to comment out the two file systems, /home and /var, on the sda2 logical device
The system will now boot error free, but in reality I need these file systems to be present.
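For reference, the CLI steps above collected into one script. It only prints each command so they can be reviewed before running; the VMID 512101, storage name VMstore, and file names are taken from this post, and the vm-512101-disk-0 volume name is an assumption about what importdisk typically creates (check the unused-disk entry in the GUI if yours differs).

```shell
#!/bin/sh
# Sketch of the migration steps above; prints each command for review
# instead of executing it. Remove the leading "echo" to run for real
# on the PVE host.
VMID=512101                        # VM ID from this post
STORAGE=VMstore                    # Proxmox storage name from this post
SRC=/var/lib/vz/images/Test1_hdisk0.vhdx
RAW=/var/lib/vz/images/Test1.raw

echo qemu-img convert -p -f vhdx -O raw "$SRC" "$RAW"
echo qm importdisk "$VMID" "$RAW" "$STORAGE"
# importdisk usually names the new volume vm-<VMID>-disk-0 (assumption)
echo qm set "$VMID" --scsi0 "$STORAGE:vm-$VMID-disk-0"
echo qm set "$VMID" --boot order=scsi0
```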


I have verified that this is a single disk on Hyper-V with no checkpoints.
I have also attempted the commands
<dracut -f --add-drivers "virtio virtio_blk virtio_scsi virtio_pci">
and <grub2-mkconfig -o /boot/grub2/grub.cfg> on the VM on Hyper-V before doing the export, hoping to bring all the required virtio drivers along with the export.
Each time I get the same symptom, where both /home and /var cannot be found.

Output of lsblk from Hyper-V VM before migration:
<lsblk>
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 32G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 27G 0 part
├─rhel-root 253:0 0 15G 0 lvm /
├─rhel-swap 253:1 0 2G 0 lvm [SWAP]
├─rhel-var 253:2 0 5G 0 lvm /var
└─rhel-home 253:3 0 5G 0 lvm /home

sr0 11:0 1 1024M 0 rom

The same lsblk command once VM is brought up on Proxmox (/etc/fstab has been edited to exclude /var & /home)
<lsblk>
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 32G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 27G 0 part
├─rhel-root 253:0 0 15G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
sr0 11:0 1 1024M 0 rom

If I remove the comment in /etc/fstab for either /var or /home and try a manual mount, I get the error below.
<mount: /var: special device /dev/mapper/rhel-var does not exist>

Inside the directory /dev/mapper I only see root and swap, so the error is correct. But what has happened to the two file systems during the migration?
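One likely culprit (an educated guess, not stated in the post): RHEL 9 keeps an LVM devices file at /etc/lvm/devices/system.devices that pins each PV to a hardware ID. When the disk moves from Hyper-V's storage controller to QEMU's virtio-scsi, those IDs change and LVM refuses to use the PV on sda2. A few commands to check this from a rescue shell, printed here for review rather than executed:

```shell
#!/bin/sh
# Diagnostic sketch: each command is printed rather than run, since
# they only make sense inside the broken guest's rescue shell.
DEVFILE=/etc/lvm/devices/system.devices   # RHEL 9 LVM devices file

echo cat "$DEVFILE"     # entries pin PVs to specific hardware IDs
echo lvmdevices         # list the devices LVM currently trusts
echo pvs -a             # does /dev/sda2 still show up as a PV?
# Activate the VG while ignoring the devices file entirely; if this
# brings rhel-var and rhel-home back, the devices file is the problem.
echo vgchange -ay --devicesfile '""' rhel
```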

Any help or a pointer in the right direction to resolve this issue would be appreciated.
Currently this is a POC to test a migration road map from Hyper-V to Proxmox, so I would like to iron out any issues before attempting any real VM workloads.
In the meantime I will investigate Vinchin Backup, which someone else has suggested.
 
Hi there.

Try using Redo Rescue to create a backup of this VM directly to Proxmox over SSH:
- Download Redo Rescue: https://sourceforge.net/projects/redobackup/files/latest/download
- Boot the VM in Hyper-V with this ISO and create a backup using an SSH session to your Proxmox VE
- On the Proxmox side, create a VM with a disk of the same size
- Boot with the Redo Rescue ISO and restore the previous backup from Proxmox over SSH

After that, you may still need to do the dracut step.
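Creating the matching target VM from the CLI might look like the sketch below. The VMID, name, memory, bridge, ISO filename, and the 32G size are all assumptions based on this thread; the commands are printed for review rather than executed.

```shell
#!/bin/sh
# Prints the qm commands for review; drop the "echo" to execute on the
# PVE host. Names and sizes are assumptions from the thread.
VMID=512102
SIZE_GB=32        # must match the source disk size for the restore

echo qm create "$VMID" --name rhel9-restore --memory 4096 --cores 2 \
    --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
    --scsi0 "VMstore:$SIZE_GB"
# ISO path is a placeholder; upload the Redo Rescue ISO first
echo qm set "$VMID" --cdrom local:iso/redorescue.iso
echo qm set "$VMID" --boot order=ide2   # boot the rescue ISO first
```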
 
Hi All
Just wanted to update my findings on what worked for me.
Below are the steps taken to migrate a RHEL9 virtual machine from Hyper-V 2019 to Proxmox VE 9.

1. Copied .vhdx file to /var/lib/vz/images
2. Convert the image to qcow2
<qemu-img convert -O qcow2 Ken-Test1_hdisk0.vhdx Ken-Test1_hdisk0.qcow2>
3. Create VM shell - machine type i440fx
4. Detach/Remove disk that is created in VM shell
5. Import the newly converted image to empty VM
<qm importdisk 512101 Ken-Test1_hdisk0.qcow2 VMstore> (512101 is the VM ID number)
completes successfully
6. Edit the unassigned disk and add it as scsi0
7. Change the boot order so that scsi0 is the 1st boot device
8. Boot the VM for the first time -> issue with the /var & /home logical volumes
Enter the root password to access maintenance mode -> shut down the VM
9. Assign the RHEL installation media to the CD-ROM
Adjust the boot order again so that the CD-ROM is the 1st device
Boot from the CD -> enter rescue mode -> select Troubleshooting -> Rescue a Red Hat Enterprise Linux system
Next select <Skip to shell>

10. Run commands:
<vgscan --mknodes -v>
<vgchange -a y rhel> (rhel is the volume group name)
<mount /dev/mapper/rhel-root /mnt/sysimage>
<mount /dev/sda1 /mnt/sysimage/boot>
<mount --bind /dev /mnt/sysimage/dev>
<mount -t proc none /mnt/sysimage/proc>
<mount --rbind /sys /mnt/sysimage/sys>
<chroot /mnt/sysimage>
<grub2-install /dev/sda>
<grub2-mkconfig -o /boot/grub2/grub.cfg>

# For the next bit, if plain <dracut -f> does not work, list the initramfs files to find your kernel version
<ls /boot/initramfs-*> and use the version from your list in the command below
<dracut -H -f --kver 5.14.0-611.30.1.el9_7.x86_64 /boot/initramfs-5.14.0-611.30.1.el9_7.x86_64.img>

<lvmdevices --deldev /dev/sda2> answer Y to the question
<lvmdevices>
<lvmdevices --adddev /dev/sda2>
<exit>

11. Shut down: <shutdown now>
12. Change the boot order back so that scsi0 is the 1st device
13. Start the VM

For me it now boots without error and all file systems are available.
Now all that's left is to adjust the network interface details to get ssh working.
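On the network side, the usual snag is that the NIC name changes between hypervisors (eth0 under Hyper-V vs something like ens18 on virtio), so the old NetworkManager profile no longer matches any device. A hedged sketch with nmcli; the interface and connection names are placeholders, and the commands are printed for review rather than run:

```shell
#!/bin/sh
# Printed for review; run inside the migrated guest without the "echo".
NEWIF=ens18        # typical virtio NIC name on Proxmox (assumption)

echo nmcli device status        # see what the NIC is now called
echo nmcli connection show      # old profile is likely bound to eth0
# Re-point the existing profile (name "System eth0" is a placeholder)
# at the new interface name, then bring it up:
echo nmcli connection modify "System eth0" connection.interface-name "$NEWIF"
echo nmcli connection up "System eth0"
```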

Hope this helps someone in the future and saves you some time. It took me a week to work out.

Regards
 
Prior to migrating, you may need to regenerate the initramfs to include all the drivers. I had to do this when migrating to Hyper-V.

So, run the following as root:

Code:
dracut -fv -N --regenerate-all
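To confirm the regenerated initramfs actually contains the virtio modules before exporting, something like the following should work (lsinitrd ships with dracut; the command is printed for review since it needs the guest's /boot):

```shell
#!/bin/sh
# Verification sketch: printed for review, run inside the guest as root.
CHECK='lsinitrd /boot/initramfs-$(uname -r).img | grep -i virtio'
echo "$CHECK"
# Expect virtio_blk, virtio_scsi, and virtio_pci to appear in the list.
```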