[SOLVED] Migrating from Hyper-V to Proxmox disk and skill issues

May 21, 2025
Hello

I've been using Hyper-V for years, but Microsoft no longer offers a free Hyper-V Server since Windows Server 2022, I couldn't figure out how to configure VLANs on the single active host NIC, AND Windows Server 2019 didn't have drivers for the second NIC, so I finally decided to migrate to Proxmox.

I managed to install it and configure the host NIC to be in the correct VLAN, but now I can't figure out how to get my converted Hyper-V images onto the large LVM disk where I want to restore the VMs.

I've watched several videos on YouTube, but they all just used the default large local storage, which doesn't help me.
I know the setup is not exactly excellent right now, but this is my home setup and it has worked well for years on Hyper-V. Until I got myself a fancy switch and decided to start playing with network segmentation :rolleyes:.

I currently have a 120GB drive (/dev/sdc) for Proxmox OS
A 256GB drive for OS images and stuff (/dev/sda, Directory), named Images
And a 1TB drive for VMs (/dev/sdb, LVM, tried thin LVM too), named VMs

And under Disks, sdb shows that it isn't mounted, which is why I can't access it? Or am I just not supposed to, since the disk format for the test VM was raw?

Following is the content of the VMs storage, except that it isn't really. The Taurus.qcow2 is what I scp'd here remotely; it maxed out the free space and the scp transfer ended with an error due to lack of free space.
I see that there's some sort of linking going on from the test VM's disks to /dev. For a first-time setup, this seems like a lot, and it looks like it's easy to mess it all up.

Bash:
root@citadel:/dev/VMs# ls -l
total 16359384
-rw-r--r-- 1 root root 16754114560 May 21 00:38 Taurus.qcow2
lrwxrwxrwx 1 root root           7 May 20 21:17 vm-100-disk-0 -> ../dm-9
lrwxrwxrwx 1 root root           8 May 21 00:03 vm-100-disk-1 -> ../dm-12

I have basic Linux usage knowledge, but when it comes to filesystems, I know practically nothing.
 
Okay, so assuming you didn't change anything after the installation, you should actually have an LVM thin pool on which you want to store the VM disks, if I understand you correctly?
To check this, please send the output of pvesm status
Furthermore, if I understand you correctly, you have now converted the VMs to qcow2 and want to load them onto the Proxmox VE server. Are all the disks of your VMs smaller than 1TB in total?
I ask because you said that scp failed.
Are all VMs currently still running on the Hyper-V server, and on which disk are the VM disks currently stored?
 
I handled this by converting the Hyper-V images to qcow2. You can use the free StarWind converter tool to do this. Then create the VMs in Proxmox with an empty disk in qcow2 format and a SATA connector (if it's a Windows VM). Use WinSCP to log on to Proxmox and navigate to /mnt/pve. There you should see the names of the storages you have created and, in the /images folder, the number of each VM, e.g. /mnt/pve/dev-sdb/images/103. Delete or rename the empty qcow2 file and replace it with your converted one under the same name. You should then be able to start the VM. If that works, you can use Proxmox commands if you need to change the format, move storage, etc.
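If you'd rather avoid a separate tool: qemu-img can read VHDX directly, so a conversion could also look roughly like this (file names are just placeholders):
Bash:
# alternative to the StarWind converter: convert VHDX to qcow2 with qemu-img
qemu-img convert -p -f vhdx -O qcow2 Taurus.vhdx Taurus.qcow2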
 
You can use qm disk import <vmid> yourexportedfile.qcow2 <targetstorage> --format raw|qcow2 - it'll import the disk, do the format conversion and add the disk to the VM configuration.
 
Okay, so assuming you didn't change anything after the installation, you should actually have an LVM thin pool on which you want to store the VM disks, if I understand you correctly?
To check this, please send the output of pvesm status
During the installation my other disks were actually still NTFS; I formatted them using fdisk, I think, and now I can see them in Proxmox and manage them.
Current pvesm status:
Code:
Name             Type     Status           Total            Used       Available        %
Images            dir     active       245023328        18302168       214201876    7.47%
VMs               lvm     active      1000202240               0      1000202240    0.00%
local             dir     active        40516856         3893588        34532876    9.61%
local-lvm     lvmthin     active        56545280               0        56545280    0.00%

pvdisplay:
Code:
  --- Physical volume ---
  PV Name               /dev/sdc3
  VG Name               pve
  PV Size               118.24 GiB / not usable <3.32 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              30269
  Free PE               3777
  Allocated PE          26492
  PV UUID               xgXfeT-0000-0000-0000-0000-0000-ShipEH
   
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               VMs
  PV Size               <953.87 GiB / not usable <2.34 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              244190
  Free PE               244190
  Allocated PE          0
  PV UUID               xEbiSr-0000-0000-0000-0000-0000-2Fo2Yi

Furthermore, if I understand you correctly, you have now converted the VMs to qcow2 and want to load them onto the Proxmox VE server. Are all the disks of your VMs smaller than 1TB in total?
I ask because you said that scp failed.
Not quite. I exported all my VMs to an external disk, because my initial idea was either to convert them from that disk straight to Proxmox storage, or to copy them to Proxmox first and then convert the images. Since it didn't turn out to be that simple, I converted one of the VMs to qcow2 format on my Windows machine and tried to copy it to Proxmox via scp. So it failed copying just a single ~40GB disk image.

Are all VMs currently still running on the Hyper-V server, and on which disk are the VM disks currently stored?
No, I backed up my VMs and overwrote my Hyper-V instance with Proxmox. I currently have just one server.
If it matters, they were all generation 2 VMs and thus had vhdx disks.
 
I handled this by converting the Hyper-V images to qcow2. You can use the free StarWind converter tool to do this. Then create the VMs in Proxmox with an empty disk in qcow2 format and a SATA connector (if it's a Windows VM). Use WinSCP to log on to Proxmox and navigate to /mnt/pve. There you should see the names of the storages you have created and, in the /images folder, the number of each VM, e.g. /mnt/pve/dev-sdb/images/103. Delete or rename the empty qcow2 file and replace it with your converted one under the same name. You should then be able to start the VM. If that works, you can use Proxmox commands if you need to change the format, move storage, etc.
The problem right now, I think? (I'm not sure), is that /dev/sdb is not mounted, so it isn't present in /mnt/pve/ either.
The problem wasn't about conversion or transferring; I just had no idea where I could transfer it to. I successfully created the LVM volume(?), but I don't know how to access it, and when I check under Proxmox node -> Disks, I see that it isn't mounted, and lsblk shows as much as well.
I deleted and recreated the volume in the GUI several times and tried both the LVM and LVM-thin options, but each time it remained unmounted, so I guessed it must work some other way?
 
Okay, so if you don't care whether it's LVM or LVM-thin, I would recommend LVM-thin, as it allows thin provisioning and snapshots.
If there is no important data on the VMs volume group, I would recommend wiping the disk again and creating an LVM-thin pool instead.

You can then add the newly created thin pool as storage to the node under Datacenter -> Storage -> Add -> LVM-Thin.
The storage should then be displayed on the left under your node and should be accessible.
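If you prefer the shell, a rough sketch of the same thing (using the disk and names from this thread; this wipes /dev/sdb, so only run it if nothing important is on that disk):
Bash:
# DESTRUCTIVE: remove the old LVM storage definition and volume group, then wipe the disk
pvesm remove VMs
vgremove VMs
wipefs -a /dev/sdb
# recreate the VG and carve a thin pool out of it, leaving a little room for pool metadata
pvcreate /dev/sdb
vgcreate VMs /dev/sdb
lvcreate -l 95%FREE --thinpool vmthin VMs
# register it as storage (same as Datacenter -> Storage -> Add -> LVM-Thin)
pvesm add lvmthin VMs-thin --vgname VMs --thinpool vmthin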
 
First rule of thumb - only FSs (File Systems) are (usually) mountable.

LVMs (Logical Volume Management) are block devices & not mountable. They can have a volume (LV) on them where that volume is then formatted with an FS which can then be directly mountable.

It appears you set up the disk sdb as an LVM - so naturally it is not going to be mounted.

Using SCP is going to write to a FS, & your .qcow2s are regular files (block-image-in-a-file) that sit on an FS.

Directory storage in Proxmox is an FS.

Hence if you SCP'd your .qcow2 to /mnt/pve you were in fact writing it to the root of your Proxmox host FS - filling it eventually/erroring out the transfer.

To be able to initially transfer that .qcow2 to Proxmox, choose a directory type storage that has enough space. It appears you have set up a storage named Images that is a directory type (& is 6 times larger than local/root directory). You probably can use that.

After you have done that - you will create a new VM & then import that .qcow2 to the newly created VM.

After that you can eventually move that file disk in the GUI to the LVM storage if you so desire.
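Put together, the whole path might look roughly like this (a sketch using the names from this thread, and assuming the Images storage is mounted at /mnt/pve/Images - the storage.cfg check below will confirm the actual path):
Bash:
# 1. copy the converted image onto the Images directory storage (it has plenty of free space)
scp Taurus.qcow2 root@citadel:/mnt/pve/Images/
# 2. import it; it is converted to raw and attached to VM 100 as an unused disk
qm disk import 100 /mnt/pve/Images/Taurus.qcow2 VMs
# 3. later the disk can be moved to another storage from the GUI or with 'qm disk move'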


To see how your Proxmox storages are set up & where/if they are mounted you can try:
Code:
cat /etc/pve/storage.cfg
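
For reference, the entries in that file look roughly like this (illustrative values only; yours will differ):
Code:
dir: Images
        path /mnt/pve/Images
        content images,iso

lvm: VMs
        vgname VMs
        content images,rootdir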



Good luck & happy Proxmoxing.
 
You can use qm disk import <vmid> yourexportedfile.qcow2 <targetstorage> --format raw|qcow2 - it'll import the disk, do the format conversion and add the disk to the VM configuration.
Thanks a lot! This is exactly what I needed.

I had trouble figuring out the targetstorage part, but a guy on YouTube helped me out :D.
I ended up running the command below and it worked. I tried /dev/sdb as the target at first, which of course didn't work; turns out the volume group name does the trick.
qm disk import 100 /mnt/Extreme/Taurus.qcow2 VMs --format raw
It took forever to convert the 60GB image from my external USB drive, but the end result was exactly what I was expecting.
If I don't have file level access to the logical volume, that's fine, but I need to import my Hyper-V images somehow.

Thanks to all the others as well.
Now I have to import the Hyper-V VMs to my computer and turn off Secure Boot because Windows fails to boot and I suspect it's because of that. :confused:

Or it might be because I didn't install the virtio drivers before shutting down the servers? Don't know.
 
turns out the volume group name does the trick.
No. It is the Storage name in Proxmox that "does the trick". It just so happens that both of yours have the same name: VMs.

qm disk import 100 /mnt/Extreme/Taurus.qcow2 VMs --format raw
qm disk import 100 /mnt/Extreme/Taurus.qcow2 VMs would have been enough - since VMs is an LVM storage, only a raw format can be placed on it, so Proxmox, I believe, would have defaulted to that.

Windows fails to boot
The qm disk import command you used will (normally) import the disk to the VM as an unused disk, as shown in the qm man page:
qm disk import <vmid> <source> <storage> [OPTIONS]

Import an external disk image as an unused disk in a VM. The image format has to be supported by qemu-img(1).


You will need to attach the disk in the VM's configuration to its bus. You can do this in the GUI.
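If you prefer the shell, something like this should do it (assuming VMID 100 and that the imported volume ended up as vm-100-disk-1 on the VMs storage):
Bash:
# attach the imported (currently unused) volume to the VM's SCSI bus
qm set 100 --scsi0 VMs:vm-100-disk-1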
 
No. It is the Storage name in Proxmox that "does the trick". It just so happens that both of yours have the same name: VMs.
ok, thanks!

qm disk import 100 /mnt/Extreme/Taurus.qcow2 VMs would have been enough - since VMs is an LVM storage, only a raw format can be placed on it, so Proxmox, I believe, would have defaulted to that.
That makes sense.

The qm disk import command you used will (normally) import the disk to the VM as an unused disk, as shown in the qm man page:

You will need to attach the disk in the VM's configuration to its bus. You can do this in the GUI.
I did. Windows tries to boot and fails. Boots into Recovery Environment instead.
 
Can you show the output (in the code-editor on this page) for (I'm assuming it is VMID 100, replace if necessary):
Code:
qm config 100
 
Can you show the output (in the code-editor on this page) for (I'm assuming it is VMID 100, replace if necessary):
Code:
qm config 100
Code:
balloon: 0
bios: ovmf
boot: order=ide0;net0
cores: 4
cpu: x86-64-v2-AES
efidisk0: VMs:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide0: Images:iso/virtio-win-0.1.271.iso,media=cdrom,size=709474K
machine: pc-q35-9.2+pve1
memory: 4096
meta: creation-qemu=9.2.0,ctime=1747830574
name: Taurus
net0: virtio=BC:24:11:D4:B0:47,bridge=vmbr0,tag=2
numa: 0
ostype: win11
scsi0: VMs:vm-100-disk-1,backup=0,iothread=1,size=60G
scsihw: virtio-scsi-single
smbios1: uuid=41b8d58d-0000-0000-0000-fc64b4c07091
sockets: 1
vmgenid: f076fb42-0000-0000-0000-62aac8c09358
 
boot: order=ide0;net0
Your imported drive appears to be this one: scsi0: VMs:vm-100-disk-1,backup=0,iothread=1,size=60G.
So you need to place it in the boot options & in first place. Right now you are booting from ide0, which is the CD-ROM ISO image.

You can do this in the GUI under VM, Options, Boot Order & press the Edit button.
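Or roughly from the CLI (VMID 100 assumed):
Bash:
# put the imported disk first in the boot order
qm set 100 --boot 'order=scsi0;ide0;net0'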
 
I changed the boot order and tried different SCSI controllers, but the end result is a BSOD about an inaccessible boot device.

And when I ran list disk in diskpart in the WinRE, it couldn't find anything.
 
Try removing the EFI disk. Then recreate an EFI disk but choose not to pre-enroll the keys.
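A rough CLI equivalent, assuming VMID 100 and keeping the EFI disk on the VMs storage:
Bash:
# detach the current EFI disk (the old volume may then show up as an unused disk)
qm set 100 --delete efidisk0
# create a fresh EFI disk without pre-enrolled Secure Boot keys
qm set 100 --efidisk0 VMs:1,efitype=4m,pre-enrolled-keys=0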
 
Try removing the EFI disk. Then recreate an EFI disk but choose not to pre-enroll the keys.
That didn't help.

But success!
I found the following article, where someone had the same issue.
https://broadband9.co.uk/how-to-migrate-hyper-v-vhdx-vm-to-proxmox-qcow2/

I had to inject the virtio drivers into the Windows installation from WinRE, and after that it was able to boot; I'm at the login screen now.

Looks like I have the basics now to import all my VMs to Proxmox and repair them.
 
You mean an actual Blue Screen Of Death? If that is the case & you say that Windows has gone into Recovery mode - then in fact you have booted that disk - because I see no other bootable disk in your config.
The fact that WinRE does not see the disk is therefore almost certainly a driver issue. (WinRE itself is probably booting from a recovery partition).
You should probably start by loading the VirtIO drivers from Windows Recovery.
You can search online how to do this. Here is an example blog.
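Roughly, the steps inside WinRE tend to look like this (only a sketch - drive letters and the folder for your Windows version on the virtio-win ISO will vary):
Code:
rem load the vioscsi driver so WinRE can see the disk
drvload E:\vioscsi\w11\amd64\vioscsi.inf
rem then inject it permanently into the Windows installation
dism /image:C:\ /add-driver /driver:E:\vioscsi\w11\amd64 /recurse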

Edit: I just posted in time to see you discovered it yourself
 
Edit: I just posted in time to see you discovered it yourself
Beat you to it:D
 
Beat you to it
At least I never had the problem! Your problem - your solution!

Anyway, happy you got it working.

Maybe mark this thread as Solved. At the top of the thread, choose the Edit thread button, then from the (no prefix) dropdown choose Solved.