Hyper-V Gen 2 Windows Guest Conversion

sienar

Well-Known Member
Jul 19, 2017
Hi all,

My home lab previously consisted of Windows servers (2012 R2 and 2016) running Hyper-V, with many guest VMs that were Gen 2 VMs (i.e., UEFI firmware with a GPT disk layout). After finally starting to tinker with Proxmox when 5.0 came out, I decided to convert the whole thing to a Proxmox cluster. The only roadblock was converting the existing VMs and getting them running on Proxmox. I searched on and off for quite a while for a complete guide to converting these VMs that (a) actually worked and (b) was fairly simple to do. I never found such a guide anywhere, but by piecing together steps from other guides that each solved individual parts of the conversion, I wrote a complete guide that will get your VMs converted fairly quickly and easily, with almost no third-party tools involved, and with almost all of the process happening on the Proxmox host instead of on the Hyper-V host.

Tested with:

  • Proxmox 5.1-41
  • Hyper-V host - Server 2016
  • Gen 2 VMs running:
    • Server 2012 R2
    • Win10
    • Server 2016
Tools needed:
  • Linux live CD with gdisk - knoppix is fine
  • Windows install media (Windows 10 or Server 2016 media will also work for 2012 R2 guests)
  1. Document critical settings that will be lost by migrating to a new hypervisor, such as the NIC configuration, the local administrator account, etc.
  2. Export the Hyper-V VM (to get clean copies of the VHD/VHDX disk files)
    1. On the Hyper-V host, use the export function to create clean copies of the VM's VHDs.
    2. Exporting allows you to leave the VM in place, with any snapshots or Hyper-V Replica intact, pending confirmation that the conversion succeeded
  3. Copy the VHD/VHDX file(s) to the proxmox host
    1. Do this however you want
    2. Check the integrity of the files:
      1. qemu-img check -r all disk1.vhdx
  4. Create a target VM on the proxmox host
    1. Create this VM so that the specs match as closely as possible to your source Hyper-V VM
      1. Cores/ram/etc
      2. Make note of the disk file name
        1. Example: vm-521-disk-1.qcow2
      3. Only create a single disk in this target VM, even if your source has more than one. To get the VM booting, we will focus only on the primary/OS disk of the source VM. We're going to be modifying partition tables, no need to risk the other disks. They can/will be copied over later
    2. Remove the empty disk that is created by the wizard
  5. Import and convert the vhd/vhdx
    1. Proxmox 5.0 includes a command that imports a foreign disk image and converts it for a VM in one step
    2. qm importdisk <vmid> <source> <storage> [OPTIONS]
      1. <vmid> is the (unique) ID of the VM.
      2. <source> is the path to the disk image to import
      3. <storage> is the target Proxmox storage pool ID
      4. [OPTIONS] includes --format <qcow2 | raw | vmdk>
        1. This is where you specify the format the imported disk will become
    3. The command for our example would be:
      1. qm importdisk 521 disk1.vhdx pvedata --format qcow2
      2. This will import the vhdx to the pvedata Proxmox datastore, convert it to qcow2 format in the process, and add it to the configuration of VM 521 as an unused disk
  6. Configure the disk on the VM
    1. In the Hardware tab of the VM, you will see the imported disk at the bottom as unused disk 0
    2. Double click on the disk, or click on it and hit the edit button, in the Add: Unused Disk window:
      1. Set the Bus/Device to IDE 0
      2. Cache can be left at the default. If you're on a ZFS filesystem, you likely need to change this to Write through
      3. Other options can be set as desired
    3. In the Options tab of the VM, ensure your Boot Order and other settings are set appropriately
      1. An example boot order would be ide0, CD-ROM, Network
      2. I also enable the Qemu agent here, as I install the agent in the guest later
  7. OPTIONAL - snapshot the VM. If you're not familiar with the steps being done below, it may be beneficial to snapshot the VM at this point. This will allow you to roll back any incorrect changes made in the steps below and try again without waiting to re-import the disk
  8. Configure the VM to boot with the linux live cd/dvd and start it up
    1. I use knoppix as it already has gdisk as part of the DVD image and it boots into a gui
  9. Convert the GPT partition layout to MBR
    1. This will use the gdisk command
    2. Inside the VM, open a terminal and enter the following commands:
      1. OPTIONAL - take a backup of the GPT partition table:
        1. gdisk -b sda-preconvert.gpt /dev/sda
      2. gdisk /dev/sda
        1. Your command prompt will enter gdisk and it will look like:
          1. Command (? for help):
      3. Press r and enter - this takes you to the recovery menu
      4. Press g and enter - this converts the partition table from GPT to MBR
      5. OPTIONAL - press p and enter to preview the new MBR partition table
      6. If your source disk had more than 4 partitions, this conversion gets tricky; more steps are needed that are out of scope here: http://www.rodsbooks.com/gdisk/mbr2gpt.html#gpt2mbr
      7. Press w and enter to write the changes
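For repeat conversions, the interactive gdisk session above can be scripted by piping the same keystrokes to gdisk's stdin. This is only a sketch: the device path and the exact confirmation prompts are assumptions, so run it interactively on a disposable disk first to confirm the prompt sequence matches.

```shell
# Same keystrokes as the interactive session above:
# r = recovery menu, g = convert GPT to MBR, w = write, y = confirm write.
# DESTRUCTIVE once piped into gdisk - verify on a disposable disk first.
KEYS='r
g
w
y
'
printf '%s' "$KEYS"
# To actually apply it (assumes /dev/sda, as in the steps above):
# printf '%s' "$KEYS" | gdisk /dev/sda
```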
  10. Fixing Windows boot from this converted disk. At this point you have a disk with your Windows OS, but it still won't boot in this condition. The next steps will walk you through repairing the Windows boot function
    1. Remove the linux live CD image, and load the Windows install disk image
    2. Reboot the VM
    3. Start the Windows Install wizard, but instead of choosing 'Install Now', choose 'Repair your computer'
    4. Click on the Troubleshooting option and open the command prompt
    5. Enter the following command:
      1. diskpart
    6. The next steps can vary, but the simple goal is to use the diskpart tool to mark the partition that holds your C drive as active, and to ensure it is labeled as C
      1. list disk
        1. You should only have the single disk - disk 0
      2. select disk 0
      3. list volume
        1. You likely have one or more other volumes/partitions, such as the normal Windows Recovery partition
        2. Usually the largest volume on the disk is the one that should be your C drive - in this example, that will be volume 3
        3. If it is already assigned the letter C, move on to marking it as active
        4. If another volume, such as a recovery partition, has been assigned C, we need to fix that
        5. Issue these commands to remove the drive letter from a volume
          1. select volume 2
          2. remove letter=c
        6. Do the same for other volumes that aren't supposed to have drive letters
        7. Then assign the C drive letter to the large volume
          1. select volume 3
          2. assign letter=c
      1. Mark your C drive as active
        1. select volume 3
        2. active
      2. Exit diskpart
        1. exit
    7. The next steps will rebuild the BCD store and other critical boot files and settings. Type the following commands:
      1. bootrec /rebuildbcd
        1. This command will scan the disk to locate your windows install and build the BCD store to boot that install
      2. bcdboot C:\windows
        1. Replace c:\windows with the path to your windows install on the disk if it's different
        2. This command ensures that needed boot files are in place on this partition
      3. bootrec /fixmbr
        1. This command writes a master boot record compatible with Windows
      4. bootrec /fixboot
        1. This command writes a new boot sector that is compatible with Windows
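The diskpart portion of the repair above can also be run non-interactively: diskpart accepts a script file via `diskpart /s`. Below is a sketch using the example volume numbers from the steps above; the filename `fixboot.txt` is hypothetical, and you should verify your own disk and volume numbers with `list disk`/`list volume` before running it.

```
rem fixboot.txt - volume numbers follow the example above; check yours first
select disk 0
select volume 2
remove letter=c
select volume 3
assign letter=c
active
exit
```

Run it from the recovery command prompt with `diskpart /s fixboot.txt`, then follow with the bootrec/bcdboot commands shown above.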
  11. Power down the VM, remove the Windows installer image, and test booting the VM normally. At this point it should boot and be able to login.
  12. Install the required drivers for any new hypervisor based hardware inside the VM, such as the NIC or balloon drivers
  13. OPTIONAL - Convert the IDE controller to Virtio SCSI for better performance
    1. Windows does not have a driver for this natively installed, so there are a few simple steps to install the driver and prepare the Windows install to boot from a virtio SCSI disk
    2. Remove any snapshots at this point, as editing the VM hardware can invalidate them; if they're not removed first, you'll have to manually edit the VM config file to remove them after the fact
    3. Add a new, empty disk to the VM using the desired controller
    4. In the VM, install the correct driver for the newly added controller and verify you can access the new empty disk.
    5. Shutdown the VM
    6. Detach/delete the new empty disk
    7. Detach, but don't delete, your OS disk
    8. Reattach the OS disk, but choose the new controller type and configuration you want for the disk
    9. Boot the VM
  14. Add any other disks that were converted from the source VM. To do this:
    1. Shutdown the VM
    2. Remove any snapshots at this point, as editing the VM hardware can invalidate them; if they're not removed first, you'll have to manually edit the VM config file to remove them after the fact
    3. Add new/empty disks to the VM to match the needed number of disks from the source VM
    4. Repeat the steps from section 6 to copy the converted disks over these new/empty disks
    5. Boot the VM
    6. No partition conversion is necessary for your non-OS disks as Windows can read them fine as is
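For reference, the host-side portion of the guide above (integrity check, import/convert, attach) can be sketched as a small shell script. The VM ID, storage ID, and filename match the example above; the `run` wrapper only prints each command, so nothing is executed until you change it.

```shell
#!/bin/sh
# Dry-run sketch of the host-side steps above. VMID, STORAGE, and the
# source filename follow the example in the guide; adjust to your setup.
VMID=521
STORAGE=pvedata
SRC=disk1.vhdx

run() { echo "+ $*"; }   # print only; change the body to "$@" to execute

run qemu-img check -r all "$SRC"                            # step 3: integrity check
run qm importdisk "$VMID" "$SRC" "$STORAGE" --format qcow2  # step 5: import + convert
# The attached volume name below is an assumption - the exact volume ID
# syntax varies by storage type; check `qm config <vmid>` for the unused disk.
run qm set "$VMID" --ide0 "$STORAGE:vm-$VMID-disk-1"        # step 6: attach as IDE 0
```

This is a sketch, not a turnkey script; review each printed command against your own VM config before executing anything.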


references/credits:
https://www.servethehome.com/converting-a-hyper-v-vhdx-for-use-with-kvm-or-proxmox-ve/
http://www.firewing1.com/blog/2012/...-layout-without-data-loss-and-gigabyte-hybrid
http://domyeung.blogspot.com/2015/08/converting-hyper-v-guest-to-linux-kvm.html
forum user: aderumier
 
Let me know if you have any suggestions that would improve or simplify the process and I'll add it to the guide.
 
4/5/6: Proxmox 5 has "qm importdisk <vmid> <source> <storage>"

(import any disk format to any supported Proxmox storage)


Have you tested enabling OVMF for UEFI support (instead of converting everything to MBR)?
 
I tried quite a bit to get OVMF to boot the GPT partition layout that Hyper-V Gen 2 Windows guests use by default. I never could get it to work, but I don't know how much of that was caused by the problems Proxmox has been having with Windows guests in general since 5.1. The process I outlined definitely works for me.

qm importdisk <vmid> <source> <storage> [OPTIONS]

Import an external disk image as an unused disk in a VM. The image format has to be supported by qemu-img(1).

The qm importdisk command sounds handy, but doesn't really do what I was trying to do. I haven't tried it, but it sounds like it simply attaches a disk image as-is, without converting it, and leaves it at its current path. Part of what I was trying to accomplish with my process was to have the disk images converted to a more native disk image format, QCOW2. The command could save you the step of adding a blank disk to the new VM and copying the converted image over it, though. I still have more VMs to import after I get my next host converted over to Proxmox, so I'll give it a shot then.
 

It allows format conversion too (--format qcow2).



qm importdisk <vmid> <source> <storage> [OPTIONS]

Import an external disk image as an unused disk in a VM. The image format has to be supported by qemu-img(1).

<vmid>: <integer> (1 - N)
The (unique) ID of the VM.

<source>: <string>
Path to the disk image to import

<storage>: <string>
Target storage ID

--format <qcow2 | raw | vmdk>
Target format
 
Oh shoot, I did not get that at all from the man page. I think on my next import I will definitely use this and update the guide if it works. One of the reasons I liked the convert-and-copy steps was that they left me an untouched, freshly converted copy; if I trashed the copy attached to the VM while trying to get it to boot, I could copy it again and start over without reconverting. Using the importdisk option, I could just drop those steps and take a snapshot prior to tinkering with the partition table to accomplish the same goal.

Thanks aderumier!
 
Just updated the guide to include the qm importdisk command suggested by aderumier. I'm preserving my previous version here, as I believe it may work on older versions of Proxmox, although I have not tested that.

  1. Document critical settings such as NIC configuration (will be lost by migrating to a new hypervisor), local administrator account, etc
  2. Export the Hyper-V VM (to get clean copies of the VHD/VHDX disk files)
    1. On the Hyper-V host, use the export function to create clean copies of the VM's VHDs.
    2. Exporting allows you to leave the VM in place, with any snapshots or Hyper-V Replica intact, pending confirmation that the conversion succeeded
  3. Copy the VHD/VHDX file(s) to the proxmox host
    1. Do this however you want
    2. Check the integrity of the files:
      1. qemu-img check -r all disk1.vhdx
  4. Convert the VHD/VHDX file(s) to qcow2 format
    1. I prefer to create the converted disk files side by side with the source VHD files and copy them to the target location, preserving the original converted files. This can save redoing this step should you have an issue with later steps
    2. Use the qemu-img command line tool to create the qcow2 version of the disk files
      1. qemu-img convert -O qcow2 disk1.vhdx disk1.qcow2
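If the source VM has several disks, the conversion above can be looped over every VHDX in the export directory. A minimal sketch; the filenames are assumptions, and the loop simply applies the same qemu-img command shown above to each file:

```shell
# Convert each exported VHDX to a qcow2 file alongside it, as described above.
for src in ./*.vhdx; do
  [ -e "$src" ] || continue            # skip cleanly if no VHDX files are present
  dst="${src%.vhdx}.qcow2"             # disk1.vhdx -> disk1.qcow2
  echo "converting $src -> $dst"
  qemu-img convert -O qcow2 "$src" "$dst"
done
```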
  5. Create a target VM on the proxmox host
    1. Be sure the disks used for this VM are qcow2 file based
    2. Be sure the disk controller type is IDE (more on this later)
    3. Create this VM so that the specs match as closely as possible to your source Hyper-V VM
      1. Cores/ram/etc
      2. Make note of the disk file name
        1. Example: vm-521-disk-1.qcow2
      3. Only create a single disk in this target VM, even if your source has more than one. To get the VM booting, we will focus only on the primary/OS disk of the source VM. We're going to be modifying partition tables, no need to risk the other disks. They can/will be copied over later
  6. Copy the converted qcow2 files over the freshly created files of the new VM, replacing them
    1. Find the exact path of the disk files, example commands if you don't already know the path:
      1. find / -name vm-521-disk-1*
      2. locate vm-521-disk-1*
      1. For this example we'll use - /var/lib/vz/images/521/vm-521-disk-1.qcow2
    2. Copy the converted qcow2 file
      1. cp disk1.qcow2 /var/lib/vz/images/521/vm-521-disk-1.qcow2
    3. If you have more than one disk file, again, don't copy the other files yet
  7. Configure the VM to boot with the linux live cd/dvd and start it up
    1. I use knoppix as it already has gdisk as part of the DVD image and it boots into a gui
  8. Convert the GPT partition layout to MBR
    1. This will use the gdisk command
    2. Open a terminal and enter the following commands:
      1. OPTIONAL - take a backup of the GPT partition table:
        1. gdisk -b sda-preconvert.gpt /dev/sda
      2. gdisk /dev/sda
        1. Your command prompt will enter gdisk and it will look like:
          1. Command (? for help):
      3. Press r and enter - this takes you to the recovery menu
      4. Press g and enter - this converts the partition table from GPT to MBR
      5. OPTIONAL - press p and enter to preview the new MBR partition table
      6. If your source disk had more than 4 partitions, this conversion gets tricky; more steps are needed that are out of scope here: http://www.rodsbooks.com/gdisk/mbr2gpt.html#gpt2mbr
      7. Press w and enter to write the changes
  9. Fixing Windows boot from this converted disk. At this point you have a disk with your Windows OS, but it still won't boot in this condition. The next steps will walk you through repairing the Windows boot function
    1. Remove the linux live CD image, and load the Windows install disk image
    2. Reboot the VM
    3. Start the Windows Install wizard, but instead of choosing 'Install Now', choose 'Repair your computer'
    4. Click on the Troubleshooting option and open the command prompt
    5. Enter the following command:
      1. diskpart
    6. The next steps can vary, but the simple goal is to use the diskpart tool to mark the partition that holds your C drive as active, and to ensure it is labeled as C
      1. list disk
        1. You should only have the single disk - disk 0
      2. select disk 0
      3. list volume
        1. You likely have one or more other volumes/partitions, such as the normal Windows Recovery partition
        2. Usually the largest volume on the disk is the one that should be your C drive - in this example, that will be volume 3
        3. If it is already assigned the letter C, move on to marking it as active
        4. If another volume, such as a recovery partition, has been assigned C, we need to fix that
        5. Issue these commands to remove the drive letter from a volume
          1. select volume 2
          2. remove letter=c
        6. Do the same for other volumes that aren't supposed to have drive letters
        7. Then assign the C drive letter to the large volume
          1. select volume 3
          2. assign letter=c
      1. Mark your C drive as active
        1. select volume 3
        2. active
      2. Exit diskpart
        1. exit
    7. The next steps will rebuild the BCD store and other critical boot files and settings. Type the following commands:
      1. bootrec /rebuildbcd
        1. This command will scan the disk to locate your windows install and build the BCD store to boot that install
      2. bcdboot C:\windows
        1. Replace c:\windows with the path to your windows install on the disk if it's different
        2. This command ensures that needed boot files are in place on this partition
      3. bootrec /fixmbr
        1. This command writes a master boot record compatible with Windows
      4. bootrec /fixboot
        1. This command writes a new boot sector that is compatible with Windows
  10. Power down the VM, remove the Windows installer image, and test booting the VM normally. At this point it should boot and be able to login.
  11. Install the required drivers for any new hypervisor based hardware inside the VM, such as the NIC or balloon drivers
  12. OPTIONAL - Convert the IDE controller to Virtio SCSI for better performance
    1. Windows does not have a driver for this natively installed, so there are a few simple steps to install the driver and prepare the Windows install to boot from a virtio SCSI disk
    2. Add a new, empty disk to the VM using the desired controller
    3. In the VM, install the correct driver for the newly added controller and verify you can access the new empty disk.
    4. Shutdown the VM
    5. Detach/delete the new empty disk
    6. Detach, but don't delete, your OS disk
    7. Reattach the OS disk, but choose the new controller type and configuration you want for the disk
    8. Boot the VM
  13. Add any other disks that were converted from the source VM. To do this:
    1. Shutdown the VM
    2. Add new/empty disks to the VM to match the needed number of disks from the source VM
    3. Repeat the steps from section 6 to copy the converted disks over these new/empty disks
    4. Boot the VM
    5. No partition conversion is necessary for your non-OS disks as Windows can read them fine as is
 
Created an account to say thank you. First time tinkering with Proxmox and attempting the same migration. This was extremely helpful.

On a side note, I got a Win10 VM to work fine, but Win2016 still wouldn't boot with the VirtIO SCSI driver. Found a fix on StackOverflow. I can't post a link since I'm a new user, so here it is:

  1. Open an elevated command prompt and set the VM to boot into safe mode by typing

    bcdedit /set {current} safeboot minimal

  2. Shut down the VM and change the boot device type to VirtIO.

  3. boot the VM. It will enter in safe mode.

    Note: In Safe mode all boot-start drivers will be enabled and loaded, including the virtio driver. Since there is now a miniport installed to use it, the kernel will now make it part of the drivers that are to be loaded on boot and not disable it again.

  4. In the booted VM, reset the bcdedit setting to allow the machine to boot into normal mode by typing (in an elevated command prompt again):

    bcdedit /deletevalue {current} safeboot

  5. Done.
 
Thanks pixelbaker! That was me about a year ago. I'd never touched Proxmox before v5 came out.

I never ran into that problem with 2016, but thanks for adding to this little migration KB thread here. I love it when I find posts like this that keep me from having to reinvent the wheel!
 
Hello,

Big thanks for that How-To. I will test this, currently I am struggling with my Migration.

Just a question: the migration is a huge change to the system. What about the Windows internal unique IDs, or the Windows activation? Do they change?

Best Regards
 
Hey LA-Diego, I just saw your questions, only about 6 months late, but I can answer them. The Windows internal unique IDs don't change in this migration process; they stay the same. As for activation, it will very much depend on how your install was previously activated. If you were using Hyper-V host activation, for instance, you will likely need to change the product key and reactivate the VM. If the VM was using its own unique product key, you should be able to just let it reactivate as normal. And if you were using an internal activation/license server, it should be transparent, as the VM will just reactivate automatically with that license server.
 
@sienar

Thanks for a wonderful post. Ultimately it didn't work for me, but I was able to use it as a starting point. I'm running 6.1, and used the following process to migrate Windows Server 2019 Gen 2 VMs. Hopefully someone will find this helpful.

  1. Export the Hyper-V guest via Hyper-V Manager
  2. SCP/SFTP… vhdx to PVE server
  3. Check integrity of vhdx
    1. qemu-img check -r all disk1.vhdx
  4. Create target VM on proxmox
    1. Mirror original config as best as possible for cpu, ram, etc
    2. Only create it with a 1 GB IDE disk
    3. Detach, and then remove, the temporary 1 GB IDE disk
  5. Import and convert vhd/vhdx
    1. qm importdisk <vmid> <source disk> <storage> --format qcow2
  6. Configure disk on VM
    1. Configure as IDE 0
    2. Write Back cache enabled
    3. Under options, confirm boot order
      1. IDE0
      2. CD-ROM
      3. Network
  7. Boot VM
  8. Install Drivers
    1. Mount VirtIO Driver ISO
    2. Run d:\virtio-win-gt-x64.msi
      1. Customize as needed (I used the defaults)
  9. Install Guest Agent
    1. Run D:\guest-agent\qemu-ga-x86-64.msi
    2. Dismount the VirtIO Driver ISO
  10. Shutdown the VM
  11. Switch to SCSI for OS Disk
    1. Add Temporary 1GB SCSI disk
      1. Bus/Device - SCSI 0
    2. Boot the VM
    3. Verify SCSI Driver
    4. Log into the VM
    5. Open Device Manager
      1. Expand "Storage controllers"
      2. Verify that "Red Hat VirtIO SCSI pass-through controller" is present
    6. Shutdown VM
    7. Remove 1gb temporary SCSI disk
    8. Detach IDE 0 disk
    9. Attach unused disk
      1. Bus/Device - SCSI 0
      2. Cache - Write back
    10. Update VM Options | Boot Order
      1. Set first boot device to - scsi0
    11. Power On
  12. Cleanup Hyper-V Devices
    1. Log into the VM
    2. Open Device Manager
    3. Show Hidden devices
    4. View | Show hidden devices
    5. Remove the following devices (right click and select Uninstall device; only for grayed-out devices):
      1. Disk drives | Microsoft Virtual Disk
      2. Disk drives | QEMU HARDDISK
      3. Display adapters | Microsoft Hyper-V Video
      4. DVD/CD-ROM drives | Microsoft Virtual DVD-ROM
      5. Human Interface Devices | Microsoft Hyper-V Input
      6. Keyboards | Microsoft Hyper-V Virtual Keyboard
      7. Monitors | Generic Non-PnP Monitor
      8. Network adapters | Microsoft Hyper-V Network Adapter
      9. Processors | <Hyper-V Hosts CPU… i.e. Intel Xeon xxx>
      10. Storage controllers | Microsoft Hyper-V SCSI Controller
      11. Storage volumes | <All grayed out volumes>
      12. System devices | ACPI Module Device
      13. System devices | Advanced programmable interrupt controller
      14. System devices | Microsoft Hyper-V <crap ton, only grayed out devices>
      15. System devices | System CMOS/real time clock
    6. Reboot
  13. Verify VM is operating as expected
  14. Re-IP as needed
  15. Finally migrate to final storage location (ceph, nfs, etc)
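Step 15 can be done from the CLI with qm move_disk (the GUI's Move disk button does the same thing). A dry-run sketch; the target storage ID is an assumption, and on newer Proxmox releases the subcommand has been renamed, so check your qm man page:

```shell
#!/bin/sh
# Dry-run sketch of step 15: move the finished OS disk to its final storage.
VMID=521            # assumed VM ID, as in the earlier examples
TARGET=ceph-pool    # assumed storage ID - use your own (ceph, nfs, etc.)

run() { echo "+ $*"; }   # print only; change the body to "$@" to execute

run qm move_disk "$VMID" scsi0 "$TARGET" --delete 1   # --delete removes the old copy
```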
 
Sorry, it doesn't work: the SCSI Hyper-V Gen 2 VM won't boot, as noted at step 7.
 
