HyperV to PVE migration - Windows VM boot issues - local-lvm disk vs shared disk

ikalafat

New Member
Oct 8, 2025
Hi everyone,

I am in the process of migrating several Hyper-V VMs to PVE, and I have run into odd behavior: Windows Server (2019 in my case) VMs fail to boot depending on which storage the VM disk is imported to.

On the PVE instance, I have several storages:

- local-lvm
- shared storage (SAN-backed, thick LVM disk)
- SMB share (for pulling the VHDXs)

When I migrate the VM/VHDx with the following command
Code:
qm importdisk <vmid> /mnt/pve/hv/<path-to-vhdx>.vhdx local-lvm
with output similar to:
  Logical volume "vm-145-disk-1" created.
Logical volume pve/vm-145-disk-1 changed.
and progress output,

the VM boots normally (after attaching the disk to a SATA controller, etc.).

When I import the VM/VHDX to the shared, SAN-backed storage
Code:
qm importdisk <vmid> /mnt/pve/hv/<path-to-vhdx>.vhdx san-vol01
with output similar to:
Wiping PMBR signature on /dev/vg_san_vol03/vm-145-disk-2.
Logical volume "vm-145-disk-2" created.

and progress output,

I get boot errors ("Access Denied" from Windows Boot Manager). If I disable Secure Boot, I instead get the
0xc0000428 error ("Windows cannot verify the digital signature for this file").

I have also observed the following:
- after successfully booting the VM from the local-lvm disk, I moved the disk to the SAN storage while the VM was shut down - the boot issues appeared
- after reverting to the source disk (local-lvm), the VM boots normally
- after moving the disk to the SAN again, but this time with the VM running, there are no boot issues, even after a shutdown and cold boot.

I suspected that the wiping of the PMBR signature could be the issue; however, when moving the disk (online) from local-lvm to the SAN, I get the following output
Code:
create full clone of drive scsi0 (local-lvm:vm-145-disk-0)
Wiping PMBR signature on /dev/vg_san_vol03/vm-145-disk-1.
Logical volume "vm-145-disk-1" created.
with progress output, etc. In this case the VM works as intended, so I am lost.

What can I do to prevent these boot issues without the extra step of importing to local-lvm first and then moving the disk to the shared storage?
FWIW: I am pretty sure that I managed to migrate 2-3 other Windows VMs directly to the SAN volume without issues like this.
Running PVE 9.0.10 with all updates.

Thank you.
 
Hi @ikalafat, welcome to the forum.


My suspicion is that in the failure cases, the underlying disk was not properly erased, allowing the VM’s OS to read residual or invalid data. After a few conversions, the VM disk was likely thick-provisioned, the blocks overwritten, and the issue effectively “masked.”
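The residual-data idea can be sketched at file level (a toy demo, not PVE internals; `dirty.img` and `new.img` are made-up names):

```shell
# Simulate a previously used thick volume: 1 MiB of stale 0xff bytes.
dd if=/dev/zero bs=1M count=1 2>/dev/null | tr '\0' '\377' > dirty.img

# "Import" a smaller 512 KiB image over it. conv=notrunc mimics writing
# to a block device, which cannot be truncated to the new image size.
dd if=/dev/zero of=new.img bs=1K count=512 2>/dev/null
dd if=new.img of=dirty.img conv=notrunc 2>/dev/null

# The tail of the volume still holds the stale bytes the guest can read.
tail -c 4 dirty.img | od -An -tx1   # -> ff ff ff ff
```

The written region is clean, but everything past the imported image still carries whatever the volume held before.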

I know there is ongoing work to improve behavior in this area, though I’m not aware of its current status.

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Update.

I ran the following two commands:

Code:
root@pve:~# qm importdisk 145 /mnt/pve/<path-to-vhdx>.vhdx local-lvm
importing disk '/mnt/pve/<path-to-vhdx>.vhdx' to VM 145 ...
  Logical volume "vm-145-disk-2" created.
  Logical volume pve/vm-145-disk-2 changed.

root@pve:~# qm importdisk 145 /mnt/pve/<path-to-vhdx>.vhdx san-vol03
importing disk '/mnt/pve/<path-to-vhdx>.vhdx' to VM 145 ...
  Logical volume "vm-145-disk-3" created.

After that I ran:
Code:
root@pve:~# lsblk -bno NAME,SIZE /dev/pve/vm-145-disk-2 /dev/vg_san_vol03/vm-145-disk-3
pve-vm--145--disk--2          64424509440
vg_san_vol03-vm--145--disk--3 64424509440
confirming that both disks are the same size.
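As a quick sanity check on the lsblk byte counts (plain shell arithmetic, nothing PVE-specific):

```shell
# 64424509440 / 1024^3 = 60, i.e. both LVs are exactly 60 GiB.
echo $((64424509440 / 1024 / 1024 / 1024))   # -> 60
```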


After that I ran:
Code:
root@pve:~# dd if=/dev/pve/vm-145-disk-2 bs=1M count=1 | sha256sum
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0080444 s, 130 MB/s
7d68111462335d59997897d56dbe3add1bd767e83985938e164d492cbb5e7242  -

root@pve:~# dd if=/dev/vg_san_vol03/vm-145-disk-3 bs=1M count=1 | sha256sum
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00945368 s, 111 MB/s
34860f60d893813ac534d07225b0c34eeb332acd10118a7124975994104b6f3f  -
Notice the difference in hashes.

Then I compared the next two megabytes (count=2 skip=1 reads the second and third MiB):
Code:
root@pve:/tmp# dd if=/dev/vg_san_vol03/vm-145-disk-3 bs=1M count=2 skip=1 | sha256sum
2+0 records in
2+0 records out
2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0157643 s, 133 MB/s
d5f566f76c735c2fffdeaf15165ab2a22cf145dc11fbe3e802dbb58896614393  -

root@pve:/tmp# dd if=/dev/pve/vm-145-disk-2 bs=1M count=2 skip=1 | sha256sum
2+0 records in
2+0 records out
2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0156917 s, 134 MB/s
d5f566f76c735c2fffdeaf15165ab2a22cf145dc11fbe3e802dbb58896614393  -
The hashes are equal.
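The slice-hashing approach above generalizes to a quick way of locating where two block devices diverge. A minimal sketch on plain files (`a.img`/`b.img` are made-up names, and the images are built so that only the first byte differs):

```shell
# Two 3 MiB images that differ only in their first 1 MiB slice.
dd if=/dev/zero of=a.img bs=1M count=3 2>/dev/null
cp a.img b.img
printf 'X' | dd of=b.img conv=notrunc 2>/dev/null

# Hash one 1 MiB slice of a file: slice <file> <slice-index>
slice() { dd if="$1" bs=1M count=1 skip="$2" 2>/dev/null | sha256sum | cut -d' ' -f1; }

# Slice 0 differs, slices 1 and 2 match - the divergence is confined
# to the first megabyte, exactly as with the two VM disks above.
for i in 0 1 2; do
  [ "$(slice a.img $i)" = "$(slice b.img $i)" ] && echo "slice $i: match" || echo "slice $i: differ"
done
```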

After that, I created a hexdump of the first megabyte:

Code:
root@pve:/tmp# dd if=/dev/pve/vm-145-disk-2 bs=1M count=1 | hexdump -C > /tmp/disk1.hex
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595855 s, 176 MB/s
root@pve:/tmp# dd if=/dev/vg_san_vol03/vm-145-disk-3 bs=1M count=1 | hexdump -C > /tmp/disk2.hex
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00661976 s, 158 MB/s

Finally, I diffed the first megabyte:

Code:
root@pve:/tmp# diff -u /tmp/disk1.hex /tmp/disk2.hex
--- /tmp/disk1.hex      2025-10-09 16:05:28.577787313 +0200
+++ /tmp/disk2.hex      2025-10-09 16:05:49.135679549 +0200
@@ -25,7 +25,7 @@
 00000180  20 6c 6f 61 64 69 6e 67  20 6f 70 65 72 61 74 69  | loading operati|
 00000190  6e 67 20 73 79 73 74 65  6d 00 4d 69 73 73 69 6e  |ng system.Missin|
 000001a0  67 20 6f 70 65 72 61 74  69 6e 67 20 73 79 73 74  |g operating syst|
-000001b0  65 6d 00 00 00 63 7b 9a  00 00 00 00 00 00 00 00  |em...c{.........|
+000001b0  65 6d 00 00 00 63 7b 9a  00 00 00 00 39 e4 00 00  |em...c{.....9...|
 000001c0  02 00 ee fe ff 7a 01 00  00 00 ff ff ff ff 00 00  |.....z..........|
 000001d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 *

I am not sure how to interpret this difference: 00 00 for local-lvm vs 39 e4 for the SAN-backed storage.
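What I can pin down is the location (my reading of the MBR layout, so treat it as an assumption): hexdump row 0x1b0 plus column 0xc puts the changed bytes at 0x1bc-0x1bd, inside the first 512-byte sector (the protective MBR), after the 4-byte disk-signature field at 0x1b8 and just before the first partition entry at 0x1be:

```shell
# Hexdump row 0x1b0 + column 0xc = absolute byte offset of the change.
printf '%d\n' 0x1bc                 # -> 444 (0-based), well inside sector 0

# Disk-signature field starts at 0x1b8; first partition entry at 0x1be.
printf '%d %d\n' 0x1b8 0x1be        # -> 440 446
```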

I also ran the following:

Code:
root@pve:~# qemu-img compare /dev/pve/vm-145-disk-2 /dev/vg_san_vol03/vm-145-disk-3
Content mismatch at offset 0!
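`qemu-img compare` stops at the first mismatch. To list every differing byte with its exact offset, `cmp -l` works on block devices too; a sketch on plain files (`x.img`/`y.img` are made-up names):

```shell
# Two identical 2 KiB images, then flip one byte at 0-based offset 444.
dd if=/dev/zero of=x.img bs=512 count=4 2>/dev/null
cp x.img y.img
printf '\246' | dd of=y.img bs=1 seek=444 conv=notrunc 2>/dev/null

# cmp -l prints: <1-based offset> <octal byte in x> <octal byte in y>.
# It exits 1 when the files differ, hence the "|| true".
cmp -l x.img y.img || true          # -> 445   0 246
```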
 