VMware to Proxmox Migration - SCSI Controller

fettmasta

Feb 9, 2026
Hello,

We are currently testing a migration of our VMware dev environment to Proxmox. We are using Veeam to migrate the VMs and everything seems to work well; the one thing that has been hit or miss for me is changing the SCSI controller from VMware PVSCSI to VirtIO SCSI Single. I get a blue screen on our Windows machines saying the boot device is not accessible.

Does anyone have a clean way to do this? The guide here is not clear and does not seem to work for us. What is the best way to get this working once a VM has been migrated to Proxmox? We will need to do this for Windows Server 2016+ and RHEL 8.10+.
 
1. You need to install the VirtIO drivers before migrating a VM. Unfortunately, this alone will still not allow the VM to boot if the boot disk is on SCSI.

2. The easiest way is to attach at least one existing drive as SCSI and the others as SATA, but never the boot disk!

3. When Windows boots, it will detect a new disk. In our case, we had to bring it online and make it writable; we used a PowerShell script to make the drives online and writable.

4. Once the disk has been detected, power off the VM and switch all disks to SCSI.

Now the Windows VM will boot properly. There are other ways of doing this, too, but make sure to use automation. We have automated everything: an external Python script performs the three reboots, taking into consideration what I have written above, and PowerShell scripts automate IP address and DNS reconfiguration, bringing disks online, and so on.
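Step 3 can be scripted with the standard Windows Storage cmdlets. A minimal sketch run inside the guest (the poster's actual script is not shown; this is an illustration):

```powershell
# Bring every offline disk online and clear the read-only flag,
# so Windows initializes the newly attached VirtIO SCSI disk.
Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline $false
Get-Disk | Where-Object IsReadOnly | Set-Disk -IsReadOnly $false
```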
 
Just to clarify step (2), we should select one drive to be SCSI (not the boot drive...?) and everything else SATA? So boot drive would be SATA?
 
We did install the VirtIO drivers and the QEMU guest agent on these Windows hosts before they were migrated to Proxmox. I can boot PVSCSI with SCSI disks, just not VirtIO SCSI Single. That is my only issue.
 
You cannot boot from VirtIO SCSI initially. If you have multiple drives, attach at least one of them (not the bootable one!) as SCSI; the boot disk needs to be SATA, and the others really do not matter.

If you have only one disk, you need to add a "fake" disk. When Windows detects it, it will load the VirtIO SCSI driver, and at the next boot the bootable disk can be SCSI.

As I said, there are other ways of doing this, but I think this is the easiest. You can automate all of it, especially if you are migrating 50+ VMs.

If you are using the Proxmox import tool, just select the "Prepare for Virtio" option. It will select the proper controller (VirtIO SCSI Single) and make all disks SATA. But you still need to correct this afterwards so that there is at least one SCSI disk.
 
We do the following:
  • Remove VMware Tools and power down
  • Import the VM, leaving the disks as SATA
  • Create a 1 GB SCSI disk
  • Power on the VM, install the guest tools and agent, then power down
  • Remove/delete the 1 GB SCSI disk
  • Detach the OS disk (SATA) and attach it as SCSI
  • Change the boot device in Options
  • Start the VM
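The same disk juggling can be done from the Proxmox CLI instead of the GUI. A sketch, assuming VM ID 101 and a storage named local-lvm (both placeholders):

```shell
# Use the VirtIO SCSI single controller and add a temporary 1 GB SCSI disk
qm set 101 --scsihw virtio-scsi-single
qm set 101 --scsi1 local-lvm:1

# ...boot Windows once so it loads the vioscsi driver, then shut down...

# Drop the temporary disk, then reattach the OS disk as SCSI
qm set 101 --delete scsi1
qm set 101 --delete sata0                    # detaches; disk becomes unused0
qm set 101 --scsi0 local-lvm:vm-101-disk-0   # volume name is a placeholder
qm set 101 --boot order=scsi0
```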
 
And this process is done while having the controller set as VirtIO SCSI Single for the VM in question?
 
Great! That worked. My issue was that I started this testing on SQL servers with multiple disks.

Changed the SCSI controller to VirtIO SCSI Single
Changed the boot disk to SATA
Detached and re-attached a random disk as SCSI under the new controller
Booted up and brought that SCSI disk online. Shut down and changed the boot disk to SCSI.
All is well now.

Thank you all!
 
After spending considerable time testing different approaches, I finally have a procedure, validated in my case, for switching migrated Windows Server VMs from VMware PVSCSI to VirtIO SCSI Single in Proxmox VE 9.x with OVMF/UEFI.

Environment:
  • Proxmox VE 9.1, cluster of 3x Dell PowerEdge R750
  • Windows Server 2019, 2022, 2025
  • VMs migrated via Veeam B&R from VMware VCF and tested using Proxmox Migrate Tool from ESXi
  • Machine type: pc-i440fx (important — do NOT change to Q35)

The key insight nobody clearly documents: running bcdedit /set "{current}" safeboot minimal before changing the controller is mandatory. Without it you always get INACCESSIBLE_BOOT_DEVICE (0x7B). Installing the VirtIO drivers inside Windows is not enough; Windows needs to boot successfully with a VirtIO SCSI device present at least once before it marks the driver as boot-critical.

Validated procedure:
  1. Boot VM normally (still on PVSCSI) — verify it works
  2. Shut down VM cleanly
  3. Mount virtio-win ISO on CD-ROM, start VM
  4. Install virtio-win-gt-x64.msi — full install
  5. Configure the IP (if RDP access is needed)
  6. Remove VMware Tools (if not already done)
  7. Run: bcdedit /set "{current}" safeboot minimal, or open msconfig and enable Safe boot
  8. Shut down cleanly from Windows
  9. In Proxmox: SCSI Controller → VirtIO SCSI Single
  10. Detach boot disk → reattach as sata0 (not scsi0)
  11. Boot order → sata0 first
  12. Boot the VM → it starts in Safe Mode automatically
  13. In Safe Mode: msconfig → Boot → uncheck Safe boot → restart, or from PowerShell: bcdedit /deletevalue "{current}" safeboot
  14. VM boots normally with VirtIO SCSI Single
Important notes:
  • Keep machine type as pc-i440fx — changing to Q35 breaks boot due to EFI NVRAM storing old PCI paths from VMware hardware
  • Final disk stays on sata0 — this is correct and permanent for Proxmox
  • Tested on Windows Server 2019, 2022 and 2025

Hope this saves someone the hours we spent on this. Happy to answer questions.
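For reference, steps 9-11 map to these qm commands on the host. A sketch; the VM ID 101 and the volume name are placeholders:

```shell
# Step 9: switch the controller to VirtIO SCSI single
qm set 101 --scsihw virtio-scsi-single
# Step 10: detach the boot disk (it becomes unused0), reattach as sata0
qm set 101 --delete scsi0
qm set 101 --sata0 local-lvm:vm-101-disk-0
# Step 11: put sata0 first in the boot order
qm set 101 --boot order=sata0
```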
 
Hey ZargyKz, I hope this adds some clarity to the process I have been using in a larger enterprise environment. It could help you down the road with performance and easier migrations.

Your process works, but it’s more complicated than it needs to be and leaves performance on the table.


Main differences in approach:

Driver handling

  • If you install virtio-win before migration, you don’t need Safe Mode at all.
  • The bcdedit safeboot step is just compensating for Windows not having initialized the driver as boot-critical.

Simpler method (what we’ve validated):
  • Install VirtIO drivers on the VMware VM before migration
  • Migrate with Veeam → Proxmox
  • First boot:
    • Keep OS disk on SATA temporarily
    • Add a small (1 GB) disk on VirtIO SCSI
  • Boot Windows:
    • Initialize/online that disk → loads vioscsi
  • Shutdown
  • Move OS disk to scsi0 (VirtIO SCSI)
  • Remove temp disk
  • Boot normally (no Safe Mode needed)

Important correction on SATA
“Final disk stays on sata0 — correct and permanent”
This is not correct from a performance perspective.
  • SATA (AHCI emulation):
    • Higher CPU overhead
    • No multiqueue
    • Lower IOPS / higher latency
  • VirtIO SCSI:
    • Paravirtualized
    • Supports multiqueue + iothreads
    • Much better under load (especially SQL / shared storage like iSCSI/NFS)
SATA is fine as a temporary boot bridge, but OS disks should end up on VirtIO SCSI.


Machine type
  • Agree on not switching from i440fx → Q35 mid-migration (EFI NVRAM issue)
  • Not a VirtIO limitation, just how firmware stores PCI paths
  • Q35 is better long-term, but not worth changing during migration

Recommended final config
  • OS disk on VirtIO SCSI (scsi0)
  • iothread=1
  • Enable multiqueue if needed
  • discard=on / ssd=1 if backed by flash
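As a sketch, that final config translates to something like this (VM ID and volume name are placeholders; flag availability depends on your PVE version):

```shell
qm set 101 --scsihw virtio-scsi-single
qm set 101 --scsi0 local-lvm:vm-101-disk-0,iothread=1,discard=on,ssd=1
qm set 101 --boot order=scsi0
```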

Bottom line
  • Safe Mode isn’t required if drivers are handled earlier
  • SATA should not be the final state
  • The small VirtIO disk method is cleaner and keeps proper performance characteristics intact
 
Too many steps for my liking, I prefer the method described here: https://www.croit.io/blog/migrate-windows-vms-to-proxmox-ve