Windows Server 2022 boots with "VMware PVSCSI" but not with "VirtIO SCSI" despite installing drivers

EpicLPer

Sep 7, 2022
Heya,

I've just successfully migrated my ESXi server over to Proxmox and moved all VMs onto that host, however I'm having a slight problem now.
The server boots just fine when the SCSI controller is set to "VMware PVSCSI", but bluescreens and throws me into recovery when I choose "VirtIO SCSI (single)". Prior to migration I installed the VirtIO drivers and removed VMware Tools, but it seems there are still some remnants of the VMware drivers causing interference? Not sure...

One thing I tried was booting into recovery mode and loading the drivers there, first via drvload and then pnputil (dism didn't want to work). I was then able to see and access the C: drive, however after rebooting the server behaved the same as before.
I also tried the steps from someone else who suggested adding a 2 GB VirtIO Block device; that shows up just fine in Windows itself and I could format it, with no further driver installs needed.
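For anyone following along, the recovery-mode attempt described above looks roughly like this. Treat it as a sketch, not a verified procedure: the drive letters and the vioscsi.inf path depend on how the virtio-win ISO is mounted in your recovery environment, so both are assumptions here.

```shell
REM Windows Recovery Environment (cmd). X: is assumed to be the mounted
REM virtio-win ISO; adjust letters and paths to your setup.

REM Load the VirtIO SCSI driver into the running recovery session:
drvload X:\vioscsi\2k22\amd64\vioscsi.inf

REM Persist it into the driver store so it survives a reboot:
pnputil /add-driver X:\vioscsi\2k22\amd64\vioscsi.inf /install

REM Alternative: inject it into the offline Windows installation via DISM
REM (this is the step that reportedly "didn't want to work" above):
dism /Image:C:\ /Add-Driver /Driver:X:\vioscsi\2k22\amd64\vioscsi.inf
```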

Has anyone encountered this before? I'm going to stay with VMware PVSCSI for now, but I want to switch over eventually for better performance (and potential other features). If it really is the remains of the old SCSI drivers, how would I get rid of them in a clean way?

Thanks!
 
step 1. add a SECOND SCSI bus to the machine, type VirtIO.
step 2. add a temporary virtual disk to the SECOND SCSI bus.
step 3. boot into Windows. Windows will detect the new bus. If it's showing as an unknown device, install the VirtIO driver.
step 4. once Windows sees the temporary drive in Disk Management, you are ready to shut down.
step 5. remove the temporary virtual drive, detach the actual virtual drive, and remove the VMware PVSCSI HBA. You can now attach your original virtual drive to the remaining VirtIO SCSI HBA.
step 6. reset the boot order to mark your drive as the boot device (Options > Boot Order).

voila! the machine should boot normally now.
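On the Proxmox side, the steps above can be sketched with `qm`. This is a hedged outline, not a definitive recipe: the VM ID (100), the storage name (local-lvm), the disk slots, and the existing disk volume name are all assumptions you'd replace with your own values.

```shell
# Sketch only -- VM ID, storage name and disk slots are assumptions.

# Make sure the VM's SCSI controller type is VirtIO SCSI (single):
qm set 100 --scsihw virtio-scsi-single

# Steps 1-2: add a small (1 GB) temporary disk on a SCSI slot so Windows
# detects the VirtIO bus and binds a driver to it:
qm set 100 --scsi1 local-lvm:1

# (boot Windows, install the VirtIO driver if prompted, confirm the disk
#  appears in Disk Management, then shut down)

# Step 5: drop the temporary disk and attach the real boot disk to SCSI:
qm set 100 --delete scsi1
qm set 100 --scsi0 local-lvm:vm-100-disk-0

# Step 6: point the boot order at the SCSI disk:
qm set 100 --boot order=scsi0
```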
 
Heya, sorry for the late reply, I was a bit busy the last few days.

I've tried all of these steps. In the meantime I set the boot and second drive to "SATA" and added an additional test disk via VirtIO SCSI Single, which shows up just fine and is also accessible, so the drivers themselves are working.
However, once I move the boot disk over to that controller it does try to boot and shows the Windows loading circle, but after a few seconds it still displays "Inaccessible Boot Device".

I've also tried to remove all remnants of the VMware drivers (except for the VMware display driver, which refused to be removed for some reason), but it still threw the same error.
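For reference, cleaning out leftover third-party driver packages is usually done with pnputil from an elevated prompt inside the guest. A minimal sketch; the `oem12.inf` name below is a placeholder you would read off the enumeration output, not a real value:

```shell
REM Elevated cmd inside the Windows guest.
REM List all third-party driver packages and look for VMware entries:
pnputil /enum-drivers

REM Delete one leftover package ("oem12.inf" is a placeholder taken from
REM the enum output above) and uninstall it from any devices using it:
pnputil /delete-driver oem12.inf /uninstall
```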
 
before we just give up, let's have a look at your vmid.conf.
Current config is:

Code:
agent: 1
bios: ovmf
boot: order=sata0
cores: 8
cpu: host
efidisk0: local-lvm:vm-100-disk-1,size=4M
hostpci0: 0000:02:00.0,pcie=1,x-vga=1
hostpci1: 0000:02:00.1,pcie=1
ide2: local:iso/virtio-win.iso,media=cdrom,size=715188K
machine: pc-q35-8.1
memory: 20480
meta: creation-qemu=8.1.5,ctime=1711913379
name: WinServer2022-1
net0: e1000=XXXXX,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win11
parent: VorVMWARERemoval
sata0: local-lvm:vm-100-disk-0,cache=writeback,discard=on,size=95G,ssd=1
sata1: HDD1-4TB-thin:vm-100-disk-0,cache=writeback,discard=on,size=960G
scsihw: virtio-scsi-single
smbios1: uuid=XXXXX
sockets: 1
vga: vmware
vmgenid: XXXXX
 
I meant the config for the VM when it doesn't boot.
So... this is kind of weird now, but... good?
I just recreated the "not booting state" again to grab the config and then... suddenly it booted up. I set everything back as it was before, detached the SATA disks, reattached them as SCSI (which didn't work before), and now it suddenly boots.

I guess this is the "Windows magic" once again :oops:

Anyways, just to be sure, here's the "previously unbootable config":
Code:
agent: 1
bios: ovmf
boot: order=scsi0
cores: 8
cpu: host
efidisk0: local-lvm:vm-100-disk-1,size=4M
hostpci0: 0000:02:00.0,pcie=1,x-vga=1
hostpci1: 0000:02:00.1,pcie=1
ide2: local:iso/virtio-win.iso,media=cdrom,size=715188K
machine: pc-q35-8.1
memory: 20480
meta: creation-qemu=8.1.5,ctime=1711913379
name: WinServer2022-1
net0: e1000=XXXXX,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win11
scsi0: local-lvm:vm-100-disk-0,cache=writeback,discard=on,iothread=1,size=95G,ssd=1
scsi1: HDD1-4TB-thin:vm-100-disk-0,cache=writeback,discard=on,iothread=1,size=960G
scsihw: virtio-scsi-single
smbios1: uuid=XXXXX
sockets: 1
vga: vmware
vmgenid: XXXXX
 
might be worth comparing the current config against the above and seeing what's different. The only note I have: do not enable discard on HDD-backed datastores, or you will experience intermittent slow performance. Instead, trim your disks on a schedule (e.g. in the middle of the night).
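A quick way to do that comparison is to diff the disk-related lines of the two configs. A minimal sketch using placeholder excerpts taken from the configs posted above:

```shell
# Excerpts of the two configs, written to temp files for comparison.
cat > /tmp/working.conf <<'EOF'
boot: order=sata0
sata0: local-lvm:vm-100-disk-0,cache=writeback,discard=on,size=95G,ssd=1
sata1: HDD1-4TB-thin:vm-100-disk-0,cache=writeback,discard=on,size=960G
EOF
cat > /tmp/failing.conf <<'EOF'
boot: order=scsi0
scsi0: local-lvm:vm-100-disk-0,cache=writeback,discard=on,iothread=1,size=95G,ssd=1
scsi1: HDD1-4TB-thin:vm-100-disk-0,cache=writeback,discard=on,iothread=1,size=960G
EOF
# diff exits non-zero when the files differ, so mask the exit code:
diff -u /tmp/working.conf /tmp/failing.conf || true
```

In this case the diff immediately surfaces the controller change (sata vs. scsi, plus `iothread=1`), which is the kind of difference worth staring at.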
 
Oh, I didn't know that about discard :)
I've already enabled TRIM on the PVE host via systemctl enable fstrim.timer and systemctl start fstrim.timer, I hope that's enough? Or do I also have to set up TRIM inside the VM?
 
apologies, I think I'm sending you down the wrong rabbit hole.

fstrim on the host (PVE) will trim filesystems accessible to the host; your Windows guest's filesystem is not.
You do need to enable discard AND SSD emulation for the guest to properly issue TRIM instructions to the underlying storage, which should work fine. The scheduling of TRIM commands has to be configured inside the guest, since it assumes the disk is SSD-backed. Windows isn't my specialty, but I think this may lead you down the right path: https://www.howtogeek.com/257196/ho...enabled-for-your-ssd-and-enable-it-if-it-isnt
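Inside the Windows guest, checking whether TRIM is active and triggering a manual retrim typically looks like this (assuming the system drive is C:; run from an elevated prompt):

```shell
REM Check whether Windows will issue TRIM (DisableDeleteNotify = 0 means TRIM is enabled):
fsutil behavior query DisableDeleteNotify

REM Manually trigger a retrim of C: -- the same operation the scheduled
REM "Optimize Drives" task performs on SSDs:
defrag C: /L
```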
 
