Running PVE 9.1 with iSCSI Dell ME4024
So it all started (I realize I probably should have left well enough alone, lesson learned) when I switched the disk type from SCSI to VirtIO, using the VirtIO SCSI single controller. At the time I had the virtio .285 drivers installed, and I've seen reports that the .285 drivers can have issues. I had a few BSODs, and one even corrupted a DC to the point that I removed it from the domain and built a new one. It was only running AD and DNS services, so not a big deal. I loaded a fresh copy of Server 2022 with the .285 drivers, still had some blue screen issues, so I uninstalled .285 and installed the .271 drivers. I also ended up changing the drive back to SCSI. I'm still getting random crashes. The Windows logs just show that the system shut down uncleanly, unless I should be looking elsewhere. What can I look at to narrow this down and pinpoint the problem? I've wondered whether Windows may not have initialized the drivers correctly since the disk switch.
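As a first step inside the guest, here's a minimal sketch (run from an elevated PowerShell prompt) for gathering the evidence: whether the crashes are actual bugchecks or hard resets, whether dumps are being written, and which VirtIO storage driver is really loaded. The '*VirtIO*' name filter is an assumption on my part; the Red Hat drivers usually show "VirtIO" in the device name, but check Device Manager if it matches nothing.

```powershell
# Unexpected reboots (Kernel-Power 41) and recorded bugchecks (BugCheck 1001)
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 41, 1001 } -MaxEvents 20 |
    Format-Table TimeCreated, Id, ProviderName, Message -Wrap

# Is Windows actually writing crash dumps? (default dump locations assumed)
Get-ChildItem C:\Windows\Minidump -ErrorAction SilentlyContinue
Get-Item C:\Windows\MEMORY.DMP -ErrorAction SilentlyContinue

# Which VirtIO driver version is actually loaded (name filter is an assumption)
Get-CimInstance Win32_PnPSignedDriver |
    Where-Object { $_.DeviceName -like '*VirtIO*' } |
    Select-Object DeviceName, DriverVersion, DriverDate
```

If you see Kernel-Power 41 events with no matching BugCheck 1001 and no dump files, the guest is being reset from outside rather than blue-screening, which shifts suspicion back toward the hypervisor or storage layer instead of the in-guest drivers.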
I don't think my backend storage is the problem, since other VMs on the 3-node cluster run without issue, and I also migrated the VM to another node to try to rule out the host. My gut is telling me it's a driver issue related to the disk change; I just don't know the best way to prove it and correct it.
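To help prove or rule out the host and storage side, a couple of quick checks on the Proxmox node might be worth running (VM ID 100 below is a placeholder, substitute your own):

```bash
# Confirm which controller and disk type the VM is really configured with
qm config 100 | grep -E 'scsi|virtio|ide|sata'

# Kernel log around the crash timestamps: iSCSI session drops, multipath
# path failures, or I/O errors here would point at storage, not guest drivers
journalctl -k --since "2 days ago" | grep -iE 'iscsi|multipath|i/o error|reset'
```

If the host kernel log is clean at the crash times, that supports the driver theory; iSCSI session drops or path failures lining up with the crashes would point back at the ME4024 path instead.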