[SOLVED] Systems using PVSCSI not booting after installing updates

NetDream
Mar 22, 2023
Hello,

Yesterday I installed the latest updates from the pve-no-subscription branch.
After rebooting, I discovered that two systems using the PVSCSI controller were no longer booting. The systems are using the pc-i440fx-7.0 machine type and OVMF (UEFI) BIOS.
Checking the BIOS, it seems that the system is not detecting the hard disk.
The systems using the VirtIO SCSI controller are working without any problem.

We are currently running pve-qemu-kvm 7.2.0-8.
I have tried downgrading to 7.1.0-4, without success.

We have also tried creating new VMs and attaching the existing hard drives, without success.

Only if I change the SCSI controller to VirtIO SCSI does the system attempt to boot, but it then fails due to missing drivers.
When selecting the LSI controllers, the BIOS also fails to recognize the hard drives.
Both systems were using the PVSCSI controller because they were migrated from ESXi by an external party.

We are running four PVE servers, three of which have the updates applied. I have tried migrating the affected VMs to each host, and they only boot on the server that does not have the updates installed.

Currently the systems are running on the unpatched server, and we will try to replace the SCSI controller in the coming days.
However, it seems that something in the updates has caused this issue to appear.
 
Hi,
have you tried using machine version 7.1 instead of the latest one?
 
Hi Chris,

Thank you for your reply.
I have tried several machine versions, including 7.2, 7.1, 7.0, and 6.2, all resulting in the same behavior.
It seems that the hard drives are not recognized; the following screenshots were taken using machine version 7.1:


This is the Boot Manager Menu using the PVSCSI controller:
[screenshot attached]


And this is the Boot Manager Menu using the VirtIO controller:
[screenshot attached]
 
Hi,
this seems to be a regression with pve-edk2-firmware. Can you try apt install pve-edk2-firmware=3.20220526-1 and see if it works then? Another workaround might be attaching the disk on a different bus than SCSI.
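
If it helps, a sketch of the downgrade with a hold, so that a routine apt upgrade does not pull the affected version back in (remove the hold once a fixed package is out):

apt install pve-edk2-firmware=3.20220526-1
apt-mark hold pve-edk2-firmware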
 
Hello Fiona,

I will try this evening; I won't be able to reboot the host until then.
 
You don't need to reboot the host after installing the package. Or do you mean the VM by "host" ;)?
 
I mean the Proxmox hypervisor; would it be possible to install the firmware on the live system?
 
I mean the Proxmox hypervisor; would it be possible to install the firmware on the live system?
Yes. After installing the package, it will be used for new VM boots (does not apply to reboots within the guest). No need to reboot the host.
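
For example, assuming VMID 100 (adjust to yours), a full stop/start picks up the new firmware, while a reboot inside the guest does not:

qm shutdown 100
qm start 100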
 
Hi Fiona,

Just tried downgrading and it seems to work.
Running apt install pve-edk2-firmware=3.20220526-1 allows the machine to detect the SCSI drive and attempt to boot.
The system now crashes on boot, but it's possible that this is related to some troubleshooting we did yesterday. I will try to restore a backup and let you know if the system boots.
 
A fix is now available with pve-edk2-firmware=3.20221111-2 on the no-subscription repository. Upstream disabled PVSCSI support by default because there is no active maintainer anymore. So in the long term, you might still want to switch to a different SCSI controller.
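
To move to the fixed package (and drop a hold if you set one during the downgrade), something like:

apt update
apt-mark unhold pve-edk2-firmware
apt install pve-edk2-firmware=3.20221111-2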
 
I always change the controllers to VirtIO for VMware or Hyper-V migrations; the performance is significantly better.
If you have a Windows VM with only one disk, simply boot from IDE, SATA, or PVSCSI if necessary and attach a second small disk as VirtIO. Then install the driver, shut down the VM, delete the small disk, and set the boot disk to VirtIO. Windows will then be able to handle it.
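
An untested CLI sketch of that procedure, assuming VMID 100, storage local-lvm, and a boot disk named vm-100-disk-0 (all example names, adjust to your setup):

qm set 100 --virtio1 local-lvm:1                # attach a small 1 GiB helper disk on VirtIO
# boot Windows, install the VirtIO driver, then shut the VM down
qm set 100 --delete virtio1                     # detach the helper disk (clean up the leftover volume afterwards)
qm set 100 --delete scsi0                       # detach the boot disk; it becomes an unused volume
qm set 100 --virtio0 local-lvm:vm-100-disk-0    # reattach the existing volume on the VirtIO bus
qm set 100 --boot order=virtio0                 # boot from it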
 
I always change the controllers to VirtIO for VMware or Hyper-V migrations; the performance is significantly better.
If you have a Windows VM with only one disk, simply boot from IDE, SATA, or PVSCSI if necessary and attach a second small disk as VirtIO. Then install the driver, shut down the VM, delete the small disk, and set the boot disk to VirtIO. Windows will then be able to handle it.

This is the way to go, but these two systems were maintained by an external party, and we didn't realize they were still using PVSCSI.
It seems that using SATA results in boot errors with Server 2019, but IDE works.
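
Moving the disk to IDE follows the same pattern as the sketch above (again with example names):

qm set 100 --delete scsi0
qm set 100 --ide0 local-lvm:vm-100-disk-0
qm set 100 --boot order=ide0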
 
Hi,
this seems to be a regression with pve-edk2-firmware. Can you try apt install pve-edk2-firmware=3.20220526-1 and see if it works then? Another workaround might be attaching the disk on a different bus than SCSI.
Hi Fiona,

Thank you for explaining this to the community.

Would it be possible to check whether PVSCSI has been disabled in the latest version of pve-edk2-firmware and re-enable it? This would be useful because nested ESXi could then run on SCSI storage if needed, which helps those who need to test migrations from ESXi in a lab setup.
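
In the meantime, for a lab, building OVMF from upstream edk2 with the define flipped back on might work; an untested sketch (PVSCSI_ENABLE is the build define upstream switched off by default, check the .dsc in your tree for the exact name):

git clone https://github.com/tianocore/edk2.git
cd edk2
git submodule update --init
make -C BaseTools
. edksetup.sh
build -a X64 -t GCC5 -p OvmfPkg/OvmfPkgX64.dsc -D PVSCSI_ENABLE=TRUE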

Thanks!
 
