[SOLVED] Systems using PVSCSI not booting after installing updates

NetDream
Mar 22, 2023
Hello,

Yesterday I installed the latest updates from the pve-no-subscription branch.
After rebooting I discovered that two systems using the PVSCSI controller were no longer booting. The systems use the pc-i440fx-7.0 machine type and OVMF (UEFI) BIOS.
Checking the BIOS, it seems that the system is not detecting the hard disk.
The systems using the VirtIO SCSI controller are working without any problem.

We are currently running pve-qemu-kvm 7.2.0-8.
I have tried downgrading to 7.1.0-4 without success.

We have also tried creating new VMs and attaching the existing hard drives, without success.

Only when I change the SCSI adapter to VirtIO SCSI does the system attempt to boot, but it then fails due to missing drivers.
When selecting the LSI controllers, the BIOS also fails to recognize the hard drives.
Both systems were using the PVSCSI controller because they were migrated from an ESXi environment by an external party.

We are running 4 PVE servers, 3 of which have the updates applied. I have tried migrating the VMs to each host, and the systems only boot on the server without the updates installed.

Currently the systems are running on the unpatched server, and we will try to replace the SCSI controller in the coming days.
However, it seems that something in the updates has caused this issue to appear.
 
Hi,
have you tried using machine version 7.1 instead of the latest one?
 
Hi Chris,

Thank you for your reply.
I have tried several machine versions, including 7.2, 7.1, 7.0, and 6.2, all resulting in the same behavior.
It seems that the hard drives are not recognized; the following screenshots were taken using machine version 7.1:


This is the Boot Manager menu using the PVSCSI controller (screenshot attached).


And this is the Boot Manager menu using the VirtIO controller (screenshot attached).
 
Hi,
this seems to be a regression with pve-edk2-firmware. Can you try apt install pve-edk2-firmware=3.20220526-1 and see if it works then? Another workaround might be attaching the disk on a bus other than SCSI.
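Roughly, the two workarounds would look like this on the command line; the VM ID 100 and the storage/volume names below are only placeholders:

# Workaround 1: downgrade the UEFI firmware package on the node
apt install pve-edk2-firmware=3.20220526-1

# Workaround 2: re-attach the disk on a bus other than SCSI, e.g. SATA
qm set 100 --delete scsi0                      # the volume shows up as "unused0" in the config
qm set 100 --sata0 local-lvm:vm-100-disk-0     # re-attach the same volume on SATA
qm set 100 --boot order=sata0                  # point the boot order at the new bus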
 
Hello Fiona,

I will try this evening; I won't be able to reboot the host until then.
 
You don't need to reboot the host after installing the package. Or do you mean the VM by "host" ;)?
 
I mean the Proxmox hypervisor, would it be possible to install the firmware on the live system?
 
I mean the Proxmox hypervisor, would it be possible to install the firmware on the live system?
Yes. After installing the package, it will be used for new VM boots (does not apply to reboots within the guest). No need to reboot the host.
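For example, with a placeholder VM ID 100, a full stop and start from the host side is enough, while a reboot from inside the guest keeps the old firmware:

qm shutdown 100    # or: qm stop 100
qm start 100       # the fresh start uses the newly installed pve-edk2-firmware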
 
Hi Fiona,

Just tried downgrading and it seems to work.
Running apt install pve-edk2-firmware=3.20220526-1 allows the machine to detect the SCSI drive and attempt to boot.
The system now crashes on boot, but it's possible that this is related to some troubleshooting we did yesterday. I will try to restore a backup and let you know whether the system boots.
 
A fix is now available with pve-edk2-firmware=3.20221111-2 on the no-subscription repository. Upstream disabled the support by default because there is no active maintainer anymore, so in the long term you might still want to switch to a different SCSI controller.
 
I always change the controllers to VirtIO for VMware or Hyper-V migrations. The performance is significantly better.
If you have a Windows VM with only one disk, simply boot from IDE, SATA, or PVSCSI if necessary and attach a second small disk on VirtIO. Then install the driver, shut down the VM, delete the small disk, and set the boot disk to VirtIO. Then Windows will also be able to handle it.
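As a rough CLI sketch of that procedure (VM ID 100, the storage local-lvm, and the volume name are placeholders):

qm set 100 --virtio1 local-lvm:1                # add a small temporary disk on the VirtIO bus
# boot Windows, install the VirtIO storage driver (e.g. from the virtio-win ISO), then shut down
qm set 100 --delete virtio1                     # detach the temporary disk again
qm set 100 --delete scsi0                       # the boot disk becomes "unused0"
qm set 100 --virtio0 local-lvm:vm-100-disk-0    # re-attach the boot disk on the VirtIO bus
qm set 100 --boot order=virtio0                 # boot from the re-attached disk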
 
I always change the controllers to VirtIO for VMware or Hyper-V migrations. The performance is significantly better.
If you have a Windows VM with only one disk, simply boot from IDE, SATA, or PVSCSI if necessary and attach a second small disk on VirtIO. Then install the driver, shut down the VM, delete the small disk, and set the boot disk to VirtIO. Then Windows will also be able to handle it.

This is the way to go, but these two systems were maintained by an external party, and we didn't realize they were still using PVSCSI.
It seems that using SATA results in boot errors with Server 2019, but IDE works.
 
Hi,
this seems to be a regression with pve-edk2-firmware. Can you try apt install pve-edk2-firmware=3.20220526-1 and see if it works then? Another workaround might be attaching the disk on a bus other than SCSI.
Hi Fiona,

Thank you for explaining this to the community.

Would it be possible to check whether PVSCSI has been disabled in the latest version of pve-edk2-firmware and re-enable it? This could be useful because nested ESXi could then run on SCSI storage if needed, which is helpful for those who need to test migrations from ESXi in a lab environment.

Thanks!
 
Hi,
Would it be possible to check whether PVSCSI has been disabled in the latest version of pve-edk2-firmware and re-enable it? This could be useful because nested ESXi could then run on SCSI storage if needed, which is helpful for those who need to test migrations from ESXi in a lab environment.
it is enabled again in all versions since 4.2023.08-2. Also for the current version: https://git.proxmox.com/?p=pve-edk2...6202cf1936ac5cb95ef5516d1561134fd;hb=HEAD#l32
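To check which firmware version a node currently has installed, for example:

pveversion -v | grep pve-edk2-firmware
apt-cache policy pve-edk2-firmware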
 
Interesting... I thought that if I install a nested ESXi, it would be able to pick up the paravirtual SCSI controller, as it ships with the corresponding driver by default. Yet, it only picks up a SATA controller/disk. Has this been tested before?
Please post the VM configuration (qm config <ID>). What version of ESXi did you try?
 
Please post the VM configuration (qm config <ID>). What version of ESXi did you try?

Hi Fiona,

I use ESXi 7.0 Update 3n (build 21930508).

Below is the VM config output:

bios: ovmf
boot: order=scsi0;ide2;net0
cores: 6
cpu: host
efidisk0: local:103/vm-103-disk-0.vmdk,efitype=4m,pre-enrolled-keys=1,size=528K
ide2: local:iso/VMware-VMvisor-Installer-7.0U3n-21930508.x86_64.iso,media=cdrom,size=391216K
machine: q35
memory: 24576
meta: creation-qemu=9.0.2,ctime=1732147375
name: lab-n-esxi02
net0: vmxnet3=BC:24:11:1D:96:C5,bridge=vmbr0,tag=10
net1: vmxnet3=BC:24:11:FC:EC:8D,bridge=vmbr0,tag=10
numa: 1
ostype: l26
scsi0: local:103/vm-103-disk-1.vmdk,size=250G,ssd=1
scsihw: pvscsi
smbios1: uuid=0daec235-6326-418d-85cb-1b342225fa33
sockets: 2
tags: esxi;nested;vsphere
tpmstate0: local:103/vm-103-disk-2.raw,size=4M,version=v2.0
vga: vmware
vmgenid: 3753b397-da6a-4b1d-9722-76a58c167d9f

 
Hi Falk,

Thanks for the suggestion.

I tried disabling the NUMA setting and it didn't help. Same empty list of disks.
I have no problem with an ESXi 8 image. With my ESXi 7 image, which I still have, only SATA works.
 
