[SOLVED] UEFI Boot Display suddenly not initializing (yet?)

scupitt

Oct 1, 2024
Hello all!

I'm having a bear of an issue trying to figure out why my Windows Server VM suddenly went offline. For context, I'd been having a blue-screening issue related to iSCSI. In an attempt to fix it, my colleague unchecked the 'Use LUNs Directly' option on the iSCSI storage (we run 3 or 4 VMs off of iSCSI), and just like that...
[Screenshot: 2024-10-01 at 00.41.20.png]
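In case it helps, here is roughly how I've been double-checking things from the node shell. This is only a sketch: the VM's ID is 104 (it shows up in the task log further down), and the storage names will obviously differ per setup.

Code:
# Rough checks from the node shell (VMID 104; adjust names for your setup)
qm config 104                # dump the VM's current configuration
cat /etc/pve/storage.cfg     # inspect the iSCSI storage definition that was changed
pvesm status                 # confirm the iSCSI storage is still online/active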

Naturally, the first step was to undo the setting change we had just made, but even after rebooting the whole node and the cluster nothing changed. My next step was to see whether I could get anything to boot under legacy BIOS, and I could: when I switch the VM to SeaBIOS without changing anything else, the Windows installer disc comes up, and I also booted an Ubuntu ISO to verify that the SCSI drive is still detected as bootable and that the partitioning and file contents still look correct.
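The firmware switching above was just the BIOS setting on the VM, roughly like this (a sketch, equivalent to Hardware -> BIOS in the GUI):

Code:
# Switch the VM to legacy BIOS for testing, then back to UEFI (OVMF)
qm set 104 --bios seabios   # installer/Ubuntu ISOs boot fine here
qm set 104 --bios ovmf      # back to UEFI, where the display never initializes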

I've been scouring the forums for a fix but can't seem to find one; from changing the machine type to changing hotplug options, I'm drawing a blank lol. Here are some screenshots of the machine configuration. Any help is VERY appreciated, and I can provide any information necessary!

[Screenshots: 2024-10-01 at 00.50.08.png, 2024-10-01 at 00.49.26.png]
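The machine-type and hotplug changes mentioned above were roughly along these lines (a sketch only; the exact machine version strings depend on the installed QEMU):

Code:
# Try different machine types/versions (GUI: Hardware -> Machine)
qm set 104 --machine pc-i440fx-5.1        # pin an older i440fx version
qm set 104 --machine q35                  # or try the q35 chipset
# Toggle which device classes are hotpluggable (GUI: Options -> Hotplug)
qm set 104 --hotplug disk,network,usb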

For reference, all of the other VMs in my cluster are working fine, including a Windows 11 VM on another node using UEFI and SCSI, and a VM on the local machine that uses SCSI. The other Windows machine looks to have about the same config:
[Screenshot: 2024-10-01 at 00.55.14.png]
 
Two things I notice (only on the non-functioning VM):

1. You have no TPM (AFAIK this is required for Windows 11).
2. You are using an old version (5.1) of i440fx on the VM - try a more recent version (rough commands sketched below).
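Something along these lines should do it (a sketch only; 'local-lvm' is just an example storage name, and the machine version string depends on what your QEMU offers):

Code:
# Add a TPM 2.0 state volume and move to a newer machine version
# ('local-lvm' is an example storage name - adjust for your setup)
qm set 104 --tpmstate0 local-lvm:1,version=v2.0
qm set 104 --machine pc-i440fx-9.0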
 
Hi, yes:

I added a TPM State drive and it's still not booting up.
I switched the machine version from 9 to 5.1 while debugging, and put it back to 9 when I tried the TPM. For some reason the 'latest' option isn't showing up, but hopefully that's no big deal.
 
A Clue

Code:
Oct 01 01:54:13 pve pvedaemon[1169]: <root@pam> starting task UPID:pve:00351B65:0262E476:66FB9C95:qmstart:104:root@pam:
Oct 01 01:54:14 pve systemd[1]: Started 104.scope.
Oct 01 01:54:15 pve kernel: tap104i0: entered promiscuous mode
Oct 01 01:54:15 pve kernel: vmbr0: port 2(fwpr104p0) entered blocking state
Oct 01 01:54:15 pve kernel: vmbr0: port 2(fwpr104p0) entered disabled state
Oct 01 01:54:15 pve kernel: fwpr104p0: entered allmulticast mode
Oct 01 01:54:15 pve kernel: fwpr104p0: entered promiscuous mode
Oct 01 01:54:15 pve kernel: tg3 0000:03:00.1 eno2: entered promiscuous mode
Oct 01 01:54:15 pve kernel: vmbr0: port 2(fwpr104p0) entered blocking state
Oct 01 01:54:15 pve kernel: vmbr0: port 2(fwpr104p0) entered forwarding state
Oct 01 01:54:15 pve kernel: fwbr104i0: port 1(fwln104i0) entered blocking state
Oct 01 01:54:15 pve kernel: fwbr104i0: port 1(fwln104i0) entered disabled state
Oct 01 01:54:15 pve kernel: fwln104i0: entered allmulticast mode
Oct 01 01:54:15 pve kernel: fwln104i0: entered promiscuous mode
Oct 01 01:54:15 pve kernel: fwbr104i0: port 1(fwln104i0) entered blocking state
Oct 01 01:54:15 pve kernel: fwbr104i0: port 1(fwln104i0) entered forwarding state
Oct 01 01:54:15 pve kernel: fwbr104i0: port 2(tap104i0) entered blocking state
Oct 01 01:54:15 pve kernel: fwbr104i0: port 2(tap104i0) entered disabled state
Oct 01 01:54:15 pve kernel: tap104i0: entered allmulticast mode
Oct 01 01:54:15 pve kernel: fwbr104i0: port 2(tap104i0) entered blocking state
Oct 01 01:54:15 pve kernel: fwbr104i0: port 2(tap104i0) entered forwarding state
Oct 01 01:54:16 pve kernel: tap104i1: entered promiscuous mode
Oct 01 01:54:16 pve kernel: vmbr1: port 2(fwpr104p1) entered blocking state
Oct 01 01:54:16 pve kernel: vmbr1: port 2(fwpr104p1) entered disabled state
Oct 01 01:54:16 pve kernel: fwpr104p1: entered allmulticast mode
Oct 01 01:54:16 pve kernel: fwpr104p1: entered promiscuous mode
Oct 01 01:54:16 pve kernel: vmbr1: port 2(fwpr104p1) entered blocking state
Oct 01 01:54:16 pve kernel: vmbr1: port 2(fwpr104p1) entered forwarding state
Oct 01 01:54:16 pve kernel: fwbr104i1: port 1(fwln104i1) entered blocking state
Oct 01 01:54:16 pve kernel: fwbr104i1: port 1(fwln104i1) entered disabled state
Oct 01 01:54:16 pve kernel: fwln104i1: entered allmulticast mode
Oct 01 01:54:16 pve kernel: fwln104i1: entered promiscuous mode
Oct 01 01:54:16 pve kernel: fwbr104i1: port 1(fwln104i1) entered blocking state
Oct 01 01:54:16 pve kernel: fwbr104i1: port 1(fwln104i1) entered forwarding state
Oct 01 01:54:16 pve kernel: fwbr104i1: port 2(tap104i1) entered blocking state
Oct 01 01:54:16 pve kernel: fwbr104i1: port 2(tap104i1) entered disabled state
Oct 01 01:54:16 pve kernel: tap104i1: entered allmulticast mode
Oct 01 01:54:16 pve kernel: fwbr104i1: port 2(tap104i1) entered blocking state
Oct 01 01:54:16 pve kernel: fwbr104i1: port 2(tap104i1) entered forwarding state
Oct 01 01:54:16 pve pvedaemon[1169]: <root@pam> end task UPID:pve:00351B65:0262E476:66FB9C95:qmstart:104:root@pam: OK
Oct 01 01:54:18 pve corosync-qdevice[1079]: Connect timeout
Oct 01 01:54:18 pve corosync-qdevice[1079]: Can't connect to qnetd host. (-5986): Network address not available (in use?)
Oct 01 01:54:26 pve corosync-qdevice[1079]: Connect timeout
Oct 01 01:54:26 pve corosync-qdevice[1079]: Can't connect to qnetd host. (-5986): Network address not available (in use?)
Oct 01 01:54:34 pve corosync-qdevice[1079]: Connect timeout
Oct 01 01:54:34 pve corosync-qdevice[1079]: Can't connect to qnetd host. (-5986): Network address not available (in use?)
Oct 01 01:54:42 pve corosync-qdevice[1079]: Connect timeout
Oct 01 01:54:42 pve corosync-qdevice[1079]: Can't connect to qnetd host. (-5986): Network address not available (in use?)
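The qdevice timeouts are the only errors I can see, so I've been checking quorum separately; roughly like this (a sketch):

Code:
# Check quorum / qdevice state separately from the VM start
pvecm status                              # quorum information and qdevice membership
journalctl -u corosync-qdevice -n 50      # recent qdevice log entries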
 
Assuming the VirtIO drivers were already installed in the Windows 11 environment (otherwise that VirtIO SCSI controller would not work), why don't you use the VirtIO network devices (like in the working VM) instead of the e1000?

One other thing you could also try is the x86-64-v2-AES processor type, as in the other VM (instead of host).
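Roughly like this (a sketch; the bridge names and firewall flag are taken from your log, and you would normally keep the existing MAC addresses):

Code:
# Switch NIC models to VirtIO and set the CPU type
# (to keep the existing MAC, use e.g. virtio=XX:XX:XX:XX:XX:XX instead)
qm set 104 --net0 virtio,bridge=vmbr0,firewall=1
qm set 104 --net1 virtio,bridge=vmbr1,firewall=1
qm set 104 --cpu x86-64-v2-AES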
 
So the one that's broken is actually running Windows Server.

I changed it over to a VirtIO device and changed the processor type; still nothing, however.

It also still won't boot in UEFI mode to the ISO images (Windows installer and Ubuntu) that I have stored locally on the PVE server, which to me means that even if the SCSI drivers had somehow been uninstalled, or the SCSI drive had a failed connection or something like that, it should still display something.
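(One thing worth double-checking at this point is the boot order; a rough sketch, assuming the usual ide2 = CD-ROM and scsi0 = system disk device names:)

Code:
# Put the CD-ROM first to rule out a boot-order problem
# (assumes the usual ide2 = CD-ROM and scsi0 = system disk names)
qm set 104 --boot 'order=ide2;scsi0'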
 
Looks like your EFI disk is not functioning or is corrupt. Try removing and re-adding it, and turn off pre-enrolled keys.
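Something like this (a sketch; 'local-lvm' is just an example target storage, and do it with the VM powered off):

Code:
# Remove the old EFI vars disk and re-create it with pre-enrolled keys disabled
# ('local-lvm' is an example storage name - adjust for your setup)
qm set 104 --delete efidisk0
qm set 104 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0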
 
This was it! I added the new EFI disk on local storage so *hopefully* it won't depend on anything. The machine booted up immediately. Thanks!
 
