UEFI PXE Boot Issues After Upgrading from Proxmox VE 8.3.4 to 8.3.5

finsel

New Member
Apr 2, 2025
Hello Proxmox Community. First post here!

I'm encountering some significant issues after upgrading my Proxmox VE host/cluster from version 8.3.4 to 8.3.5 and would appreciate any insights or guidance.

Environment:
  • Proxmox VE version (before): 8.3.4
  • Proxmox VE version (after): 8.3.5
  • VM BIOS Type: OVMF (UEFI)
  • Network card model (tested): Intel E1000 (primarily); possibly others if relevant.

Working Scenario (Proxmox VE 8.3.4): Before the upgrade, UEFI PXE booting for my VMs worked perfectly. I could:
  1. Set the network card (netX) as the first boot device in the VM's Options > Boot Order.
  2. Alternatively, manually select the "Network Boot" option from within the VM's UEFI boot menu (accessed via ESC during startup).
Both methods successfully initiated the PXE boot process.

Problem Description (After Upgrading to Proxmox VE 8.3.5): Immediately after upgrading to 8.3.5, the following issues appeared:
  1. UEFI PXE Boot Failure:
    • Setting the network card as the first boot device in Proxmox Boot Order no longer triggers a PXE boot attempt. The VM seems to skip it and moves to the next device.
    • More significantly, the Network Boot option has completely disappeared from the list of bootable devices within the VM's internal UEFI menu. It's no longer available for manual selection.
  2. Software-Defined Network (SDN) Malfunction:
    • Alongside the PXE boot issue, our configured Proxmox SDN setup has stopped working correctly after the upgrade.
    • Consequently, VMs that rely on this SDN configuration are also malfunctioning.

Troubleshooting Done So Far:
  • Verified VM configuration (Boot Order, UEFI settings).
  • Rebooted the Proxmox host(s).
  • Checked basic network connectivity for the host.

Questions:
  1. Is this a known regression or bug introduced in Proxmox VE 8.3.5?
  2. Has anyone else experienced similar issues with UEFI PXE boot or SDN breaking after this specific upgrade?
  3. Are there any specific logs (QEMU, pvedaemon, SDN logs, kernel logs) I should check for relevant errors?
  4. Could the UEFI PXE issue and the SDN issue be related, perhaps due to changes in the underlying network stack or related packages (QEMU, EDK2 firmware)?
  5. Are there any potential workarounds or fixes available?
I suspect these issues might be related, potentially stemming from a change introduced between 8.3.4 and 8.3.5 affecting network initialization or virtual firmware/hardware interaction.

Any help or pointers would be greatly appreciated. I'm happy to provide relevant configuration files or log snippets if needed (e.g., pveversion -v, VM config, SDN config, network config).

Thank you!
 
Hello everyone,
I am experiencing the same issue. PXE boot is no longer working on any of my VMs.
Could someone please help?
Thank you!
 
I just tried rolling back to kernel 6.8.12-9-pve and upgrading to 6.11.11-2-pve: same problem.
EDIT: 6.14.0-1-pve, same issue.
 
Regarding the network boot with OVMF:
If I had that problem and were open to testing, I would try downgrading the pve-edk2-firmware package(s) (from 4.2025.02-2 back to 4.2023.08-4). (Provided, obviously, that I had recently updated them to 4.2025.02-2.)
 
I just tried a downgrade to 4.2023.08-4, and the problem remains the same (even after a reboot):

Bash:
apt install pve-edk2-firmware=4.2023.08-4
apt list pve-edk2-firmware -a
Listing... Done
pve-edk2-firmware/stable 4.2025.02-2 all [upgradable from: 4.2023.08-4]
pve-edk2-firmware/stable 4.2025.02-1 all
pve-edk2-firmware/stable,now 4.2023.08-4 all [installed,upgradable to: 4.2025.02-2]
pve-edk2-firmware/stable 4.2023.08-3 all
pve-edk2-firmware/stable 4.2023.08-2 all
pve-edk2-firmware/stable 4.2023.08-1 all
pve-edk2-firmware/stable 3.20230228-4 all
 
  • Are pve-edk2-firmware-ovmf and pve-edk2-firmware-legacy also downgraded?
  • Was the VM completely stopped and started again after the downgrade?
  • If yes to both, then maybe try with a new EFI disk (after the downgrade).

Otherwise I have no further ideas, and my suspicion was likely wrong. Sorry.
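Taken together, the suggestions above could look like the following sketch: pin all three EDK2 packages to the old version in one apt transaction, then do a full stop/start of the VM (VM id 100 is an example; use your own):

```shell
# Downgrade all three firmware packages together so they stay in sync
# (versions taken from this thread):
apt install pve-edk2-firmware=4.2023.08-4 \
    pve-edk2-firmware-ovmf=4.2023.08-4 \
    pve-edk2-firmware-legacy=4.2023.08-4
# A reboot from inside the guest keeps the old firmware image loaded;
# the VM must be fully stopped and started again:
qm stop 100 && qm start 100
```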
 
Could you please post:
- the package versions where it still worked (you can check /var/log/apt/history.log and post the entry of the upgrade that broke it)
- the VM config

thanks!
 
About SDN not working: which SDN features are you using, and which of them are not working?
And can you please also check which version of libpve-network-perl you are currently using and which you were using before the update (you should find that information in /var/log/apt/history.log as well)?
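A small sketch of how to look those versions up. The current version can be read with dpkg-query; the previous one can be grepped out of the apt history log. The grep is demonstrated here on an inline sample entry in the same format as /var/log/apt/history.log (the version numbers 0.10.0/0.10.1 are made up for illustration):

```shell
# Current version (run on the host):
#   dpkg-query -W -f='${Package} ${Version}\n' libpve-network-perl
# Previous version: grep the upgrade entry out of the apt history, e.g.
#   grep -o 'libpve-network-perl:amd64 ([^)]*)' /var/log/apt/history.log
# Demonstrated on a sample line:
sample='Upgrade: libpve-network-perl:amd64 (0.10.0, 0.10.1), pve-firmware:amd64 (3.14-3, 3.15-2)'
printf '%s\n' "$sample" | grep -o 'libpve-network-perl:amd64 ([^)]*)'
# prints: libpve-network-perl:amd64 (0.10.0, 0.10.1)
```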
 
Hello @fabian,

Here's the log before the upgrade (I selected the last 3 entries):
Code:
Start-Date: 2025-03-25  12:51:16
Commandline: apt-get dist-upgrade
Install: proxmox-kernel-6.8.12-9-pve-signed:amd64 (6.8.12-9, automatic)
Upgrade: proxmox-widget-toolkit:amd64 (4.3.6, 4.3.7), zfs-zed:amd64 (2.2.7-pve1, 2.2.7-pve2), zfs-initramfs:amd64 (2.2.7-pve1, 2.2.7-pve2), spl:amd64 (2.2.7-pve1, 2.2.7-pve2), libnvpair3linux:amd64 (2.2.7-pve1, 2.2.7-pve2), grub-pc-bin:amd64 (2.06-13+pmx5, 2.06-13+pmx6), libuutil3linux:amd64 (2.2.7-pve1, 2.2.7-pve2), libzpool5linux:amd64 (2.2.7-pve1, 2.2.7-pve2), proxmox-grub:amd64 (2.06-13+pmx5, 2.06-13+pmx6), proxmox-kernel-6.8:amd64 (6.8.12-8, 6.8.12-9), grub-efi-amd64:amd64 (2.06-13+pmx5, 2.06-13+pmx6), proxmox-backup-file-restore:amd64 (3.3.3-1, 3.3.4-1), pve-i18n:amd64 (3.4.0, 3.4.1), grub-efi-amd64-signed:amd64 (1+2.06+13+pmx5, 1+2.06+13+pmx6), proxmox-backup-client:amd64 (3.3.3-1, 3.3.4-1), grub-efi-amd64-bin:amd64 (2.06-13+pmx5, 2.06-13+pmx6), grub2-common:amd64 (2.06-13+pmx5, 2.06-13+pmx6), grub-common:amd64 (2.06-13+pmx5, 2.06-13+pmx6), libzfs4linux:amd64 (2.2.7-pve1, 2.2.7-pve2), zfsutils-linux:amd64 (2.2.7-pve1, 2.2.7-pve2)
End-Date: 2025-03-25  12:58:48

Start-Date: 2025-03-26  08:06:32
Commandline: apt autoremove
Remove: proxmox-kernel-6.8.12-7-pve-signed:amd64 (6.8.12-7)
End-Date: 2025-03-26  08:06:39

Start-Date: 2025-03-28  08:36:26
Commandline: apt-get dist-upgrade
Upgrade: tzdata:amd64 (2025a-0+deb12u1, 2025b-0+deb12u1)
End-Date: 2025-03-28  08:37:09

Here's the log after the upgrade:
Code:
Start-Date: 2025-04-01  10:35:03
Commandline: apt-get dist-upgrade
Upgrade: pve-edk2-firmware-ovmf:amd64 (4.2023.08-4, 4.2025.02-2), pve-firmware:amd64 (3.14-3, 3.15-2), pve-qemu-kvm:amd64 (9.2.0-2, 9.2.0-3), pve-edk2-firmware-legacy:amd64 (4.2023.08-4, 4.2025.02-2), novnc-pve:amd64 (1.5.0-1, 1.6.0-2), pve-edk2-firmware:amd64 (4.2023.08-4, 4.2025.02-2)
End-Date: 2025-04-01  10:37:21

And here's a screenshot of one of my VMs (they mainly have the same configuration):
[Attached screenshots of the VM configuration]
 
Could you add a VirtIO RNG device to the VM and retry?
 
I was working on a test machine running Proxmox VE 8.3.4. To safely test the update process, I used a snapshot taken before applying any updates. My goal was to pinpoint which package update was causing the UEFI network boot option to disappear.

Troubleshooting Steps: I decided to update packages incrementally to isolate the issue.
  1. The first package I tried to upgrade manually was pve-edk2-firmware-ovmf.
  2. When running the command to upgrade this single package (e.g., apt install pve-edk2-firmware-ovmf), the package manager automatically pulled in and upgraded its dependencies as well: pve-edk2-firmware and pve-edk2-firmware-legacy.
  3. After these three packages were successfully updated, I rebooted the test system.
Results: Upon rebooting and entering the VM system's UEFI Boot Manager, I observed that the option for UEFI Network Boot (PXE Boot) had completely vanished. It was no longer listed as an available boot device.

Preliminary Conclusion: Since the issue appeared immediately after updating only pve-edk2-firmware-ovmf and its direct dependencies (pve-edk2-firmware, pve-edk2-firmware-legacy), it strongly suggests that the problem lies within the newer versions of these specific firmware packages. Because of this finding, I didn't proceed with updating other packages, as the cause seems linked to this group.

Package Versions: I've attached an image detailing the specific versions of pve-edk2-firmware-ovmf, pve-edk2-firmware, and pve-edk2-firmware-legacy before and after the problematic update:

[Attached image: package versions before and after the update]

I hope this information helps.

Thanks!
 
Because EDK2 implemented a security hardening measure, network booting now requires a source of entropy; if none is found, network booting is disabled.
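In practice, that entropy source can be supplied by attaching a VirtIO RNG device to the VM. A minimal sketch (VM id 100 is an example; use your own):

```shell
# Attach a VirtIO RNG device fed from the host's /dev/urandom, then do a
# full stop/start so OVMF sees the new device (a guest reboot is not enough).
# This adds the line "rng0: source=/dev/urandom" to the VM config.
qm set 100 --rng0 source=/dev/urandom
qm stop 100 && qm start 100
```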
 
About SDN not working: which SDN features are you using, and which of them are not working?
And can you please also check which version of libpve-network-perl you are currently using and which you were using before the update (you should find that information in /var/log/apt/history.log as well)?
About SDN.

I have two guest Proxmox VMs running on my main host. The guest Proxmox VMs get their IPs correctly from the same DHCP server as the main host, so maybe the SDN itself is OK and the error comes from elsewhere.

However, I cannot start any VMs inside these guest Proxmox instances. It seems to be failing due to a dnsmasq error.

Code:
org.freedesktop.DBus.Error.ServiceUnknown: The name uk.org.thekelleys.dnsmasq.Beta was not provided by any .service files
kvm: -netdev type=tap,id=net0,ifname=tap102i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on: network script /var/lib/qemu-server/pve-bridge failed with status 2816
TASK ERROR: start failed: QEMU exited with code 1
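A hypothetical way to read that error: the failing D-Bus name appears to encode an SDN DHCP zone as uk.org.thekelleys.dnsmasq.&lt;zone&gt;, and Proxmox SDN runs one dnsmasq instance per zone as a systemd template unit. A sketch of extracting the zone name and inspecting its unit (the systemctl/journalctl commands are left as comments):

```shell
# Extract the zone name from the D-Bus error string:
err='uk.org.thekelleys.dnsmasq.Beta'
zone=${err##*.}      # strip everything up to the last dot
echo "$zone"         # prints: Beta
# On the affected host, that zone's dnsmasq instance could then be checked
# (assumption: SDN DHCP zones map to dnsmasq@<zone>.service):
#   systemctl status "dnsmasq@${zone}.service"
#   journalctl -u "dnsmasq@${zone}.service" -b
```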
 
Because EDK2 implemented a security hardening measure, network booting now requires a source of entropy; if none is found, network booting is disabled.
I've just created a new VM, and the VirtIO RNG is not automatically added. Would it be better to add it automatically when a new VM is created? I think some people will be surprised that their VMs can no longer PXE boot :(
 