[SOLVED] Issue with UEFI Firmware Update on Linux VMs in Proxmox

Gabro88

New Member
Dec 27, 2024
Hi everyone,
I’m new to the Proxmox community and have encountered an issue that I can’t seem to resolve, despite trying several steps and getting help from ChatGPT. I hope someone here can assist me.

I have a server running Proxmox, and all Linux virtual machines (Debian 12, Ubuntu 22.04, and Ubuntu 24.04) hosted on this hypervisor fail to update the UEFI firmware using fwupd. When I run the command:
fwupdmgr upgrade

I get the following error:
WARNING: UEFI capsule updates not available or enabled in firmware setup
See https://github.com/fwupd/fwupd/wiki/PluginFlag:capsules-unsupported for more information.
...
failed to write data to efivarfs: Error writing to file descriptor: Invalid argument

From the output of dmesg | grep -i efi, I can confirm that the virtual machine is booted in EFI mode, but this line also appears:
Kernel is locked down from EFI Secure Boot mode; see man kernel_lockdown.7

Initially, I suspected that Secure Boot was the issue, so I disabled it on the Proxmox host. However, the problem persists, and the error remains unchanged.

Steps I’ve Tried

  1. EFI Support Confirmed:
    The virtual machines are running with UEFI firmware (OVMF), and the efivars file system is mounted correctly.
  2. Secure Boot Disabled:
    I disabled Secure Boot on the Proxmox host, but it didn’t resolve the issue.
  3. Updated Proxmox and OVMF:
    I updated the Proxmox system and packages to the latest versions.
  4. Checked Permissions on efivars:
    I verified the permissions on the efivars file system and enabled write access.
  5. Debugging with fwupd:
    I analyzed the fwupd logs, but unfortunately there are no additional messages explaining why UEFI capsule support is unavailable. (The commands I used for these checks are sketched below the list.)
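
For reference, the checks above boiled down to commands roughly along these lines (exact paths and package versions may differ on your setup):

Code:
# boot mode and Secure Boot state inside the guest (steps 1 and 2)
mokutil --sb-state
dmesg | grep -i "secure boot"

# efivarfs mount and write access (step 4)
mount | grep efivarfs
mount -o remount,rw /sys/firmware/efi/efivars

# update Proxmox packages on the host (step 3)
apt update && apt full-upgrade

# fwupd debugging (step 5)
fwupdmgr --verbose refresh
journalctl -u fwupd --no-pager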

Thanks in advance for any advice or guidance!
 
fwupdmgr must be run on the host, not in a VM.
FYI, VMs use a virtual BIOS/EFI provided by PVE; it is not updatable by the guest and is managed by PVE itself.
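
If you want to make sure the host side is current: on PVE the OVMF images used by VMs come from the pve-edk2-firmware package, so something like this on the host should pull in the latest virtual firmware (a guest only picks it up after a full VM stop/start, not a reboot from inside the guest):

Code:
apt update
apt install --only-upgrade pve-edk2-firmware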
I've tried running fwupdmgr on the host, but I get this result.
It seems fwupd thinks the UEFI partition isn't mounted... but it is mounted and the system is running in UEFI mode.

WARNING: UEFI ESP partition not detected or configured
See https://github.com/fwupd/fwupd/wiki/PluginFlag:esp-not-found for more information.
WARNING: Will measure elements of system integrity around an update
See https://github.com/fwupd/fwupd/wiki/PluginFlag:measure-system-integrity for more information.
Devices with no available firmware updates:
• UEFI Device Firmware
• INTEL SSDPEKNW512G8L
Devices with the latest available firmware version:
 • System Firmware
• UEFI Device Firmware
• UEFI dbx
• Micron 2300 NVMe 512GB

And this is the log (journalctl -u fwupd):

Dec 28 11:08:39 HomeProxmox systemd[1]: Starting fwupd.service - Firmware update daemon...
Dec 28 11:08:40 HomeProxmox fwupd[3136]: 10:08:40.510 FuPluginUefiCapsule cannot find default ESP: No ESP or BDP found
Dec 28 11:08:42 HomeProxmox systemd[1]: Started fwupd.service - Firmware update daemon.
Dec 28 11:08:42 HomeProxmox fwupd[3136]: 10:08:42.137 FuPluginLinuxSwap could not parse /proc/swaps: failed to call org.freedesktop.UDisks2.Manager.GetBlockDevices(): The name org.freedesktop.UDisks2 was not provided by any .service files
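
From what I've read, the "cannot find default ESP" message means fwupd can't locate a mounted EFI System Partition. If the ESP is mounted somewhere non-standard, it can apparently be pointed at explicitly (recent fwupd reads /etc/fwupd/fwupd.conf, older versions /etc/fwupd/daemon.conf; adjust the path to wherever the ESP is actually mounted):

Code:
# /etc/fwupd/fwupd.conf (or daemon.conf on older fwupd versions)
[fwupd]
EspLocation=/boot/efi

# then restart the daemon and refresh
systemctl restart fwupd
fwupdmgr refresh --force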

Thanks in advance for any advice or guidance!
 
I've got a .NET Core application that requires Secure Boot to be enabled.
My VM has Secure Boot enabled, but I still have some issues.

In another environment (VMware), I solved the same issue by updating the UEFI firmware of the VM, after which the software started working correctly.

I've already updated the motherboard BIOS, but the problem still exists.
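
For reference, from the Proxmox docs it looks like the VM's Secure Boot state lives in its EFI vars disk, not in the host firmware; an EFI disk with the Microsoft keys pre-enrolled can be created along these lines (<VMID> and <storage> are placeholders):

Code:
# OVMF firmware and q35 machine type
qm set <VMID> --bios ovmf --machine q35
# allocate the EFI vars disk with Secure Boot keys pre-enrolled
qm set <VMID> --efidisk0 <storage>:1,efitype=4m,pre-enrolled-keys=1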

Thanks a lot for the support and the information.
I'll try another way to fix this problem.

:)
 
Sorry, maybe I misspoke.

On another server with VMware ESXi, I solved the problem by running "fwupdmgr upgrade" directly in the VM.
But I understand that Proxmox works in a different way, so now I'm trying to find another way to solve my problem.

Thanks a lot for the support
 
The strange thing in the Proxmox environment is that the UEFI firmware appears to be updated on the physical host, but not in the VM.
I'm attaching the output of get-devices run on the host and on the VM.

Thanks a lot for the support :)
 


I found my mistake!

When I created the VM, I used the legacy chipset (i440fx, the default).
Now that I've changed the chipset to q35, I correctly see the latest version of the UEFI firmware :)
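
For anyone who finds this later: the chipset can also be switched from the CLI. <VMID> is a placeholder, and note that moving from i440fx to q35 changes the PCI layout, so check that the guest still boots afterwards.

Code:
qm set <VMID> --machine q35
# stop and start the VM so the new machine type (and current OVMF build) is used
qm stop <VMID> && qm start <VMID>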

Thank you very much for the support!
:)
 
fwupdmgr must be run on the host, not in a VM.
FYI, VMs use a virtual BIOS/EFI provided by PVE; it is not updatable by the guest and is managed by PVE itself.
Could you please let me know how to update the BIOS for an Ubuntu VM on Proxmox? Thank you a lot!
 
Could you please let me know how to update the BIOS for an Ubuntu VM on Proxmox? Thank you a lot!
As Gabriel wrote, you need to upgrade the physical server (or PC).
Initially, I didn't see the VM firmware updated because I had used the legacy chipset (i440fx, the default).

Once I changed the chipset to q35, I saw the firmware updated correctly.