Upgrading from Proxmox 8 to 9 issues; Couldn't find EFI system partition. It is recommended to mount it to /boot or /efi.

tessierp

Member
Mar 28, 2022
Hi,

I upgraded one of my Proxmox systems from version 8 to 9. The whole process went fine until the end, where I got this message:

Couldn't find EFI system partition. It is recommended to mount it to /boot or /efi.

I didn't think this would really cause an issue, but I guess it does now, as my system won't boot from the M.2 SSD where the OS is stored. It almost seems as if the bootloader isn't there anymore. Not sure if anyone has had the same issue and was able to fix it.

This is the second system I upgraded; the first one went very smoothly with no issues. I'm really not sure what caused this. No VMs were running and everything was green when I checked with `pve8to9`.

These are the last lines of the output from the upgrade

Bash:
Processing triggers for initramfs-tools (0.148.3) ...
update-initramfs: Generating /boot/initrd.img-6.14.8-2-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
Couldn't find EFI system partition. It is recommended to mount it to /boot or /efi.
Alternatively, use --esp-path= to specify path to mount point.
Processing triggers for rsyslog (8.2504.0-1) ...
Processing triggers for postfix (3.10.3-2) ...
Restarting postfix
Processing triggers for pve-ha-manager (5.0.4) ...
Processing triggers for shim-signed:amd64 (1.47+pmx1+15.8-1+pmx1) ...
root@ptr1-pve-2:~# apt modernize-sources
The following files need modernizing:
  - /etc/apt/sources.list
  - /etc/apt/sources.list.d/ceph.list
  - /etc/apt/sources.list.d/pve-enterprise.list
  - /etc/apt/sources.list.d/zabbix.list

I used Rescuezilla to run fdisk and check whether the NVMe drive where Proxmox is installed was somehow missing the boot partition or boot flag, but both are still there. What is really weird is that GRUB doesn't even complain; it is as if there is nothing on the NVMe anymore... Anyway, I hope someone can help or has an idea what happened.

Update #1 :

I tried booting from a Proxmox 9 USB drive and selected Rescue Boot, and it will boot my system; without it, however, it won't. Any ideas on how to fix this?

Update #2 :

While Update #1 worked, it gets me to the login prompt but then starts doing some kind of PCI reset... Really strange.

(screenshot)
 
I think I found the issue. I had to remove two PCI cards that I was passing through to VMs: one was my LSI card and the other a GPU for Jellyfin. It seems like there may be passthrough issues with Proxmox 9; I'm not sure, I'll need to investigate more. Follow-up: yep, all the PCI device IDs changed, so what I was passing through before was no longer valid and I had to identify the addresses again.

So if you are in the same situation I am, where your server passes through PCI devices, make sure you disable your VMs (no boot on startup), reboot your system after the upgrade, then verify whether the PCI addresses have changed, reassign them to your VMs, and test. Otherwise you will have to do what I did: remove the PCI cards to boot into your system, disable the VMs, and then reinstall the cards.
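A quick way to do that comparison might look like this (a sketch; the VMID 100 and the grep pattern are placeholders, substitute your own):

```shell
# List current PCI addresses with vendor:device IDs for the card in question
lspci -nn | grep -i lsi
# Compare against what the VM is configured to pass through
qm config 100 | grep hostpci
```

If the address in the `hostpci` line no longer matches what `lspci` reports, update the VM's hardware settings before starting it.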
 
But I still have my boot issue: I can only boot my system through the Proxmox thumb drive's Rescue Boot. Any ideas on how to fix that? I did try 'update-grub', but that didn't fix anything.

Can anyone help? How do I reinstall the bootloader without needing the Rescue Boot?
 
I think I know why my system can't find a boot profile. When I execute 'update-initramfs -u -k all', the following is returned:

(screenshot)

This is the exact same error message (in red) that I got during the upgrade. I checked my fstab, and it seems the boot partition entry was removed during the upgrade... Does anyone know why?
 
Well, it looks like I'm finding answers faster than anyone can answer (especially since I'm still awaiting approval for my post to be displayed publicly). Hopefully this will help anyone who has faced the same issues. For the boot issue, I was missing the boot/efi partition entry, i.e.
'UUID=XXXX-XXXX /boot/efi vfat defaults 0 1', where XXXX-XXXX is my own boot/efi partition ID. I am not sure why, but during the upgrade from Proxmox 8 to 9 it was removed.

So what I had to do is this:

1) execute 'fdisk -l'

which will list all your drives' partitions; you need to find where your EFI System partition is located
(screenshot)

In my case it was /dev/nvme0n1p2

2) execute command 'blkid'

This will give you the IDs of all your partitions; look up the device you identified above as the EFI System partition to find its ID

(screenshot)

In my case it was EB51-AD07
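If you already know the device from step 1, blkid can print just the UUID in one go (assuming /dev/nvme0n1p2, as in my case):

```shell
# Print only the UUID of the EFI System partition
blkid -s UUID -o value /dev/nvme0n1p2
```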

3) Edit /etc/fstab

You will need to add the following to your fstab

(screenshot)

Basically add your EFI partition using the UUID you found.
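For reference, with the UUID from step 2 (substitute your own), the line looks like this:

```
UUID=EB51-AD07 /boot/efi vfat defaults 0 1
```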

4) Reboot

After rebooting you'll need to execute the command 'update-initramfs -u -k all'

It may complain for each of your kernel images and tell you to reinstall GRUB. Just do what it tells you, and then you should be fine

(screenshot)
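For completeness, reinstalling GRUB on a UEFI system with the ESP now mounted typically looks roughly like this (a sketch; /dev/nvme0n1 is the disk in my case, adjust for yours, and follow whatever the update-initramfs messages actually tell you):

```shell
# Reinstall GRUB now that /boot/efi is mounted, then regenerate its config
grub-install /dev/nvme0n1
update-grub
```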

What an adventure, right?! I hope this helps anyone who hit this issue after upgrading to Proxmox 9, where somehow the boot/efi partition entry gets removed.

As for the next issue, passing through the LSI SAS2008: I'm onto something, and I'll see if it works and post a follow-up here.
 
Alright, so I found the issue. After doing some research, there were a couple of additional steps I needed to do after upgrading from Proxmox 8 to 9. I'm also still not sure why upgrading to Proxmox 9 changed all the addresses assigned to my PCI cards... But anyway.

On Proxmox 8, all I had to do was find the ID of my PCI card, add that hardware to the VM, and set it up like so

(screenshot)

It turns out that after upgrading to Proxmox 9 this was no longer sufficient. I had to blacklist the driver so that Proxmox wouldn't load it, and also prevent Proxmox from using the card at all. Here are the steps.

1) Blacklist the driver used by the card. To do that, execute 'lspci | grep -i LSI', which will return

(screenshot)

At the start you have 0b:00.0 (or 0000:0b:00.0), which is the address assigned to your LSI card by the system.

2) Execute 'lspci -nn -s 0000:0b:00 -v', which will return something like this

(screenshot)
This gives you the two pieces of information you will need. At the top far right is [1000:0072], which I think is the vendor:device ID of the card itself. The second piece of information is the kernel module it uses, which in my case is 'mpt3sas'.

3) Blacklist the module by editing /etc/modprobe.d/blacklist.conf

Add the following at the end of the file, after anything else already in there:
'blacklist mpt3sas'

Here is an example of what I have in mine

(screenshot)

4) Edit the file /etc/modprobe.d/vfio.conf

Add the following at the end of your file

'options vfio-pci ids=1000:0072'

This will prevent Proxmox from using that PCI card. If the ID is different for you, replace 1000:0072 with whatever you found in step 2.

5) Last but not least, execute the following

'update-initramfs -u -k all'

6) Reboot

And Voilà! You should be good to go and able to start your VM without any issues.
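The steps above boil down to this (a sketch using my module name and vendor:device ID; substitute what you found in steps 1-2):

```shell
# Keep the host kernel from claiming the card
echo 'blacklist mpt3sas' >> /etc/modprobe.d/blacklist.conf
# Reserve the card for vfio-pci so it can be passed through
echo 'options vfio-pci ids=1000:0072' >> /etc/modprobe.d/vfio.conf
# Rebuild the initramfs so the changes apply at early boot, then reboot
update-initramfs -u -k all
reboot
```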
 

Again, I hope this was useful to anyone having issues upgrading to Proxmox 9. Like I said before, I have a few Proxmox systems; the first one didn't have anything special like PCI passthrough, and I had no issues there. The second system, which is the reason for this post, had all those issues. Again, I'm not sure why my EFI boot partition entry was removed during the upgrade. Hopefully for most of you the upgrade will be a smooth ride.
 
the boot issues are because you didn't read the `pve8to9` output (or didn't run it at all?) - for most systems, the systemd-boot package needs to be removed, and possibly replaced by installing systemd-boot-tools and systemd-boot-efi, depending on whether systemd-boot is in use or not.

I am not sure why your PCI IDs changed..
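In command form, that advice would look roughly like this (a sketch; check the `pve8to9` output first to see which case applies to your system):

```shell
# If systemd-boot is NOT actually used as the bootloader, just remove the package
apt remove systemd-boot
# If systemd-boot IS in use, install the replacement packages instead
apt install systemd-boot-tools systemd-boot-efi
```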
 
the boot issues are because you didn't read the `pve8to9` output (or didn't run it at all?) - for most systems, the systemd-boot package needs to be removed, and possibly replaced by installing systemd-boot-tools and systemd-boot-efi, depending on whether systemd-boot is in use or not.

I am not sure why your PCI IDs changed..
You assume I didn't read or even run 'pve8to9' when it is literally in the first few lines of my first post. When you want to help someone, perhaps it is best to read first?

I did run the command, and besides a few warnings that I looked into, there was nothing about the upgrade affecting the system's ability to boot. This is not my first upgrade; all of my systems went from 7 to 8 and now 8 to 9, and I never had an issue before. But perhaps it was in one of the warnings and I missed it? For something like that, wouldn't it have been marked in red?

Is there a post upgrade log somewhere I could post here that could help maybe understand what happened?
 
Thanks for the great explanation, that makes sense. I also have an EFI boot system.
I can also report that my system did not boot; GRUB crashed after the migration.
I ran pve8to9 -all with no warnings.
The system has now been manually reinstalled.

I also verified the hint about the GRUB bug, but the relevant package was already installed.
 
it shouldn't prevent your system from booting; it will print some warnings and cause apt invocations to fail at the very end, or switch your active bootloader over to systemd-boot, neither of which is desired, of course.
 
my system did not boot, but grub crashed after the migration. I have run pve8to9 -all with no warnings.

that sounds like a different issue..
 
I too have had a lot of problems with the PVE 8 to 9 upgrades.
Both systems I tried it on failed to boot after the upgrade.

I did see the warning. I did apt remove systemd-boot on both systems. I also made sure that systemd-boot-tools and systemd-boot-efi were installed. Still, both failed to boot afterwards.

Unfortunately I can't verify that the steps that @tessierp did with his fstab file work since I reinstalled both systems. A fresh installation seems to work just fine.

The Rescue Boot from the Proxmox installation did boot each system. I just didn't do as much digging as @tessierp was able to do.

This seems like a HUGE deal.
 
there are a few reports about GRUB not working after the upgrade; those seem unrelated to systemd-boot and are probably associated with the big GRUB update from 2.06 to 2.12 that comes with Trixie. Could you describe the symptoms of "failed to boot after the upgrade" in more detail?
 
grub crashed after the migration

Hi, all.

I fixed the crashed GRUB. The problem was with the grubx64.efi file. I just ran:
cp -v /boot/efi/EFI/proxmox/grubx64.efi /boot/efi/EFI/debian/grubx64.efi
and rebooted into a working Proxmox 9 system.

I needed that because my Proxmox boot entries point to the /debian/ path by default:
root@pve22:~# efibootmgr
BootCurrent: 000B
Timeout: 3 seconds
BootOrder: 000B,0002,000C,000D,000E,000F,0010,0011,0012,0013
Boot0002* debian HD(1,GPT,xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,0x800,0x200000)/File(\EFI\DEBIAN\GRUBX64.EFI)0000424f
Boot000B* debian HD(1,GPT,xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,0x800,0x200000)/File(\EFI\DEBIAN\GRUBX64.EFI)0000424f
...

But despite this, the default file /boot/efi/EFI/debian/grubx64.efi was not updated automatically by the Proxmox 8 to 9 upgrade, so I copied it manually:

total 4492
drwx------ 2 root root 4096 dec 7 2024 .
drwx------ 5 root root 4096 aug 7 10:13 ..
-rwx------ 1 root root 112 dec 7 2024 BOOTX64.CSV
-rwx------ 1 root root 88568 dec 7 2024 fbx64.efi
-rwx------ 1 root root 152 dec 7 2024 grub.cfg
-rwx------ 1 root root 2685544 aug 7 11:34 grubx64.efi
-rwx------ 1 root root 851368 dec 7 2024 mmx64.efi
-rwx------ 1 root root 952384 dec 7 2024 shimx64.efi

I'll change the `efibootmgr` record later to point it to `/boot/efi/EFI/proxmox/grubx64.efi` to avoid such problems in the future.
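Changing the entry could look something like this (a sketch; the disk, partition number, and boot entry number are placeholders for my setup, so verify yours with `efibootmgr -v` first):

```shell
# Create a new entry pointing at the Proxmox GRUB image on the ESP
efibootmgr -c -d /dev/nvme0n1 -p 1 -L proxmox -l '\EFI\proxmox\grubx64.efi'
# Then delete the stale debian entry (here Boot0002)
efibootmgr -b 0002 -B
```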
 
But despite this, the default file /boot/efi/EFI/debian/grubx64.efi was not updated automatically by the upgrade, so I copied it manually.
Good find regarding the mismatch of grubx64.efi. How was this system set up?
* Proxmox VE ISO, or on top of Debian?
* When was it set up (Debian version / PVE version)?

This should help us in reproducing the issue.
Thanks!