Hi. After struggling a little to upgrade from Proxmox 8 to 9 on bare metal servers hosted by OVH, I thought I'd explain how I succeeded.
First of all, this is not a generic tutorial; it is just how I solved my problem on my 2 servers (one "So you Start SYS-LE-2" and one "ADVANCE-1 | AMD EPYC 4244P"). Things might be different depending on your config, your server model, your partition table, RAID, etc., so use wisely. Back up first and don't do this in production...
The servers have 2 x 1 TB SSDs, use software RAID, and have an LVM partition for /var/lib/vz. No ZFS.
First, I upgraded them following the official guide (https://pve.proxmox.com/wiki/Upgrade_from_8_to_9).
- Back up everything
- Shut down all VMs
Check everything was OK:
Code:
pve8to9
pve8to9 --full
Update to the latest Proxmox 8:
Code:
apt update && apt dist-upgrade
Check we are on 8.4.1+:
Code:
pveversion
pve-manager/8.4.11/14a32011146091ed (running kernel: 6.8.12-13-pve)
Change the repos from bookworm to trixie:
Code:
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list.d/*
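To be safe, you can confirm the sed pass caught everything before upgrading (this check is my addition, not part of the official guide):

```shell
# Any remaining "bookworm" reference would keep pulling packages
# from the old release during dist-upgrade
grep -r bookworm /etc/apt/sources.list /etc/apt/sources.list.d/ \
    || echo "no bookworm entries left"
```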
Upgrade:
Code:
apt update
apt dist-upgrade
During the process, check config differences when necessary and make a decision. Then reboot:
Code:
reboot
Everything went smoothly during the upgrade, but I wasn't able to boot after that. The boot process failed and dropped into the server's BIOS without any message on the console to help...
-----------------
[EDITED ON 2025/08/29]
PLEASE SKIP THE REST OF THIS MESSAGE AND JUMP TO THE NEXT POST by @sbraz: THE OVH TEAM PROVIDED A PROPER SOLUTION TO THIS PROBLEM.
If you have already applied my solution below, you should check your EFI boot order with efibootmgr and restore PXE to 1st place if it isn't.
-----------------
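For reference, restoring the boot order looks something like this (the entry IDs below are placeholders; read the real ones from your own efibootmgr output first):

```shell
# List entries and note the IDs of the PXE and proxmox entries
efibootmgr -v
# Put PXE back first -- 0001/0002 are example IDs, substitute your own
efibootmgr -o 0001,0002
```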
So I rebooted in rescue mode and decided to reinstall and reconfigure grub-efi-amd64. I'm not sure all of these steps were necessary, but they made my servers boot again.
So, once in rescue mode:
Identify the partitions and mount points so we can mount the root filesystem:
Code:
lsblk -f
In my case, it looked like this:
nvme0n1
├─nvme0n1p1   vfat              FAT16            EFI_SYSPART    
├─nvme0n1p2   linux_raid_member 1.2              md2            
│ └─md2       ext4              1.0              boot           
├─nvme0n1p3   linux_raid_member 1.2              md3            
│ └─md3       ext4              1.0              root           
├─nvme0n1p4   swap              1                swap-nvme0n1p4 
├─nvme0n1p5   linux_raid_member 1.2              md5            
│ └─md5       LVM2_member       LVM2 001                        
│   └─vg-data ext4              1.0              var-lib-vz     
├─nvme0n1p6   linux_raid_member 1.2              md6            
│ └─md6       ext4              1.0              var-log        
└─nvme0n1p7   iso9660           Joliet Extension config-2       
nvme1n1
├─nvme1n1p1   vfat              FAT16            EFI_SYSPART    
├─nvme1n1p2   linux_raid_member 1.2              md2            
│ └─md2       ext4              1.0              boot           
├─nvme1n1p3   linux_raid_member 1.2              md3            
│ └─md3       ext4              1.0              root           
├─nvme1n1p4   swap              1                swap-nvme1n1p4 
├─nvme1n1p5   linux_raid_member 1.2              md5            
│ └─md5       LVM2_member       LVM2 001                        
│   └─vg-data ext4              1.0              var-lib-vz     
└─nvme1n1p6   linux_raid_member 1.2              md6            
  └─md6       ext4              1.0              var-log
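One thing to check before mounting: if /dev/md2 and /dev/md3 don't exist yet, the RAID arrays may not have been assembled by the rescue environment. I didn't need this step myself, but as a sketch:

```shell
# Show which md arrays the rescue kernel has already assembled
cat /proc/mdstat
# If md2/md3 are missing, assemble them from the on-disk metadata
mdadm --assemble --scan
```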
Prepare /mnt if not present:
Code:
mkdir -p /mnt
I mounted the boot and root partitions according to the partition table:
Code:
mount /dev/md3 /mnt
mount /dev/md2 /mnt/boot
Mount the 1st EFI partition (the one on the first SSD):
Code:
mount /dev/nvme0n1p1 /mnt/boot/efi
Prepare for the chroot:
Code:
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
mount --bind /dev/pts /mnt/dev/pts
Chroot to reinstall GRUB:
Code:
chroot /mnt /bin/bash
Reinstall the grub-efi-amd64 and shim-signed bootloader packages:
Code:
apt update
apt install --reinstall grub-efi-amd64 shim-signed
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=proxmox --recheck --no-floppy
update-grub
grub-install /dev/nvme0n1
grub-install /dev/nvme1n1
I had some warnings saying:
Code:
warning: EFI variables cannot be set on this system.
warning: You will have to complete the GRUB setup manually.
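Despite those warnings, the loader files themselves should have been written to the ESP; a quick check from inside the chroot (my addition, not a required step):

```shell
# grubx64.efi should be present (plus shimx64.efi from shim-signed)
ls -l /boot/efi/EFI/proxmox/
```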
So I mounted efivarfs to check the entries in the boot manager:
Code:
mount -t efivarfs efivarfs /sys/firmware/efi/efivars
Check the boot manager entries:
Code:
efibootmgr -v
If proxmox is not present:
Code:
efibootmgr --create --disk /dev/nvme0n1 --part 1 --label "proxmox" --loader '\EFI\proxmox\grubx64.efi'
Otherwise, check that it's correctly set up. If it isn't, delete and recreate the proxmox entry properly:
Code:
efibootmgr -b <ID_OF_ENTRY_TO_REMOVE> -B
efibootmgr --create --disk /dev/nvme0n1 --part 1 --label "proxmox" --loader '\EFI\proxmox\grubx64.efi'
Unmount efivarfs:
Code:
umount /sys/firmware/efi/efivars
Exit the chroot:
Code:
exit
Unmount the 1st EFI partition:
Code:
umount /mnt/boot/efi
Mount the 2nd EFI partition. The EFI partitions are not in the RAID, so for consistency I installed GRUB on both; I'm not sure it's necessary:
Code:
mount /dev/nvme1n1p1 /mnt/boot/efi
Chroot to reinstall GRUB on the 2nd partition:
Code:
chroot /mnt /bin/bash
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=proxmox --recheck --no-floppy
Exit the chroot:
Code:
exit
Unmount everything:
Code:
umount -R /mnt
Reboot, fingers crossed...
Code:
reboot
And that did the trick for me.
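Once the server is back up, a couple of quick checks confirm everything survived the ordeal (my own sanity checks, not from the official guide):

```shell
pveversion      # should now report pve-manager/9.x
efibootmgr      # confirm the boot entries and boot order look sane
```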
			