Thanks, it took me a bit of time to decipher your post and figure out what to do (I don't work much in Linux, or whatever flavor Debian is).
I didn't have the eno1/eno2 issue you had; in my ifconfig/ip a output post-upgrade, it seemed to have created an alias for it when it booted up.
Anyway, I'll add some additional details for anyone who has this same issue.
- In the IPMI console, when the rEFInd menu comes up, see which of the two EFI entries listed allows you to boot. Just select each one in turn until one doesn't drop you into rescue mode.
- After booting, mount both EFI System partitions and see which one has the newer grub file.
- Copy the newer one over the older one and reboot.
root@ns56:~# fdisk -l
Disk /dev/sda: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: HGST HUS724040AL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Device Start End Sectors Size Type
/dev/sda1 2048 1048575 1046528 511M EFI System
/dev/sda2 1048576 42991615 41943040 20G Linux RAID
/dev/sda3 42991616 45088767 2097152 1G Linux filesystem
/dev/sda4 45088768 7814031359 7768942592 3.6T Linux RAID
/dev/sda5 7814033072 7814037134 4063 2M Linux filesystem
Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: HGST HUS724040AL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Device Start End Sectors Size Type
/dev/sdb1 2048 1048575 1046528 511M EFI System
/dev/sdb2 1048576 42991615 41943040 20G Linux RAID
/dev/sdb3 42991616 45088767 2097152 1G Linux filesystem
/dev/sdb4 45088768 7814031359 7768942592 3.6T Linux RAID
Disk /dev/md2: 19.98 GiB, 21458059264 bytes, 41910272 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md4: 3.62 TiB, 3977564389376 bytes, 7768680448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/vg-data: 3.62 TiB, 3977563340800 bytes, 7768678400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@ns56:~#
root@ns56:~# mkdir /mnt/efi1
root@ns56:~# mkdir /mnt/efi2
root@ns56:~# mount /dev/sda1 /mnt/efi1
root@ns56:~# mount /dev/sdb1 /mnt/efi2
root@ns56:/mnt/efi1/EFI/proxmox# ls -l
total 144
-rwxr-xr-x 1 root root 147456 Dec 3 18:56 grubx64.efi
root@ns56:/mnt/efi1/EFI/proxmox# ls -l /mnt/efi2/EFI/proxmox
total 152
-rwxr-xr-x 1 root root 151552 Dec 8 00:23 grubx64.efi
root@ns56:/mnt/efi1/EFI/proxmox#
root@ns56:/mnt/efi1/EFI/proxmox# cp grubx64.efi grubx64.efi.120721a
root@ns56:/mnt/efi1/EFI/proxmox# cp /mnt/efi2/EFI/proxmox/grubx64.efi .
root@ns56:/mnt/efi1/EFI/proxmox# ls -l
total 296
-rwxr-xr-x 1 root root 151552 Dec 8 00:55 grubx64.efi
-rwxr-xr-x 1 root root 147456 Dec 8 00:55 grubx64.efi.120721a
Hey guys, I had the same issue on a couple of servers we just got from SoYouStart, and I found both the cause and a solution.
The issue arises because the servers have multiple disk drives, in my case two NVMe drives. OVH created EFI partitions on both drives with the same label, so you can't be sure which partition gets mounted at boot on these servers. They have this in their fstab:
LABEL=EFI_SYSPART /boot/efi vfat defaults 0 1
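To see the duplicate labels for yourself, you can list every partition carrying that label and then check which device the kernel actually mounted on /boot/efi (a quick sketch; blkid usually needs root):

```shell
# List all partitions carrying the shared label (run as root)
blkid -t LABEL=EFI_SYSPART
# Check which of them actually ended up mounted on /boot/efi
grep ' /boot/efi ' /proc/mounts
# findmnt gives the same answer in a friendlier format
findmnt /boot/efi
```

Whichever device shows up in /proc/mounts is the one that received the upgraded files; the other one went stale.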
At first, when we received the servers, both partitions were identical. After the upgrade, the files were updated on the partition that was mounted, but not on the other (which is quite understandable).
diff -r /boot/efi/ /mnt/efi/
Only in /boot/efi/EFI: BOOT
Binary files /boot/efi/EFI/proxmox/grubx64.efi and /mnt/efi/EFI/proxmox/grubx64.efi differ
I manually copied the contents of the upgraded partition to the EFI partition on the other disk, and on reboot it worked fine!
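The copy itself can be sketched like this (the device names are placeholders from my NVMe setup; double-check yours with fdisk -l before touching anything):

```shell
# Mount both EFI System partitions side by side
mkdir -p /mnt/efi1 /mnt/efi2
mount /dev/nvme0n1p1 /mnt/efi1   # the ESP that was mounted during the upgrade
mount /dev/nvme1n1p1 /mnt/efi2   # the stale ESP on the other drive
# Confirm which side actually has the newer files before copying
diff -r /mnt/efi1 /mnt/efi2
# Mirror the updated loader files onto the stale ESP, preserving attributes
cp -a /mnt/efi1/EFI/. /mnt/efi2/EFI/
umount /mnt/efi1 /mnt/efi2
```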
About rEFInd: it is actually not installed on the servers; it is what is launched from PXE under normal conditions, to load the EFI loader from one of the server's partitions. Setting the BIOS to boot directly from the hard drive not only prevents your server from booting into rescue mode in case of issue, or from being reinstalled, it can also prevent your server from booting at all if one of its drives fails...
My solution should be the closest to how the servers were set up initially, and be "kind of" future-proof.
This is all because of how OVH/SoYouStart configured the servers, and as far as I can tell it is not documented anywhere!
Another thing: I had a network issue because a line was added to /etc/default/grub after the upgrade that caused the network devices to change their names (from eno1/eno2 to eth0/eth1), thus breaking the vmbr0 interface. I had to remove it and then run update-grub.
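For reference, here is a sketch of that fix. I believe the offending parameter is net.ifnames=0, which switches the kernel back to the legacy eth0/eth1 naming; verify what actually got added to your GRUB_CMDLINE_LINUX before editing:

```shell
# Inspect what the upgrade added
grep ifnames /etc/default/grub
# Drop the parameter, keeping a backup copy of the file
sed -i.bak 's/ *net\.ifnames=0//' /etc/default/grub
# Regenerate /boot/grub/grub.cfg so the change takes effect on next boot
update-grub
```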
I believe you can do most if not all of this from rescue mode if you hit these issues after upgrading...
FYI, I installed ifupdown2 before upgrading, and did not forget to add the MAC address to /etc/network/interfaces as explained in the release notes.
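For anyone looking for that release-note step: with ifupdown2 the bridge no longer keeps a stable MAC on its own, so you pin it with hwaddress on vmbr0. A hypothetical excerpt (addresses and port name are placeholders, use your NIC's real MAC and interface):

```
# /etc/network/interfaces (excerpt)
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24          # placeholder address
        gateway 192.0.2.1              # placeholder gateway
        hwaddress aa:bb:cc:dd:ee:ff    # the physical NIC's MAC, as per the release notes
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```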
Both servers are now happily up and running Proxmox VE 7.1.6!