I have a mini PC that has been happily running PVE for over a year. It has been on the latest PVE 8 release and was most recently running kernel 6.8.12-13-pve.
Today I did an in-place upgrade to PVE 9 by following https://pve.proxmox.com/wiki/Upgrade_from_8_to_9. Everything went well through the upgrade process until it came time to reboot. On booting I was presented with the following:

Booting in advanced mode and picking the 6.14.8-2-pve recovery option led to the same errors, plus a constant stream of messages like:

If at any point I rebooted back into 6.8.12-13-pve, the system booted normally.
My /etc/fstab looks like this:
#
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=4C18-6924 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
I also attempted a boot after changing UUID=4C18-6924 to /dev/sda2, which is the ESP according to the lsblk output below (the edited line is shown just after it):
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                              8:0  0 476.9G  0 disk
├─sda1                           8:1  0  1007K  0 part
├─sda2                           8:2  0     1G  0 part /boot/efi
└─sda3                           8:3  0 475.9G  0 part
  ├─pve-swap                   252:0  0     8G  0 lvm  [SWAP]
  ├─pve-root                   252:1  0    96G  0 lvm  /
  ├─pve-data_tmeta             252:2  0   3.6G  0 lvm
  │ └─pve-data-tpool           252:4  0 348.8G  0 lvm
  │   ├─pve-data               252:5  0 348.8G  1 lvm
  │   ├─pve-vm--103--disk--0   252:6  0     4G  0 lvm
  │   ├─pve-vm--104--disk--0   252:7  0     2G  0 lvm
  │   ├─pve-vm--101--disk--0   252:8  0     5G  0 lvm
  │   ├─pve-vm--102--disk--0   252:9  0     8G  0 lvm
  │   ├─pve-vm--107--disk--0  252:10  0    10G  0 lvm
  │   ├─pve-vm--106--disk--0  252:11  0     4M  0 lvm
  │   ├─pve-vm--106--disk--1  252:12  0    32G  0 lvm
  │   └─pve-vm--108--disk--0  252:13  0     8G  0 lvm
  └─pve-data_tdata             252:3  0 348.8G  0 lvm
    └─pve-data-tpool           252:4  0 348.8G  0 lvm
      ├─pve-data               252:5  0 348.8G  1 lvm
      ├─pve-vm--103--disk--0   252:6  0     4G  0 lvm
      ├─pve-vm--104--disk--0   252:7  0     2G  0 lvm
      ├─pve-vm--101--disk--0   252:8  0     5G  0 lvm
      ├─pve-vm--102--disk--0   252:9  0     8G  0 lvm
      ├─pve-vm--107--disk--0  252:10  0    10G  0 lvm
      ├─pve-vm--106--disk--0  252:11  0     4M  0 lvm
      ├─pve-vm--106--disk--1  252:12  0    32G  0 lvm
      └─pve-vm--108--disk--0  252:13  0     8G  0 lvm
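For clarity, the ESP line in fstab after that change was roughly this (same options, just the device node in place of the UUID):
/dev/sda2 /boot/efi vfat defaults 0 1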
For completeness, here is what df shows:
Filesystem            Size  Used Avail Use% Mounted on
udev                  8.3G     0  8.3G   0% /dev
tmpfs                 1.7G  2.8M  1.7G   1% /run
/dev/mapper/pve-root  101G   24G   72G  25% /
tmpfs                 8.3G   36M  8.3G   1% /dev/shm
efivarfs              197k   90k  102k  47% /sys/firmware/efi/efivars
tmpfs                 5.3M     0  5.3M   0% /run/lock
tmpfs                 1.1M     0  1.1M   0% /run/credentials/systemd-journald.service
tmpfs                 8.3G     0  8.3G   0% /tmp
/dev/sda2             1.1G  9.2M  1.1G   1% /boot/efi
/dev/fuse             135M   46k  135M   1% /etc/pve
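For now I just pick 6.8.12-13-pve from the GRUB menu each time I reboot. I assume I could also pin the old kernel as a stopgap with something like:
proxmox-boot-tool kernel pin 6.8.12-13-pve
but I would rather understand what is actually broken with 6.14.8-2-pve.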
Any ideas what is causing this with the latest kernel?
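Happy to post more output if it helps, e.g. proxmox-boot-tool status, or the journal from the failed boot pulled from the working kernel with journalctl -b -1 (assuming the failed boot made it into the persistent journal).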