Proxmox Virtual Environment 9.0 released!

I actually haven't tried the upgrade yet, but from my understanding, if you follow the procedure on the wiki and run pve8to9 after the install completes (when you are dropped back to a command prompt) but before restarting, it will tell you what you need to do to fix the bootloader so your system will reboot.
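For reference, the check itself is a single command; a minimal sketch, run after the dist-upgrade completes but before rebooting:

Code:
pve8to9 --full   # re-run until the warnings are addressed, then reboot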

Is your system not booting?
The thing is, I rebooted immediately after it finished upgrading, so I have not removed systemd-boot, and since I have already rebooted, my window to remove it is now gone.

The node reboots and restarts fine and all VMs are running normally, but:

When I run pve8to9 (now, post-install and post-reboot) I get this:

INFO: Checking bootloader configuration...
INFO: systemd-boot used as bootloader and fitting meta-package installed.

And when I run proxmox-boot-tool status:

root@pve:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.


Some more info that might help:

Code:
root@pve:~# efibootmgr
BootCurrent: 0000
Timeout: 1 seconds
BootOrder: 0000,0004,0003,0001,0002
Boot0000* proxmox       HD(2,GPT,ac77b0d5-c289-4e79-b2b2-6c152b465d82,0x800,0x200000)/File(\EFI\PROXMOX\SHIMX64.EFI)
Boot0001* UEFI: PXE IPv4 Intel(R) Ethernet Controller (3) I225-V        PciRoot(0x0)/Pci(0x1c,0x6)/Pci(0x0,0x0)/MAC(88aedd61a160,1)/IPv4(0.0.0.00.0.0.0,0,0)0000424f
Boot0002* UEFI: PXE IPv6 Intel(R) Ethernet Controller (3) I225-V        PciRoot(0x0)/Pci(0x1c,0x6)/Pci(0x0,0x0)/MAC(88aedd61a160,1)/IPv6([::]:<->[::]:,0,0)0000424f
Boot0003* UEFI OS       HD(2,GPT,ac77b0d5-c289-4e79-b2b2-6c152b465d82,0x800,0x200000)/File(\EFI\BOOT\BOOTX64.EFI)0000424f
Boot0004* Linux Boot Manager    HD(2,GPT,ac77b0d5-c289-4e79-b2b2-6c152b465d82,0x800,0x200000)/File(\EFI\SYSTEMD\SYSTEMD-BOOTX64.EFI)
root@pve:~# bootctl status
System:
      Firmware: n/a (n/a)
 Firmware Arch: x64
   Secure Boot: disabled (unknown)
  TPM2 Support: yes
  Measured UKI: no
  Boot into FW: supported

Current Boot Loader:
      Product: GRUB 2.12-9+pmx2
     Features: ✗ Boot counting
               ✗ Menu timeout control
               ✗ One-shot menu timeout control
               ✗ Default entry control
               ✗ One-shot entry control
               ✗ Support for XBOOTLDR partition
               ✗ Support for passing random seed to OS
               ✗ Load drop-in drivers
               ✗ Support Type #1 sort-key field
               ✗ Support @saved pseudo-entry
               ✗ Support Type #1 devicetree field
               ✗ Enroll SecureBoot keys
               ✗ Retain SHIM protocols
               ✗ Menu can be disabled
               ✗ Multi-Profile UKIs are supported
               ✓ Boot loader set partition information
    Partition: /dev/disk/by-partuuid/ac77b0d5-c289-4e79-b2b2-6c152b465d82

Random Seed:
 System Token: set
       Exists: yes

Available Boot Loaders on ESP:
          ESP: /boot/efi (/dev/disk/by-partuuid/ac77b0d5-c289-4e79-b2b2-6c152b465d82)
         File: ├─/EFI/systemd/systemd-bootx64.efi (systemd-boot 257.7-1)
               ├─/EFI/BOOT/fbx64.efi
               ├─/EFI/BOOT/grubx64.efi
               ├─/EFI/BOOT/mmx64.efi
               └─/EFI/BOOT/BOOTx64.efi (systemd-boot 257.7-1)

root@pve:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=E2AC-2BE0 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
root@pve:~#
 
I reinstalled the first cluster node with 9.0.3 and it worked pretty well.

I only had these problems:

Setup: Graphical Installer -> ZFS RAID1

- /etc/apt/sources.list was empty after the install and needed to be filled with the Trixie entries.
- The new node had local and local-zfs because of the ZFS setup, while the other nodes had local-lvm. While joining, local-zfs was removed and a local-lvm storage (unavailable on this node) was added. This needed a manual fix in the cluster's storage configuration.

I used delnode to remove that node from the cluster. This worked pretty well, except that the SSH keys of the old node were kept. This needed some extra work because the reinstalled node has the same name.

Otherwise, really cool, thank you for the great work.
 
* run proxmox-boot-tool reinit

I was unable to boot after an upgrade:

Code:
error: file `/grub/x86_64-efi/bli.mod' not found.

bli.mod is present at /boot/grub/x86_64-efi/bli.mod, but was absent from /boot/efi/grub/x86_64-efi/bli.mod

and running "proxmox-boot-tool reinit" fixed my boot error. Thanks.
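For anyone hitting the same error, a minimal sketch of the check and fix, assuming the ESP is mounted at /boot/efi:

Code:
ls /boot/grub/x86_64-efi/bli.mod       # present on the root filesystem
ls /boot/efi/grub/x86_64-efi/bli.mod   # was missing from the stale copy on the ESP
proxmox-boot-tool reinit               # re-copies the bootloader files onto the ESP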
 
I also noticed that after installing via the ISO, dnsmasq did not get installed.
I configured a simple SDN and enabled DHCP but when I clicked apply I got:
WARN: please install the 'dnsmasq' package in order to use the DHCP feature!
TASK ERROR: Could not run before_regenerate for DHCP plugin dnsmasq cannot reload with missing 'dnsmasq' package

In previous ISO versions, dnsmasq was also installed, ever since SDN was introduced.
 
I upgraded 6 different hosts I have. Every one of them was smooth. A few VMs didn't start because it said "TASK ERROR: KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS.", so I had to uncheck the option for "KVM hardware virtualization".

On those exact same VMs, the network was also not working with:

"Error: Unknown device type.
can't create interface fwln101i0 - command '/sbin/ip link add name fwln101i0 mtu 1500 type veth peer name fwpr101p0 mtu 1500' failed: exit code 2

kvm: -netdev type=tap,id=net0,ifname=tap101i0,script=/usr/libexec/qemu-server/pve-bridge,downscript=/usr/libexec/qemu-server/pve-bridgedown: network script /usr/libexec/qemu-server/pve-bridge failed with status 512
TASK ERROR: start failed: QEMU exited with code 1"

My fix there was to note the MAC address of the existing network device, delete the network card from the hardware, and add a new one back with the same MAC address.

That seems to work fine.
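In case someone wants to do this from the CLI, a sketch of the equivalent steps (VMID 101, virtio, and vmbr0 are assumptions; use the MAC you noted):

Code:
qm config 101 | grep net0                                 # note the existing MAC address
qm set 101 --delete net0                                  # remove the broken NIC
qm set 101 --net0 virtio=BC:24:11:AA:BB:CC,bridge=vmbr0   # re-add it with the same MAC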
 
Oh, I should also mention that I do have one problem: a Windows Server VM will not start after the upgrade. Any ideas?

Server 2025, 8 cores, 8 GB RAM, UEFI.

It says "Start Boot Option" with the progress bar and shows the Proxmox logo, then nothing. Just a black screen forever. No errors or anything.

If I reset it enough times, it shows the Windows "preparing automatic repair" screen, then that goes away and I'm back to a black screen.

Hmm, I just tried booting off the Windows installer ISO and it comes up with an empty black screen too.

EDIT: I made a new clean VM and tried to boot that, and it immediately complains about the network like in my previous post. On a fresh VM? Something is wrong. I deleted the network card from the hardware and this one doesn't boot either. Black screen. Something is wrong somewhere. What do I check?

EDIT2: I just backed up the VM, blew away the whole PVE install on this server, and reinstalled a fresh 9.0.3. Then I restored the VM from backup, and it works fine.
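For anyone needing to do the same, the round-trip was roughly this sketch (VMID, storage name, and the exact dump filename are placeholders):

Code:
vzdump 101 --storage local --mode stop     # back up the VM before wiping the host
# ...reinstall PVE 9.0.3 from the ISO, copy the dump back...
qmrestore /var/lib/vz/dump/vzdump-qemu-101-*.vma.zst 101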
 
The thing is, I rebooted immediately after it finished upgrading, so I have not removed systemd-boot, and since I have already rebooted, my window to remove it is now gone.

The node reboots and restarts fine and all VMs are running normally, but:

When I run pve8to9 (now, post-install and post-reboot) I get this:

INFO: Checking bootloader configuration...
INFO: systemd-boot used as bootloader and fitting meta-package installed.

And when I run proxmox-boot-tool status:

root@pve:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.


You might be the first person I've seen with that error.

Maybe make a separate thread to keep the problem description and debug info from getting lost?

Almost everyone else here who had bootloader problems just wound up with a non-booting system. This seems different.
 
@t.lamprecht

Code:
./find-old-packages2.sh
cpufrequtils
libaio1:amd64
libapt-pkg6.0:amd64
libassuan0:amd64
libcbor0.8:amd64
libcpufreq0
libflac12:amd64
libfmt9:amd64
libfuse3-3:amd64
libglib2.0-0:amd64
libglusterd0:amd64
libicu72:amd64
libldap-2.5-0:amd64
libmagic1:amd64
libpcre3:amd64
libperl5.36:amd64
libpython3.11-minimal:amd64
libpython3.11-stdlib:amd64
libqt5core5a:amd64
libsubid4:amd64
libthrift-0.17.0:amd64
libunistring2:amd64
mime-support
perl-modules-5.36
proxmox-kernel-6.14.8-2-bpo12-pve-signed
python3.11
python3.11-minimal
python3-pysimplesoap
spl

Script:
Bash:
#!/usr/bin/env bash
# list-not-from-allowed-suites.sh
# Flags installed pkgs that are NOT provided by allowed suites.
# ALLOWED is a regex on the "n=" (suite) field and the suite token in the path.
# Default covers trixie plus trixie-updates/security/etc.
set -euo pipefail
ALLOWED="${ALLOWED:-trixie(|-[a-z0-9]+)*|pve|proxmox}"

dpkg-query -W -f='${binary:Package}\n' | sort -u | while read -r pkg; do
  out="$(apt-cache policy "$pkg" 2>/dev/null || true)"

  # Any repo entries at all?
  if ! grep -qE '^[[:space:]]+[0-9]+[[:space:]]+(http|https|file|cdrom):' <<<"$out"; then
    echo "$pkg"            # orphaned/local (only /var/lib/dpkg/status)
    continue
  fi

  # Accept if any release line says n=<allowed>
  if grep -qE "^[[:space:]]+release .*n=($ALLOWED)([ ,]|$)" <<<"$out"; then
    continue
  fi

  # Fallback: accept if the path token is <allowed>/<component>
  if grep -qE "^[[:space:]]+[0-9]+[[:space:]]+(http|https|file|cdrom):\S+[[:space:]]+($ALLOWED)(/|[[:space:]])" <<<"$out"; then
    continue
  fi

  # Otherwise, flag it
  echo "$pkg"
done
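Usage note: run the script, review the list, and only then purge leftovers while checking apt's removal plan each time (a sketch; the package picks are examples from the list above, and anything a running service still needs must stay):

Code:
./find-old-packages2.sh                        # list leftovers (ALLOWED overrides the suite regex)
apt purge libaio1 libicu72 perl-modules-5.36   # remove a few of them, reviewing apt's plan first
apt autoremove --purge                         # sweep now-orphaned dependencies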

Can we remove those old packages from Bookworm?
I think there is a lot more deprecated stuff, but these are the ones I was able to find quickly.

A cleanup script for big upgrades like 8 to 9 would be cool too. :-)

Thanks & Cheers

EDIT: another question:
pve8to9:
WARN: systemd-boot meta-package installed this will cause issues on upgrades of boot-related packages. Install 'systemd-boot-efi' and 'systemd-boot-tools' explicitly and remove 'systemd-boot'
pbs3to4:
WARN: proxmox-boot-tool is used for bootloader configuration in uefi mode but the separate systemd-boot package, is not installed.
initializing new ESPs will not work until the package is installed.

It's a PVE+PBS server, so the messages are a bit confusing... I installed systemd-boot-efi + systemd-boot-tools "manually" and removed systemd-boot.
Was the PBS message perhaps just not updated yet?
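For reference, the manual fix on the PVE side, as pve8to9 itself suggests, boiled down to:

Code:
apt install systemd-boot-efi systemd-boot-tools   # install the split packages explicitly
apt remove systemd-boot                           # remove only the meta-package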
 
[solved]

I should have read https://pve.proxmox.com/wiki/Upgrade_from_8_to_9 to the end before posting here. => "If guest volumes are only on local LVM or LVM-thin storages, running the migration script is optional." and "Such LVs are not automatically activated anymore, and are instead activated by Proxmox VE when needed." This still leaves some question marks (what does "such" refer to, as it could be read in several ways in that paragraph, and when are the volumes "needed" in this respect?), but at least it is an appropriate answer to my question below.


[original request]

Thanks a lot for publishing PVE 9 - what a great step!

I ran pve8to9 --full, which shows the known notice:

NOTICE: storage 'vg0' has guest volumes with autoactivation enabled
NOTICE: Starting with PVE 9, autoactivation will be disabled for new LVM/LVM-thin guest volumes. This system has some volumes that still have autoactivation enabled. All volumes with autoactivations reside on local storage, where this normally does not cause any issues.

I ran /usr/share/pve-manager/migrations/pve-lvm-disable-autoactivation just to check the output, which confirms with its question "Disable autoactivation on 4 guest volumes on storage 'vg0'?", and checked afterwards that all my CT volumes vg0/vm-1001-disk-0 through vg0/vm-1004-disk-0 are treated as relevant guest volumes. This confuses me a little, as I run my PVE with just one node and local volumes (no cluster, no shared LVM), so I did not expect to fall under #4997.
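For reference, per guest volume the migration script appears to do the equivalent of the following (a sketch using one of my volumes; `--setautoactivation` needs a reasonably recent lvm2):

Code:
# disable autoactivation on a single guest volume; repeat for each one
lvchange --setautoactivation n vg0/vm-1001-disk-0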

Is there any need to disable autoactivation nevertheless? Or did I misconfigure my environment so that I am treated as being in a clustered environment, and this needs to or should be fixed? Or did I just misunderstand the concept of local versus shared volumes?

Appreciate any hints or pushes in a proper direction. Thanks!


Code:
root@kallisto ~ # pvesm status
Name           Type     Status           Total            Used       Available        %
backup1         dir     active        25627028        13601168        10766372   53.07%
local           dir     active        25627028        13601168        10766372   53.07%
vg0             lvm     active       467648512       439353344        28295168   93.95%


root@kallisto ~ # pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg0
  PV Size               <445,99 GiB / not usable 4,00 MiB
  Allocatable           yes
  PE Size               4,00 MiB
  Total PE              114172
  Free PE               6908
  Allocated PE          107264
  PV UUID               rRffYy-5eWM-21VA-5n3B-Kqat-ImRt-WKqWRA


root@kallisto ~ # vgdisplay
  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                7
  Open LV               6
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               445,98 GiB
  PE Size               4,00 MiB
  Total PE              114172
  Alloc PE / Size       107264 / 419,00 GiB
  Free  PE / Size       6908 / 26,98 GiB
  VG UUID               uP8X1k-gtVf-HHOL-e06S-AiPD-jWLa-0ngyvv

root@kallisto ~ # lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg0/root
  LV Name                root
  VG Name                vg0
  LV UUID                1U1edU-SY9x-k3tW-QVHn-odMt-dWa0-TwHeMa
  LV Write Access        read/write
  LV Creation host, time rescue, 2023-11-12 18:36:22 +0100
  LV Status              available
  # open                 1
  LV Size                25,00 GiB
  Current LE             6400
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/vg0/swap
  LV Name                swap
  VG Name                vg0
  LV UUID                UQ8r5o-tDFL-zwNd-nBvi-VWVH-K2wV-IkrGpX
  LV Write Access        read/write
  LV Creation host, time rescue, 2023-11-12 18:36:23 +0100
  LV Status              available
  # open                 2
  LV Size                6,00 GiB
  Current LE             1536
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

  --- Logical volume ---
  LV Path                /dev/vg0/backup
  LV Name                backup
  VG Name                vg0
  LV UUID                P0bPrW-AxQX-EyMM-dNr5-rJeW-6eHm-obWMsr
  LV Write Access        read/write
  LV Creation host, time kallisto, 2023-11-12 19:20:04 +0100
  LV Status              available
  # open                 0
  LV Size                150,00 GiB
  Current LE             38400
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2

  --- Logical volume ---
  LV Path                /dev/vg0/vm-1001-disk-0
  LV Name                vm-1001-disk-0
  VG Name                vg0
  LV UUID                NKc7Ki-Nz2F-1QgF-xFjS-LY9n-ffos-L1cVhm
  LV Write Access        read/write
  LV Creation host, time kallisto, 2023-11-13 02:23:18 +0100
  LV Status              available
  # open                 1
  LV Size                30,00 GiB
  Current LE             7680
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:3

  --- Logical volume ---
  LV Path                /dev/vg0/vm-1003-disk-0
  LV Name                vm-1003-disk-0
  VG Name                vg0
  LV UUID                zreyne-FSIB-zyMf-mZxa-EB74-NNmd-ebuhww
  LV Write Access        read/write
  LV Creation host, time kallisto, 2023-11-18 03:06:46 +0100
  LV Status              available
  # open                 1
  LV Size                100,00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:4

  --- Logical volume ---
  LV Path                /dev/vg0/vm-1004-disk-0
  LV Name                vm-1004-disk-0
  VG Name                vg0
  LV UUID                wRMkJL-uo8Z-K0O7-2TkC-mRl1-ugaI-tlO3ye
  LV Write Access        read/write
  LV Creation host, time kallisto, 2023-11-18 03:12:42 +0100
  LV Status              available
  # open                 1
  LV Size                8,00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:5

  --- Logical volume ---
  LV Path                /dev/vg0/vm-1002-disk-0
  LV Name                vm-1002-disk-0
  VG Name                vg0
  LV UUID                YMMvsN-g59Y-L9HD-ZP5s-hQn8-NqFA-sgTXGa
  LV Write Access        read/write
  LV Creation host, time kallisto, 2023-12-11 01:37:08 +0100
  LV Status              available
  # open                 1
  LV Size                100,00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:6
 
This disk is dying. Replace it ASAP. I have no idea how to do that safely. What does your underlying PVE storage look like?

I'm only guessing, but the fact that specific VMs in the backup job are failing suggests that there's a specific physical area of the disk that's going bad, and data associated with those jobs lives there.

In the meantime, check your backup destination (is it PBS?) and make sure it doesn't prune your last verified good backup of the impacted VM(s). You'll need to restore from those.

Thanks @SInisterPisces

Regarding the disk being on the way out: is this actually the case, though? Overall SMART status passes, while the self-test fails on at least one block (which I understand should be flagged and reallocated, but requires some effort on my part to make that happen). I take your point though; I have PBS backups and will ensure they aren't pruned before I need them.


That said, I can't help thinking there is potentially something else amiss; the timing just seems too coincidental:
  1. The backup errors only started occurring after upgrading PVE 8 to 9 (PBS backups run nightly, so they have failed ever since the upgrade)
  2. Those errors occur immediately on attempted backup, against 4 different templates, and all in the same way (The device is not writable: Permission denied) while mounting the storage, rather than at some arbitrary block during a read op
  3. The smartd warning didn't present until 3 days *after* upgrading, and *may* be unrelated
  4. Cloning the affected templates (full clone) to new VMs works without issue, indicating the storage for those templates may not be at issue (right?)
  5. I am able to restore all 4 of those templates from previous PBS backups (prior to the upgrade) without issue
  6. Subsequent backup attempts (following the restore) for those templates continue to fail in the same way

I just want to rule out that this is still a Proxmox issue, independent of any potential drive issue. (I guess the only way to really be certain of that is with a known-good drive + restored backups, though my gut tells me the issue will persist.)
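For reference, the SMART checks I mean are along these lines (a sketch; /dev/sdX is a placeholder for the actual disk):

Code:
smartctl -H /dev/sdX           # overall health assessment (passes here)
smartctl -t long /dev/sdX      # kick off the long self-test that flags the bad block
smartctl -l selftest /dev/sdX  # review the self-test results once finished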
 
Followed the directions carefully and successfully updated from PVE 8.4.9 to 9.0.3!

No huge surprises, everything was covered in the wiki or via AI guidance.

AMD Ryzen 7 5700X on x570 chipset
 
Upgraded as well using the official documentation from 8.4.9 to 9.0.3 without any issues. Veeam had some issues, but then again they haven't updated their software to officially support the new release.

With thick LVMs, do I need to change storage.cfg and add the snapshot-as-volume-chain flag to already created LVM volumes? It also mentions that "Enabling or disabling this flag only affects newly created virtual disk volumes." Would cloning a VM count as a newly created disk, so that snapshots can be enabled on already created VMs?
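For context, my understanding is that the flag is set per storage in /etc/pve/storage.cfg, something like this sketch (storage name and content options are assumptions from my setup):

Code:
lvm: vg0
        vgname vg0
        content images,rootdir
        snapshot-as-volume-chain 1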
 
I have upgraded both of my Proxmox VE (PVE) nodes to version 9. During the process, I received a warning about the systemd-boot meta-package, but I was unaware that it needed to be removed before rebooting. After the upgrade, running the pve8to9 script no longer displays the systemd-boot meta-package warning. However, I'm concerned that not removing the package prior to rebooting could cause issues with future upgrades, potentially risking system stability.

The current output of the pve8to9 script is:
INFO: systemd-boot used as bootloader and fitting meta-package installed.

Is there a way to remedy this?

Both nodes are running fine and humming along (I think)
The check was adapted in the meantime to deal with that situation:
https://lore.proxmox.com/all/175466070754.625540.12444949078147201723.b4-ty@proxmox.com/T/#u
(The current recommendation is to remove the systemd-boot meta-package after upgrading, and to ensure that `efibootmgr -v` points to a sensible boot-loader: proxmox/grubx64.efi, or proxmox/shimx64.efi, for systems which are set up with EXT4/XFS and LVM-Thin and booting with UEFI.)
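To make the verification step concrete (a sketch; the BootCurrent entry should reference the proxmox loader):

Code:
efibootmgr -v   # look for \EFI\proxmox\grubx64.efi or \EFI\proxmox\shimx64.efi in the active entry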

I hope this helps!
 
I also noticed that after installing via the ISO, dnsmasq did not get installed.
I configured a simple SDN and enabled DHCP but when I clicked apply I got:
WARN: please install the 'dnsmasq' package in order to use the DHCP feature!
TASK ERROR: Could not run before_regenerate for DHCP plugin dnsmasq cannot reload with missing 'dnsmasq' package

In previous ISO versions, dnsmasq was also installed, ever since SDN was introduced.

AFAIK dnsmasq was never shipped via our ISO; it's also explicitly mentioned in the docs that it needs to be installed afterwards [1].

[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvesdn_install_dhcp_ipam
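If I recall the linked section correctly, the install boils down to something like the following (the disable step stops Debian's default instance, since SDN manages its own dnsmasq instances):

Code:
apt update && apt install dnsmasq
systemctl disable --now dnsmasq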
 
Again, for all the people who had issues with their LVM thin pools and needed to repair them ( @Jedis @Kevo @Damon1974 @thomascgh ):

Thank you for the logs! We identified the issue and it's a missing build flag for Debian's lvm2 package. For details, see: https://lore.proxmox.com/pve-devel/20250808200818.169456-1-f.ebner@proxmox.com/T/

In short, thin pools with certain minor issues won't get auto-repaired and won't auto-activate unless a certain option is specified. That option should be enabled by default, but the missing build flag means the default lacks it.
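Until the rebuilt package is available on your system, affected pools can usually be repaired manually, roughly like this sketch (pve/data is the default thin pool name; make sure you have backups first):

Code:
lvchange -an pve/data          # the pool must be inactive for repair
lvconvert --repair pve/data    # runs thin_check/thin_repair underneath
lvchange -ay pve/data          # reactivate the repaired pool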

Our plan is to rebuild and ship an updated version of the package until the bug is fixed in Debian Trixie itself.

EDIT: the rebuilt lvm2 package 2.03.31-2+pmx1 is available on the pve-test repository now.
 
Just upgraded one node in a cluster of 7 nodes. If I log in to any Proxmox 8 node, the status of the Proxmox 9 node is grey, with no CPU and memory usage on the summary page and no VM names; everything else works fine. If I log in to the Proxmox 9 node, the status of all nodes is green. Restarting pvestatd and the upgraded node does not help.
 
Hi,
I upgraded 6 different hosts I have. Every one of them was smooth. A few VMs didn't start because it said "TASK ERROR: KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS.", so I had to uncheck the option for "KVM hardware virtualization".
that should not happen if you properly enabled hardware virtualization in the BIOS, and if it does happen, it should happen for all VMs using hardware virtualization. Please open a new thread pinging me with @fiona and providing more details, like the VM configuration (`qm config <ID>` of an affected and a non-affected VM, replacing <ID> with the actual ID). Please also provide the output of `pveversion -v` and the system logs/journal from the current boot (or the boot you experienced the issue with, if you already rebooted), e.g. `journalctl -b > /tmp/boot.txt`.
 
Just upgraded one node in a cluster of 7 nodes. If I log in to any Proxmox 8 node, the status of the Proxmox 9 node is grey, with no CPU and memory usage on the summary page and no VM names; everything else works fine. If I log in to the Proxmox 9 node, the status of all nodes is green. Restarting pvestatd and the upgraded node does not help.
Are there any logs for pvestatd in the journal of the PVE 8 hosts?
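For example (a sketch):

Code:
journalctl -b -u pvestatd --no-pager | tail -n 50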