Proxmox VE 9.0 BETA released!

After upgrading to 9.0, I get an error running pveam update.

pveam update
update failed - see /var/log/pveam.log for details
The log shows:
2025-07-19 21:03:53 starting update
2025-07-19 21:03:53 start download http://download.proxmox.com/images/aplinfo-pve-9.dat.asc
2025-07-19 21:03:53 download finished: 200 OK
2025-07-19 21:03:53 start download http://download.proxmox.com/images/aplinfo-pve-9.dat.gz
2025-07-19 21:03:53 download finished: 200 OK
2025-07-19 21:03:53 signature verification: gpgv: Signature made Tue Jun 17 21:59:18 2025 CEST
2025-07-19 21:03:53 signature verification: gpgv: using RSA key 24B30F06Exxxxxxxxxxxxxxxxxxxxxxxxxxxx
2025-07-19 21:03:53 signature verification: gpgv: [don't know]: invalid packet (ctb=2d)
2025-07-19 21:03:53 signature verification: gpgv: keydb_search failed: Invalid packet
2025-07-19 21:03:53 signature verification: gpgv: [don't know]: invalid packet (ctb=2d)
2025-07-19 21:03:53 signature verification: gpgv: keydb_search failed: Invalid packet
2025-07-19 21:03:53 signature verification: gpgv: Can't check signature: No public key
2025-07-19 21:03:53 unable to verify signature - command '/usr/bin/gpgv -q --keyring /usr/share/doc/pve-manager/trustedkeys.gpg /var/lib/pve-manager/apl-info/pveam-download.proxmox.com.tmp.469843.asc /var/lib/pve-manager/apl-info/pveam-download.proxmox.com.tmp.469843' failed: exit code 2
Everything else seems to work OK.
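For anyone else debugging this: the keydb_search errors suggest gpgv stumbles over the keyring rather than the downloaded index, so a quick sanity check might look like this (just a sketch with standard tools; the path is taken from the log above):

Code:
# the keyring should be a binary OpenPGP keyring, not ASCII-armored text
file /usr/share/doc/pve-manager/trustedkeys.gpg
gpg --show-keys /usr/share/doc/pve-manager/trustedkeys.gpg

# check whether the file still matches what the pve-manager package shipped
dpkg -V pve-manager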
 
However, GUI -> VM Summary now shows
IPs: Requires 'VM.Monitor' Privileges
for all my VMs, which reported their IPs fine in PVE 8.
IIRC a colleague noticed that during internal testing, and the fix still made it in before we announced the beta.
I just re-checked and found no references to VM.Monitor in the code anymore. It might be some other issue, but can you please post the output of pveversion -v so that we can ensure it's not something old left over on your system?
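For example, besides pveversion -v, something like the following would also show any packages stuck in a non-installed state (standard dpkg tooling, just as an extra check):

Code:
pveversion -v
# anything not in the clean "ii" state shows up here
dpkg -l | grep -E 'proxmox|pve' | grep -v '^ii'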
 
Congratulations on the release!

Had a couple of questions after looking at the release notes/roadmap:

Support for snapshots as volume chains on Directory/NFS/CIFS storages (technology preview). Support for snapshots as volume chains can be enabled when creating a new Directory/NFS/CIFS storage. With this setting, taking a VM snapshot persists the current virtual disk state under the snapshot's name and starts a new qcow2 file based on the snapshot. Snapshots as volume chains may yield better performance and reduce VM downtime during snapshot deletion in comparison to qcow2 snapshots.
Am I understanding correctly that this will be an alternative to native QCOW2 snapshots when storing QCOW2-based VM disks on NFS/CIFS/Directory?

proxmox-network-interface-pinning is a tool for assigning permanent "pinned" names to network interfaces.
I'm guessing that this isn't exposed in the GUI and works more like the Proxmox kernel management tool?

Install the microcode package matching the current platform. This ensures that new Proxmox VE installations get available fixes for CPU security issues and other CPU bugs. Note that to get microcode updates that were released after the ISO was built, hosts have to be updated regularly. This also means that installations now have the non-free-firmware repository enabled.
Really glad to see this. I know I was reluctant to turn this on when it wasn't on by default; I'm sure others were as well. Now that it's default, it'll end up on a lot more nodes.

Will the microcode package get installed when upgrading from PVE 8?

Proxmox VE 9 can now transparently handle many network name changes.

These changes may occur when upgrading from Proxmox VE 8.x to Proxmox VE 9.0 due to new naming scheme policies or the added support for new NIC features. For example, this may happen when upgrading from Kernel 6.8 to Kernel 6.14. If the previous primary name remains available as an alternative name, manual intervention may not be necessary since Proxmox VE 9.0 allows the use of alternative names in network configurations and firewall rules.

However, in some cases, the previous primary name might not be available as an alternative name after the upgrade. In such cases, manual reconfiguration after the upgrade is currently still necessary, but this may change during the beta phase.
This is another great quality of life improvement. :) It sounds like even if everything seems to be working, we should probably check the autoconfiguration on install/upgrade just to avoid any surprises later.

VirtIO vNICs: Changed default for MTU field

Leaving the MTU field of a VirtIO vNIC unset now causes the vNIC to inherit the bridge MTU. Previously, the MTU would default to MTU 1500. The pve8to9 checklist scripts will detect vNICs where the MTU would change after upgrade. If you want affected vNICs to keep using MTU 1500, you need to manually configure MTU 1500 before upgrade.

I still routinely forget to adjust the VirtIO vNIC MTU for my 10 Gbps NICs, even after years. I'm really glad to see this. :)
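For anyone in the same boat, pinning an explicit MTU before the upgrade should be something like the following (a sketch; copy your existing net0 line from qm config and just add mtu=1500, the MAC and bridge below are placeholders):

Code:
# show the current definition of the vNIC
qm config <vmid> | grep ^net0

# re-set it with the same values plus an explicit MTU
qm set <vmid> --net0 virtio=BC:24:11:AA:BB:CC,bridge=vmbr0,mtu=1500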
 
It might be some other issue, but can you please post the output of pveversion -v so that we can ensure it's not something old left over on your system?
Thanks for looking into it.
I restarted the hardware and left it running overnight, and now it works.

While looking for something else, I noticed this system already had the error below prior to the v8.4 -> v9.0 upgrade, which may be the cause and therefore unrelated to this thread. Sorry if that's the case.
Code:
root@pve3:/# apt-get dist-upgrade        
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Setting up grub-pc (2.12-9+pmx2) ...
dpkg: error processing package grub-pc (--configure):
 installed grub-pc package post-installation script subprocess returned error exit status 20
Errors were encountered while processing:
 grub-pc
E: Sub-process /usr/bin/dpkg returned an error code (1)
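If that grub-pc failure needs fixing, I'd probably try roughly this (a sketch; /dev/sda is just an example boot disk, adjust to the actual one):

Code:
# see which disk(s) grub-pc is configured to install to
debconf-show grub-pc

# retry the failed post-installation step
dpkg --configure -a

# if it still fails, reinstall GRUB to the boot disk and regenerate the config
grub-install /dev/sda
update-grub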

please post the output of pveversion -v

Code:
root@pve3:~# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-1-pve)
pve-manager: 9.0.0~10 (running version: 9.0.0~10/0fef50945ccd3b7e)
proxmox-kernel-helper: 9.0.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.14.8-1-pve-signed: 6.14.8-1
proxmox-kernel-6.14: 6.14.8-1
proxmox-kernel-6.14.8-1-bpo12-pve-signed: 6.14.8-1~bpo12+1
proxmox-kernel-6.8.12-12-pve-signed: 6.8.12-12
proxmox-kernel-6.8: 6.8.12-12
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 19.2.2-pve2
corosync: 3.1.9-pve2
criu: 4.1-1
ifupdown2: 3.3.0-1+pmx7
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.2
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.2
libpve-cluster-perl: 9.0.2
libpve-common-perl: 9.0.6
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.1
libpve-network-perl: 1.1.0
libpve-rs-perl: 0.10.4
libpve-storage-perl: 9.0.6
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.3-1
proxmox-backup-file-restore: 4.0.3-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.0.0
proxmox-kernel-helper: 9.0.0
proxmox-mail-forward: 1.0.1
proxmox-mini-journalreader: 1.6
proxmox-widget-toolkit: 5.0.2
pve-cluster: 9.0.2
pve-container: 6.0.2
pve-docs: 9.0.4
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.0
pve-firewall: 6.0.2
pve-firmware: 3.16-3
pve-ha-manager: 5.0.1
pve-i18n: 3.5.0
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.4
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1
root@pve3:~#
 

I'll add my forum post to this list:

 
Kernel is broken with Mellanox 100G Connectx-5 VF.

kvm: -device vfio-pci,host=0000:81:00.1,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:81:00.1: error getting device from group 89: Permission denied
Verify all devices in group 89 are bound to vfio-<bus> or pci-stub and not already in use

Everything works fine with kernel: proxmox-kernel-6.16.0-6-pve_6.16.0-6_amd64.deb.

Please fix it before the final release.
 
Am I understanding correctly that this will be an alternative to native QCOW2 snapshots when storing QCOW2-based VM disks on NFS/CIFS/Directory?
Yes, it can also be used for those storages, but there one must enable it at storage creation time (i.e., when adding it as a new storage in PVE). This is because these storages already supported the qcow2 format before and were rather flexible with the names one could choose manually, so, to avoid ambiguity, we cannot simply allow turning it on for existing storage config entries.

But when it gets enabled (the checkbox is already there in the UI), the snapshots will be created as separate volumes, which avoids some disadvantages of in-format qcow2 snapshots, such as a temporary performance drop on e.g. NFS.
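Conceptually it works like external qcow2 snapshots: the snapshot becomes its own file, and a fresh qcow2 overlay referencing it becomes the active disk. At the qemu-img level that looks roughly like this (file names are only illustrative, not the exact naming or steps PVE performs):

Code:
# the current disk state is persisted under the snapshot's name...
mv vm-100-disk-0.qcow2 snap-mysnap-vm-100-disk-0.qcow2
# ...and a new overlay referencing it becomes the active volume
qemu-img create -f qcow2 -b snap-mysnap-vm-100-disk-0.qcow2 -F qcow2 vm-100-disk-0.qcow2
# the resulting chain can be inspected with
qemu-img info --backing-chain vm-100-disk-0.qcow2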
FWIW, we focused a bit more of our testing on the LVM side, so there might be a few more edge cases left for the directory-based plugins, and it's definitely less of a pain point there compared to the lack of simple, vendor-agnostic snapshot support for volumes on SANs.

I'm guessing that this isn't exposed in the GUI and works more like the Proxmox kernel management tool?
Yes, you're right, at the moment this is CLI only. Interface name pinning is normally something that admins won't change frequently, and it also needs to adapt lots of configs and requires a reboot to be fully activated automatically, so doing that via the CLI is, for now, IMO the better option. Integration into the installer is something we are actively looking into, and API/UI integration might be a possibility in the future, but for the short term it will stay on the CLI.
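For context, the pinning itself essentially boils down to a systemd .link file matching the NIC (e.g. by MAC address) plus updating the references in the network and firewall configs. A minimal sketch, where the file name, MAC address and target name are just examples:

Code:
# /etc/systemd/network/50-pin-nic0.link (example only)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=nic0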

Really glad to see this. I know I was reluctant to turn this on when it wasn't on by default; I'm sure others were as well. Now that it's default, it'll end up on a lot more nodes.
Good to hear. We actually pondered this for the previous major release, but as it was rather close to our planned release date back then and not blocking anybody from installing it themselves, we postponed it. Now with the beta it felt like a good time to revisit this.
Will the microcode package get installed when upgrading from PVE 8?
No, we cannot automatically determine which package to install (AMD's or Intel's) just by dependencies, and while they nowadays can co-exist, I'm a bit wary of just installing them both. We could add a hint to the upgrade script though, thanks for sparking that idea here.
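For upgraders who want to add it manually, it is roughly the following (standard Debian packages; the non-free-firmware component needs to be enabled in the repository entries first):

Code:
# check the CPU vendor
grep -m1 vendor_id /proc/cpuinfo

# then install the matching package
apt update
apt install intel-microcode    # on GenuineIntel
apt install amd64-microcode    # on AuthenticAMD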

This is another great quality of life improvement. :) It sounds like that even if everything seems to be working, we should probably check the autoconfiguration on install/upgrade just to avoid any surprises later.
Yes, and some renames just cannot be covered by alternative names, as those might change too, so it doesn't cover every edge case.
That's why pinning once, or on installation, is the best bet for long-term name stability.
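Checking whether the old name is still around as an alternative name is a quick check with standard iproute2, e.g.:

Code:
# "altname" entries are listed under each interface
ip link show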
I still routinely forget to adjust the VirtIO vNIC MTU for my 10 Gbps NICs, even after years. I'm really glad to see this. :)
Thx for your feedback!
 
While looking for something else, I noticed this system already had the error below prior to the v8.4 -> v9.0 upgrade, which may be the cause and therefore unrelated to this thread. Sorry if that's the case.
That should not matter for this error. Another thing, just to be sure: did you force-reload the web UI after the update, e.g. with CTRL + SHIFT + R, or otherwise clear the browser cache?
 
Kernel is broken with Mellanox 100G Connectx-5 VF.

From the error below I assume you mean passing through that NIC to a VM is broken, not using the NIC on the host itself?

kvm: -device vfio-pci,host=0000:81:00.1,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:81:00.1: error getting device from group 89: Permission denied
Verify all devices in group 89 are bound to vfio-<bus> or pci-stub and not already in use

Please open a new thread, post at least the VM config and the output of journalctl -b 0 | grep -i iommu, and mention @dcsapak
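It would also help to include which devices share that IOMMU group and what they are currently bound to, for example:

Code:
# devices sharing IOMMU group 89 (number taken from the error above)
ls /sys/kernel/iommu_groups/89/devices/

# driver currently bound to the VF
lspci -nnk -s 0000:81:00.1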

Everything works fine with kernel: proxmox-kernel-6.16.0-6-pve_6.16.0-6_amd64.deb.

What do you mean with the above kernel? The 6.16 kernel isn't even released by Torvalds for Linux upstream yet...
 
Thanks for the detailed reply, @t.lamprecht. I was definitely confused about a couple of things there. :)
No, we cannot automatically determine which package to install (AMD's or Intel's) just by dependencies, and while they nowadays can co-exist, I'm a bit wary of just installing them both. We could add a hint to the upgrade script though, thanks for sparking that idea here.

A hint would be great. I was unaware for a long time that the microcode packages existed, and the documentation on how to install them was a bit intimidating and confusing at first, since it involved adding the non-free repos and Google was surfacing some older how-to articles above the freshest Proxmox documentation.

Could the hint include a link to the current PVE documentation on setting up the microcode package(s)?
 
We do not have a torrent file published as we normally only do that for stable releases, but you can use the following magnet link:

Code:
magnet:?xt=urn:btih:15722cc3e0da53c180be9c99d86717a665e073e0&dn=proxmox-ve%5F9.0-BETA-1.iso&tr=http%3A%2F%2Ftorrent.cdn.proxmox.com%3A6969%2Fannounce
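Any BitTorrent client should handle it; from the CLI it can be fetched for example with aria2 (assuming it is built with BitTorrent support, as the Debian package is):

Code:
aria2c 'magnet:?xt=urn:btih:15722cc3e0da53c180be9c99d86717a665e073e0&dn=proxmox-ve%5F9.0-BETA-1.iso&tr=http%3A%2F%2Ftorrent.cdn.proxmox.com%3A6969%2Fannounce'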

Looks like the torrent is dead. Happy to add seeding to it, but would need *someone* to have the complete torrent first.

Adding a web source doesn't help as the metadata isn't complete atm.
 
Looks like the torrent is dead. Happy to add seeding to it, but would need *someone* to have the complete torrent first.

Adding a web source doesn't help as the metadata isn't complete atm.
I tried the magnet link, and it seems alright here (my test was to just copy it to the clipboard and paste it into ktorrent).

FWIW, we also got an initial seeder daemon running on a dedicated host, which is online too. But I'm not that used to producing magnet links, nor did I test wide compatibility with torrent clients beyond ktorrent, so instead of doing all that I have now attached a plain torrent file to this reply.
 

Would be nice to have beta documentation too :)
There's some documentation included with the beta release. If you find anything missing, you can mention me here in the forums in a new thread or open a bug report.

I find the choice of OSPF odd, since the most common setup in the data center is BGP-based, and EVPN is already heavily used in PVE SDN.
Mid-term we have plans to add/move the functionality of the current IS-IS and BGP controllers to this interface as well, and to improve the state of the current EVPN controller/zone UI. OpenFabric/OSPF are popular choices for full-mesh or smaller networks (from the feedback I gathered), so it's not only intended as an EVPN underlay network, but has multiple use cases.
 
On setups with a modified chrony.conf, the 8-to-9 upgrade also raises a question. Maybe add this to the list on the wiki; otherwise a manually set timeserver gets lost, and errors could then show up in Corosync or Ceph.
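One way to avoid losing it is to keep custom servers in a drop-in instead of editing chrony.conf itself, since the default Debian chrony.conf includes a confdir for /etc/chrony/conf.d (the server name below is just an example):

Code:
# content of /etc/chrony/conf.d/local-ntp.conf (survives package upgrades)
server ntp.example.com iburst

Restarting the chrony service afterwards picks up the drop-in.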
 
After upgrading to the beta, I get an error regarding cloud-init on all my VMs, and also for my CTs that are on an LVM-thin pool. None of the VMs or CTs can start:

Code:
TASK ERROR: activating LV 'pve/vm-103-cloudinit' failed:   Check of pool pve/data failed (status:64). Manual repair required!
TASK ERROR: activating LV 'pve/data' failed:   Check of pool pve/data failed (status:64). Manual repair required!
TASK ERROR: activating LV 'pve/vm-103-cloudinit' failed:   Check of pool pve/data failed (status:64). Manual repair required!

Info: I did run the migration script before, although I'm on a local LVM pool.
Can you add information on how to fix this? (Although it seems like a bug and should not happen?)

Code:
root@pveneo:~# lvs
  LV                VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  base-104-disk-0   pve Vri---tz-k    4.00m data                                          
  base-104-disk-1   pve Vri---tz-k   32.00g data                                          
  base-104-disk-2   pve Vri---tz-k   32.00g data                                          
  base-9001-disk-0  pve Vri---tz-k   12.00g data                                          
  data              pve twi---tz-- <319.61g                                              
  root              pve -wi-ao----   96.00g                                              
  swap              pve -wi-ao----    8.00g                                              
  vm-101-cloudinit  pve Vwi---tz--    4.00m data                                          
  vm-101-disk-0     pve Vwi---tz--  125.00g data                                          
  vm-102-cloudinit  pve Vwi---tz--    4.00m data                                          
  vm-102-disk-0     pve Vwi---tz--   60.00g data                                          
  vm-103-cloudinit  pve Vwi---tz--    4.00m data                                          
  vm-103-disk-0     pve Vwi---tz--   12.00g data                                          
  vm-105-disk-0     pve Vwi---tz--   62.00g data                                          
  vm-111-cloudinit  pve Vwi---tz--    4.00m data                                          
  vm-111-disk-0     pve Vwi---tz--   60.00g data                                          
  vm-9001-cloudinit pve Vwi---tz--    4.00m data                                          
root@pveneo:~# pvs
  PV         VG  Fmt  Attr PSize   PFree
  /dev/sda3  pve lvm2 a--  446.12g 16.00g
root@pveneo:~# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1  17   0 wz--n- 446.12g 16.00g

Edit: Fixed via thin_check_options = [ "-q", "--skip-mappings" ]; see the link below.

I fixed this with https://forum.proxmox.com/threads/l...rnel-update-on-pve-7.97406/page-2#post-430860. It seems like it's related to having the LVM-thin pool overprovisioned.
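For anyone else running into this, the workaround described above amounts to roughly the following sketch:

Code:
# add/adjust this line inside the "global { ... }" section of /etc/lvm/lvm.conf:
#   thin_check_options = [ "-q", "--skip-mappings" ]

# then re-activate the volume group so the thin volumes come back
vgchange -ay pve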
Good that you were able to fix this! I ran a quick upgrade test with an overprovisioned LVM-thin with and without running the migration script pre-upgrade, and didn't see this issue. Could it be the case that the custom thin_check_options had already been set in /etc/lvm/lvm.conf before the upgrade, and this then got lost during the upgrade (because lvm.conf was overwritten with the config from the package), and this is why you had to set it again post-upgrade?
 