Proxmox Virtual Environment 9.0 released!

The longer the selected time span, the more visible the difference is. Don't you have more spikes in the CPU graph with MAX?
Nope - identical when I switch between them?!? Although I only updated to PVE 9 yesterday, so there's not much data collected on the new platform yet.
When I look at the "month" view, the peaks move a fraction...

The new pressure stall graph data is only collected with PVE9. Prior to that, there won't be anything to graph.
Also, memory stalls are rather rare. One way I could produce them was to run stress-ng with many memory workers. Otherwise, it usually should be fast enough :)
And the network traffic graph might show pretty much nothing (MAX or AVERAGE?) because, due to the new resolution, spikes got flattened during the migration. We are currently looking into whether we can grab the old data from the old RRD files with the lower resolution, where available.
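
If you want to see the pressure graph actually register something, here is a rough sketch (assuming the stress-ng package is installed, which it isn't by default) to provoke memory pressure and look at the raw PSI counters the graph builds on:

Code:
apt install stress-ng
# run several memory workers for a minute to force reclaim and stalls
stress-ng --vm 8 --vm-bytes 90% --timeout 60s
# raw pressure-stall counters exposed by the kernel
cat /proc/pressure/memory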

Understood - just looks empty
 
Ok, updated from 8 to 9 as well. I didn't remove systemd-boot before the upgrade, but did afterwards. Everything seems to work, but the NFS storage has a problem.

When I go to VM Disks, I get a "mount error: exit code 32 (600)", any ideas?

Albert

Want to mention that this worked fine before the upgrade.

Found these messages in /var/log/syslog and they seem to be related:
2025-08-06T10:41:56.048404+02:00 pve1 pve-firewall[1305]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
2025-08-06T10:41:57.179740+02:00 pve1 pvestatd[1310]: mount error: exit code 32

The pve-firewall messages seem to be generated every 10 seconds.
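
In case it is relevant, a few things one could check here (just a sketch, I have not dug further yet): which iptables backend is active and what the firewall daemon reports.

Code:
iptables -V                              # shows whether the nf_tables or legacy backend is in use
update-alternatives --display iptables   # which alternative is currently selected
pve-firewall status                      # state of the Proxmox firewall daemon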
Ok, I (probably stupidly) removed the NFS share, but now I can't add it anymore. The exports are visible, but I get a code 32 (500) error now.

Quite annoying, as the VMs are located on the NFS share. And the cluster is useless now.
I have never used the firewall option, so there shouldn't be any configuration files for that. Maybe that's the problem: there is nothing to restore.
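
For the record, exit code 32 is just the generic mount(8) failure code; I assume the actual NFS error would show up when trying the mount by hand, something like this (hypothetical server and export names, adjust to your setup):

Code:
showmount -e nfs-server                          # confirm the export is still visible
mkdir -p /mnt/nfstest
mount -t nfs -v nfs-server:/export /mnt/nfstest  # -v prints the real error hidden behind exit code 32
umount /mnt/nfstest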
 
Followed the directions to upgrade an up-to-date 8.4 node running a single VM. No obvious issues until the upgrade reached:

/usr/lib/tmpfiles.d/legacy.conf:14: Duplicate line for path "/run/lock", ignoring.
Setting up libpve-access-control (9.0.3) ...
Setting up grub-efi-amd64-bin (2.12-9+pmx2) ...
Setting up pve-ha-manager (5.0.4) ...
watchdog-mux.service is a disabled or a static unit, not starting it.

Progress: [ 98%] [#########################################################################################################...]

At which point it has hung for 10+ minutes. Impervious to ^C.

No active processes apart from kvm (the single VM is still lightly active). It seems to be trying to restart pve-ha-manager; the innermost process under dist-upgrade, according to htop, is:

/usr/bin/systemd-tty-ask-password-agent --watch
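
As an aside, my understanding is that this agent just sits there waiting for pending password prompts; one could check for those from another shell along these lines (a sketch, I have not confirmed it explains this hang):

Code:
ls /run/systemd/ask-password/             # any ask.* files mean a prompt is pending
systemd-tty-ask-password-agent --query    # answer pending questions interactively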

Running pve8to9 in a new ssh session immediately hangs in D state, immune to ^C or ^\.

pvestatd and pvescheduler processes are also stuck in D state.

In the web UI running on a different node, all nodes have gone grey except the node the UI is running on. This is a 4 node cluster currently fully quorate. Attempts to connect to the upgrading node's proxmox web UI fail.

Any ideas before I force a reboot? Is there a recommended way of continuing the upgrade after a forced reboot? All hints gratefully accepted.

If all else fails I can nuke the node and restore the VM, but I would rather not, as I'll lose several hours of data collection in the VM.
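
For context, what I assume the standard recovery after a forced reboot would look like (happy to be corrected):

Code:
dpkg --configure -a    # finish configuring half-installed packages
apt -f install         # fix broken dependencies, if any
apt dist-upgrade       # resume the remaining upgrade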
 
could you please check "ps faxl" output to see what the process tree below apt looks like? do you have the "udisks2" package installed?
 
Hi Fabian, thanks for the quick reply.

The dist-upgrade responded to the ^C I made after about an hour. Unfortunately I do not have a record of the line that followed, as ps faxl cleared the terminal history.

Looks like the pve8to9 running in the other ssh to the upgrading node also responded to ^C about the same time. The processes in D state are no longer stuck in D.

Other nodes in the web UI are still flagged with grey question mark icons (other than for the node the UI is on, which is still on 8.4.8) but I can actually interact with all the nodes and their LXC/VMs. I was able to make a PBS backup of the single VM on the upgrading node (it is using passthrough data logging hardware so can't just be thrown to another node).

pvecm status continues to show a clean, quorate cluster, whichever node I run it on.

Should I just re-run dist-upgrade and cross my fingers? Happy to leave it for a while if you want me to dig for other info.
 
Should I just re-run dist-upgrade and cross my fingers? Happy to leave it for a while if you want me to dig for other info.
yes please!

also, is udisks2 installed on this system?
 
Hi @fabian.

Ran dist-upgrade, completed immediately with no further actions. Followed by an autoremove which had the useful side effect of running update-initramfs so that element should be good to go.

udisks2 is not installed on any of the nodes.

Upgraded node rebooted via web UI, seems fine: os-release shows trixie, web UI is showing 9.0.3, VM came back up.
 
thanks! sounds like your system is good now. could you maybe post the full /var/log/apt/term.log for the hanging upgrade? maybe that gives us some pointers as to what went wrong there...
 
Just upgraded 2 Proxmox nodes:
- one was running OPNsense with WAN NIC passthrough - success
- one was running Prometheus, some exporters, and Grafana. Grafana had trouble displaying metrics on timeframes from before the upgrade, so I nuked the database instead of fixing/troubleshooting it - medium success
- Plex on the i3 - n355 now finally sees the Alder Lake N iGPU for hardware transcoding because of the newer kernel - huge success

Did all of this while keeping all my VMs/LXCs running, so the only downtime for anything was during the reboot.
 
Great job Proxmox team. Upgraded my test host without issues!
One thing to report though: love the new mobile-friendly UI, it loads and works just fine in Chrome, but when using Firefox it loads the classic UI.
(Pixel 9 Pro XL, Android 16, Firefox 141.0.1)
 
When booting in legacy mode, there is a little typo in the pve8to9 script.
Code:
INFO: Checking bootloader configuration...
INFO: systemd-boot package installed on legacy-boot system is not necessary, consider remoing it
Just for information.
 
Lost network after the update.
Found an error in the logs like:
'RawConfigParser' object has no attribute 'readfp'

If I try to activate a network interface I get the same error.
 
Lost network after the update.
Found an error in the logs like:
'RawConfigParser' object has no attribute 'readfp'

If I try to activate a network interface I get the same error.

What's the output of the following commands?

Code:
pveversion -v
ifreload -avd
tail -n+1 /etc/network/interfaces /etc/network/interfaces.d/*
 
What's the output of the following commands?

Code:
pveversion -v
ifreload -avd
tail -n+1 /etc/network/interfaces /etc/network/interfaces.d/*
pveversion -v
command not found

ifreload -avd
error: main exception: 'RawConfigParser' object has no attribute 'readfp'

Looks like no more PVE :)

Probably easier to just reinstall everything
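
Side note for anyone else hitting this traceback: readfp() was removed from Python's configparser in Python 3.12, so an outdated ifupdown2 breaks on trixie's newer Python. A hedged sketch to confirm that and, assuming the repositories are still reachable, pull the current packages back in:

Code:
# True means the old API is still present, False means anything calling readfp() will crash
python3 -c 'import configparser; print(hasattr(configparser.RawConfigParser, "readfp"))'
apt update
apt install --reinstall ifupdown2    # make sure the PVE 9 version of ifupdown2 is actually installed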
 
Hi,
It's interesting that the memory usage is 101.15% on the pfSense (2.8.0) VM in PVE 9.
On PVE 8, the memory usage was around 70-75%. That was also incorrect, but not over 100%.

The memory balloon is off due to a missing implementation in FreeBSD for calculating the memory usage in QEMU.
Source: https://github.com/aborche/qemu-guest-agent/issues/19
I'm not aware of changes in this area. The fallback is the memory consumption of the whole process cgroup associated with the VM on the host, so that can be more than the total memory assigned to the guest:
https://git.proxmox.com/?p=qemu-ser...c094a357bc937ed92708c07a2908289ab1580e3#l2711
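
To illustrate what that fallback reports, a rough sketch (assuming cgroup v2 and a hypothetical VM ID 100) comparing the host-side cgroup accounting with the configured guest memory:

Code:
# memory charged to the whole QEMU process cgroup of VM 100 on the host
cat /sys/fs/cgroup/qemu.slice/100.scope/memory.current
# memory actually assigned to the guest, for comparison
qm config 100 | grep ^memory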

EDIT: there were actually changes to the fallback, I was not aware of those, see @aaron's response: https://forum.proxmox.com/threads/proxmox-virtual-environment-9-0-released.169258/post-788983
 
Ok, I (probably stupidly) removed the NFS share, but now I can't add it anymore. The exports are visible, but I get a code 32 (500) error now.

Quite annoying, as the VMs are located on the NFS share. And the cluster is useless now.
I have never used the firewall option, so there shouldn't be any configuration files for that. Maybe that's the problem: there is nothing to restore.
Is there any news/fix for this issue? In short, after the update NFS storage mounts don't work anymore, see my previous messages.
 
Excuse me, how can I obtain dhclient (isc-dhcp-client) on a newly installed PVE 9.0? When I configured the network for DHCP, it told me "([Errno 2] No such file or directory: '/sbin/dhclient')". I looked for `isc-dhcp-client`, but it doesn't exist when I try to install it. I found that `/sbin/dhcpcd` exists, and running `apt search dhclient` only returns dhcpcd-base.
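
Since /sbin/dhcpcd is there, I assume something along these lines would work as a stop-gap instead of dhclient (untested, and assuming the management bridge is vmbr0):

Code:
dhcpcd -4 vmbr0        # one-shot IPv4 DHCP lease via dhcpcd instead of the missing dhclient
ip addr show vmbr0     # verify the lease was applied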