Proxmox Virtual Environment 9.0 released!

Check your /etc/apt/sources.list.
Each debian.org "deb" line should end with "main non-free contrib non-free-firmware".

For example:
deb http://ftp.de.debian.org/debian trixie main non-free contrib non-free-firmware

deb http://ftp.de.debian.org/debian trixie-updates main non-free contrib non-free-firmware

deb http://security.debian.org trixie-security main non-free contrib non-free-firmware

This is what I see. I'd expect these components to be added by default, so why aren't they?

Would adding the missing component to each line help?
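If some debian.org lines are missing the component, a sketch like this (my own example, not from the announcement) appends it. It's demonstrated on a throwaway sample file here; point it at /etc/apt/sources.list only after making a backup:

```shell
# Sketch: append "non-free-firmware" to each debian.org line that lacks it.
# Demonstrated on a sample file, not the real one.
f=$(mktemp)
cat > "$f" <<'EOF'
deb http://ftp.de.debian.org/debian trixie main non-free contrib
deb http://security.debian.org trixie-security main non-free contrib non-free-firmware
EOF
# For lines matching debian.org that do NOT already contain non-free-firmware,
# append the component at the end of the line.
sed -i '/debian\.org/{/non-free-firmware/!s/$/ non-free-firmware/}' "$f"
cat "$f"
```

After editing the real file the same way, `apt update` should run without warnings about missing components.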
 
This release (and the beta) has an issue where the graphical installer UI does not display when using IPMI ASPEED 2600 remote video: just a grey screen and a moving cursor. I had to fall back to the terminal-based installer.

Hi everyone,
I'm experiencing the same issue with a white screen during installation when using the proxmox-ve_9.1.iso.
With the older ISO proxmox-ve_8.4-1.iso, the installation works without any problems.

My server is an HP ML350 Gen10.
I’ve tried creating the installation USB stick using Rufus, Ventoy, and Balena Etcher, but I always get the same result: a white screen right after the installer starts.

Has anyone solved this issue or has any suggestions?

Thanks in advance!
 
Updated my 5-node cluster (all mini PCs). All seems well so far. Two observations on the new dashboard metrics:

1) When switching between Maximum and Average views - nothing happens ?!?
2) Depending on time span, I get NO data in Memory Pressure Stall or Network Traffic ?!?

 
Just a note on what I guess is a common small home-lab scenario. I have a standalone Proxmox host that also runs the firewall (pfSense), DNS (BIND), and DHCP (Kea) as three separate VMs. I stopped all other VMs but kept these three running during the upgrade, and it completed without issues. Because I have a UPS attached and use NUT, I got prompts for the NUT config files in addition to the /etc/issue and lvm.conf prompts mentioned in the wiki. I did not get any prompts for the sshd_config, GRUB, or chrony.conf changes the wiki mentions.
 
1) When switching between Maximum and Average views - nothing happens ?!?
The longer the selected time span, the more visible the difference is. Don't you have more spikes in the CPU graph with MAX?

2) Depending on time span, I get NO data in Memory Pressure Stall or Network Traffic ?!?
The new pressure stall graph data is only collected with PVE9. Prior to that, there won't be anything to graph.
Also, memory stalls are rather rare. One way I could produce them was to run stress-ng with many memory workers. Otherwise, it usually should be fast enough :)
And the network traffic graph might show almost nothing (in both MAX and AVERAGE) because, due to the new resolution, spikes were flattened during the migration. We are currently looking into whether we can recover the old data from the old, lower-resolution RRD files where available.
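The memory-stall reproduction mentioned above can be sketched like this (the exact stress-ng flags are my guess; the reply only says "many memory workers"):

```shell
# Sketch: push memory hard with stress-ng (if installed) so PSI registers stalls,
# then read the raw pressure-stall data the new graph is based on.
if command -v stress-ng >/dev/null 2>&1; then
    stress-ng --vm 8 --vm-bytes 80% --timeout 10s
fi
cat /proc/pressure/memory 2>/dev/null || echo "PSI not available on this kernel"
```

On an idle host the `avg10`/`avg60` values in /proc/pressure/memory typically stay at 0.00, which matches the mostly empty graph.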
 
Thank you for the hard work. Running pve8to9 seems nearly clean in my little cluster, but I got the following warning:

INFO: Checking bootloader configuration...
WARN: systemd-boot meta-package installed but the system does not seem to use it for booting. This can cause problems on upgrades of other boot-related packages. Consider removing 'systemd-boot'


So, can I safely remove systemd-boot, or will all hell break loose?

Regards,
Albert
 
My upgrade did break the system; it no longer boots. When starting the PC I end up in the BIOS screen, similar to what is described here. The system is on an NVMe SSD with a ZFS filesystem. How can I recover?
Hm - we did quite a few fixes in that area - precisely due to the report you linked.
Regarding recovery - I'd boot the 9.0 ISO in debug mode and get into the system from the second debug shell (let it boot into the first shell, which on some systems only produces garbled screen output, then hit Ctrl-D).
* there, import the pool with altroot: `zpool import -R /target rpool` (you will probably need to add `-f`)
* bind mount proc,sys,... (see https://forum.proxmox.com/threads/boot-issues.57515/post-264943 for some hints regarding which filesystems you should bindmount)
* chroot into the target dir
* run proxmox-boot-tool reinit
that should fix the issue
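The steps above, collected into a small script sketch (pool name `rpool` and mountpoint `/target` as in the post; this must only be run from the ISO's debug shell, never on a live system):

```shell
# Sketch of the recovery steps, written to a script. Only the syntax check at
# the end is run here; nothing on the system is touched.
cat > /tmp/pve-recover.sh <<'EOF'
set -e
zpool import -f -R /target rpool         # import the root pool under /target
for fs in proc sys dev; do               # bind-mount the virtual filesystems
    mount --rbind "/$fs" "/target/$fs"
done
chroot /target proxmox-boot-tool reinit  # rewrite the boot entries
EOF
bash -n /tmp/pve-recover.sh && echo "syntax OK"
```

From the debug shell you would then run `bash /tmp/pve-recover.sh`; the `bash -n` above only checks the syntax without executing anything.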

If not - or if proxmox-boot-tool reports errors - please open a new thread (and mention me with @Stoiko Ivanov ) sharing logs (as text if possible else as screenshots), and a few more details (is the system secure-boot enabled, do you have anything else installed there)

Else - the apt logs (/var/log/apt/term.log, and /var/log/apt/history.log) should cover the upgrade and might help in pinpointing this further.

I hope this helps!
 
Thank you for the hard work. Running pve8to9 seems nearly clean in my little cluster, but I got the following warning:

INFO: Checking bootloader configuration...
WARN: systemd-boot meta-package installed but the system does not seem to use it for booting. This can cause problems on upgrades of other boot-related packages. Consider removing 'systemd-boot'


So, can I safely remove systemd-boot, or will all hell break loose?

Regards,
Albert
We tried to make those checks as safe as possible so this should not cause issues.

A bit of background - currently systems:
* having root on ZFS or BTRFS
* booting using UEFI (not legacy bios boot)
* not having secure-boot enabled
use systemd-boot for booting
`proxmox-boot-tool status` should provide some helpful information

Additionally, the `systemd-boot` package got split up a bit further in trixie, and proxmox-boot-tool only needs `systemd-boot-tools` and `systemd-boot-efi`. The `systemd-boot` meta-package is currently incompatible, as it tries to update the EFI partition even when it is not mounted, which causes an error upon upgrade.

I hope this helps!
 
Just hopping in to say I upgraded from 8.4.1 to 9 and it wiped my EFI partition. Unlike ulistermclane, I had a default ext4 LVM partition.
Could you open a new thread, provide details, and tag me?
 
We tried to make those checks as safe as possible so this should not cause issues.

A bit of background - currently systems:
* having root on ZFS or BTRFS
* booting using UEFI (not legacy bios boot)
* not having secure-boot enabled
use systemd-boot for booting
`proxmox-boot-tool status` should provide some helpful information

Additionally, the `systemd-boot` package got split up a bit further in trixie, and proxmox-boot-tool only needs `systemd-boot-tools` and `systemd-boot-efi`. The `systemd-boot` meta-package is currently incompatible, as it tries to update the EFI partition even when it is not mounted, which causes an error upon upgrade.

I hope this helps!
I also got the same warning when running pve8to9

When trying to use `proxmox-boot-tool status` I get the following output:
Code:
root@bod-pve-01:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.

What does this mean?
 
Hi everyone,
I'm experiencing the same issue with a white screen during installation when using the proxmox-ve_9.1.iso.
With the older ISO proxmox-ve_8.4-1.iso, the installation works without any problems.

My server is an HP ML350 Gen10.
I’ve tried creating the installation USB stick using Rufus, Ventoy, and Balena Etcher, but I always get the same result: a white screen right after the installer starts.

Has anyone solved this issue or has any suggestions?

Thanks in advance!
Asus Remote KVM, same error.
 

I also got the same warning when running pve8to9

When trying to use `proxmox-boot-tool status` I get the following output:
Code:
root@bod-pve-01:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.

What does this mean?
This means you are not using `proxmox-boot-tool` at the moment. Please check the output of `pve8to9`; it should tell you whether you need to remove or install packages as part of the upgrade.
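To double-check how a node actually boots, a quick sketch (standard file locations, my example rather than something from the thread):

```shell
# Sketch: UEFI systems expose /sys/firmware/efi; proxmox-boot-tool records the
# ESPs it manages in /etc/kernel/proxmox-boot-uuids.
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
else
    echo "booted via legacy BIOS"
fi
if [ -f /etc/kernel/proxmox-boot-uuids ]; then
    cat /etc/kernel/proxmox-boot-uuids
else
    echo "proxmox-boot-tool is not managing any ESP here"
fi
```

If the second check prints nothing to manage, the `E: /etc/kernel/proxmox-boot-uuids does not exist.` message above is expected rather than an error.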
 
Strange, but on my Mac (macOS 15.6) either the documentation is wrong or I am missing something. Same result if I unmount first and run with sudo.

Code:
xavier@Mac Downloads % ls -l proxmox-ve_9.0-1.iso
-rw-r--r--@ 1 xavier  staff  1641615360 Aug  6 10:10 proxmox-ve_9.0-1.iso


xavier@Mac Downloads % hdiutil convert proxmox-ve_9.0-1.iso -format UDRW proxmox-ve_9.0-1.dmg
hdiutil: convert: only a single input file can be specified
Usage:  hdiutil convert -format <format> -o <outfile> [options] <image>
        hdiutil convert -help

xavier@Mac Downloads % diskutil list
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *251.0 GB   disk0
   1:             Apple_APFS_ISC Container disk1         524.3 MB   disk0s1
   2:                 Apple_APFS Container disk3         245.1 GB   disk0s2
   3:        Apple_APFS_Recovery Container disk2         5.4 GB     disk0s3

/dev/disk3 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +245.1 GB   disk3
                                 Physical Store disk0s2
   1:                APFS Volume Macintosh HD            11.3 GB    disk3s1
   2:              APFS Snapshot com.apple.os.update-... 11.3 GB    disk3s1s1
   3:                APFS Volume Preboot                 7.1 GB     disk3s2
   4:                APFS Volume Recovery                1.0 GB     disk3s3
   5:                APFS Volume Data                    106.3 GB   disk3s5
   6:                APFS Volume VM                      2.1 GB     disk3s6

/dev/disk4 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *2.1 GB     disk4
   1:                 DOS_FAT_32 RESCUE701               2.1 GB     disk4s1

xavier@Mac Downloads % dd bs=1M conv=fdatasync if=proxmox-ve_9.0-1.iso of=/dev/disk4
dd: unknown conversion fdatasync
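A sketch of adjusted commands for the two errors above (hedged: the `-o` flag comes straight from hdiutil's own usage message; BSD dd simply has no `conv=fdatasync`, and lowercase `bs=1m` plus the raw `/dev/rdiskN` device are common macOS practice, not from the Proxmox docs). Double-check the disk number before writing:

```shell
# 1) hdiutil wants the output file via -o, per its usage message:
hdiutil convert proxmox-ve_9.0-1.iso -format UDRW -o proxmox-ve_9.0-1.dmg

# 2) BSD dd: no conv=fdatasync, lowercase size suffix; write to the raw
#    device and sync afterwards. DESTRUCTIVE - verify /dev/disk4 first!
diskutil unmountDisk /dev/disk4
sudo dd if=proxmox-ve_9.0-1.iso of=/dev/rdisk4 bs=1m
sync
```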
 
OK, updated from 8 to 9 as well. I didn't remove systemd-boot before the upgrade, but after. Everything seems to work, but the NFS storage has a problem.

When I go to VM Disks, I get a "mount error: exit code 32 (600)". Any ideas?

Albert

Want to mention that this worked fine before the upgrade.

Found these messages in /var/log/syslog and they seem to be related:
2025-08-06T10:41:56.048404+02:00 pve1 pve-firewall[1305]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
2025-08-06T10:41:57.179740+02:00 pve1 pvestatd[1310]: mount error: exit code 32

The pve-firewall messages seem to be generated every 10 seconds.
 
I just upgraded to 9.0.3 and it's fine except for one troublesome Fedora VM.

Sometimes it refuses to start with "TASK ERROR: timeout waiting on systemd", and sometimes it appears to start, but when I try to open the console I get "error: failed to run vncproxy". I can't SSH into the VM, none of the containers in it start, and it's pinned at 100% memory usage.

The problem VM has a PCIe device passed through, but so does another, perfectly functional VM on the node. Regardless, I tried not passing through any devices and that didn't help. I've toggled memory ballooning, changed the display type, and restored from a backup, but so far no success.
The problem VM has a PCIe device passed through but so is another perfectly functional VM on the node, regardless I tried not passing through any devices and that didn't help. I've toggled memory ballooning, changed the display type, restored from a backup, but so far no success.