Proxmox Virtual Environment 9.0 released!

ip a a 192.168.100.30/24 dev ens3
Error: either "local" is duplicate, or "device" is garbage.

That did not work. My /etc/network/interfaces is clean without much in it.

auto ens3
iface ens3 inet static
address 192.168.100.30
netmask 255.255.255.0
gateway 192.168.100.1
 
/usr/share/ifupdown2/ifupdown/main.py

parser.readfp(configFP)
Editing that method call fixed it. My issue was not with the PVE upgrade to 9.0; that went smooth as butter. I then upgraded the Proxmox Backup Server to 4.0, and after reboot the networking was hard down. Apparently part of the process upgraded Python, and the latest version removed that particular method. This fixed the networking issue, but it sure seems unrelated to Proxmox and more related to Linux directly. Scary, as this could have happened to any user that upgraded the distribution to Trixie.
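For anyone hitting the same traceback: readfp() was removed from Python's ConfigParser and read_file() is its drop-in replacement, so the fix amounts to a one-line edit. A minimal sketch, assuming your main.py matches the line above (back the file up first):

cp /usr/share/ifupdown2/ifupdown/main.py /usr/share/ifupdown2/ifupdown/main.py.bak   # keep a backup
sed -i 's/parser\.readfp(/parser.read_file(/' /usr/share/ifupdown2/ifupdown/main.py
ifreload -a   # reload networking via ifupdown2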
 
The main issue here is probably that the sources for the Debian repositories got updated to trixie, but there are still bookworm PVE repositories left. Please make sure that every repository is correctly configured and then dist-upgrade.
After I got the networking back up, I did find a pbs-enterprise.sources file and commented all the lines out. This allowed apt update to run smoothly with no errors. Thank you!

But there were no packages to install or update. This issue will probably affect other users that update to 4.0 of Proxmox Backup Server. Just a heads up.

** I stand corrected. I had to disable the Enterprise repository and then add the non-production repository. So let the upgrade begin. LOL
*** UPDATE:
Proxmox 9.0 went OK. I did have some NFS storage that went offline and had to run a repair from the console, which took a while; then the NFS was online and accessible. The command was: lvconvert --repair SSD-STORAGE/SSD_STORAGE (this took a while to complete; PVE went offline but then came back online and storage was accessible). SSD-STORAGE/SSD_STORAGE are my volume group and pool names (check yours with lvs -a).

Proxmox Backup Server was good except for the removed method call I had to replace. Hopefully others won't run into that. But all is working. Super thankful for this support forum, as it got me back up and running. I appreciate everyone's contributions.
 
Just did a completely fresh install and tried this twice (both times on fresh installs).
The very first container created does not show as "on" in the tree, despite very clearly being on and operating (a quick way to verify the real state is shown below).

There are three workarounds:
- Create a second container; then both will magically appear "on".
- Delete and recreate the first container.
- Restart the server.
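If anyone wants to confirm the container really is running while the tree shows it off, the node's shell tells the truth (CT ID 100 is just an example):

pct status 100   # reports "status: running" regardless of what the tree shows
pct list         # actual status of all containers on the node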
 

Attachments

  • Screenshot 2025-08-06 182913.png
ChatGPT helped me fix it. Sharing it here in case someone else needs it.


First, you need to repair the broken thin-pool metadata, then bring it (and its VG) back online, and finally re-enable it in Proxmox:



1. Repair the thin-pool metadata


The “status:64” error means the pool’s metadata is corrupt or full. Repair it with:
lvconvert --repair /dev/pve/data
This reads the damaged metadata LV, writes a repaired copy to the pmspare LV, and swaps it into place.

2. Activate the VG and pool


Once the repair finishes without errors, activate your volume group and the pool itself:
vgchange -ay
lvchange -ay /dev/pve/data
This makes /dev/pve/data writable and available again.

You can confirm it's active when lvs shows a (active) as the fifth character of the Attr string for "data":
lvs -o lv_name,lv_attr
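For illustration only (names will differ), an active, writable thin pool typically reports something like:

  LV   Attr
  data twi-aotz--

Here t means thin pool, w writable, and the a in the fifth position means active.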

3. Re-enable the storage in Proxmox


Finally, clear the “disabled” flag so Proxmox will use it:
pvesm set local-lvm --disable 0
systemctl restart pvedaemon pveproxy pvestatd
This flips local-lvm back on in /etc/pve/storage.cfg and reloads the storage daemons.
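For reference, the stock local-lvm entry in /etc/pve/storage.cfg looks roughly like this once the disable flag is cleared (defaults from a standard install; yours may differ):

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images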

Verify with:
pvesm status
You should now see local-lvm listed as active again.
Thanks, this worked for me!
 
Just hopping in to say I upgraded from 8.4.1 to 9 and it wiped my EFI partition. Unlike ulistermclane, I had a default ext4 LVM partition.
I believe I'm experiencing this same issue (unable to boot after upgrade; eventually it takes me to BIOS). Proxmox is installed on an ext4 partition on an NVMe. I don't know how to resolve this. Please help.

Worth noting, using the Proxmox installation ISO version 9, and choosing Rescue Boot, it boots me into my local Proxmox.
 
I get the following warning from pve8to9:


WARN: The matching CPU microcode package 'intel-microcode' could not be found! Consider installing it to receive the latest security and bug fixes for your CPU.
apt install intel-microcode


But if I try to install it, I get:
apt install intel-microcode
E: Unable to locate package intel-microcode

Edit: I had to add non-free-firmware to the repository.
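For anyone else hitting this on trixie: the Debian entry needs the non-free-firmware component. A sketch, assuming the default deb822 file name (/etc/apt/sources.list.d/debian.sources; yours may be named differently):

# extend the Components line in the .sources file:
Components: main contrib non-free-firmware
# then:
apt update
apt install intel-microcode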
 
Everything is running fine for me, but I don't see a difference in disk I/O performance between ZFS direct I/O enabled and disabled. Is PVE overriding this?

This is how I'm setting (or at least I think I'm setting) ZFS direct I/O to enabled:
Code:
zfs set direct=always rpool

Then, within my ZFS pool named rpool, I create a file and run fio with random read/write, once with direct=0 and once with direct=1, and the results are about the same:
Code:
apt install sysstat -y
apt install fio -y

rm /dev/zvol/rpool/data/test.file
fio --filename=/dev/zvol/rpool/data/test.file --name=sync_randrw --rw=randrw --bs=4M --direct=0 --sync=1 --numjobs=1 --ioengine=psync --iodepth=1 --refill_buffers --size=8G --loops=2 --group_reporting

rm /dev/zvol/rpool/data/test.file
fio --filename=/dev/zvol/rpool/data/test.file --name=sync_randrw --rw=randrw --bs=4M --direct=1 --sync=1 --numjobs=1 --ioengine=psync --iodepth=1 --refill_buffers --size=8G --loops=2 --group_reporting

The reason for running it both ways is that I don't understand whether fio is ZFS-aware, and whether it might bypass that setting.

So then I did a couple of other things:
1. rsync on host
Code:
zfs set direct=always rpool
rsync -aP  /dev/zvol/rpool/data/test_random_disk.qcow2 /dev/zvol/rpool/data/test_random_disk_io_direct_enabled.qcow2

zfs set direct=disabled rpool
rsync -aP  /dev/zvol/rpool/data/test_random_disk.qcow2 /dev/zvol/rpool/data/test_random_disk_io_direct_disabled.qcow2

2. Copy/Paste large data file on Windows
Code:
fsutil file createnew C:\test_disk.dat 8589934592
Copy/paste using UI (don't do it twice, standby cache will cheat)

I also tried rebooting and repeating, no change.
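One thing I'd check (a guess, not a diagnosis): whether the property actually took effect on the dataset being tested. Also note that with direct=always, ZFS attempts direct I/O for properly aligned requests even when the application never sets O_DIRECT, so the direct=0 and direct=1 fio runs may well take the same path:
Code:
zfs get direct rpool            # should report "always" after the set
zfs get -r direct rpool/data    # children inherit unless overridden locally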
 
Hello, after the update to V9 and a reboot I got this error: "TASK ERROR: activating LV 'vBIG/vBIG' failed: Check of pool vBIG/vBIG failed (status:64)". Any hints?

This was the key:
lvconvert --repair vBIG/vBIG
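If the pool doesn't come back up on its own after the repair, explicitly activating it (the same pattern as the local-lvm steps earlier in this thread) may help:

vgchange -ay vBIG
lvchange -ay vBIG/vBIG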
 
Successful upgrade here. Two-node cluster + Q-device. In-place upgrade with the no-subscription repo, as it's used in a homelab. All ZFS pools for storage. Minor bumps during the upgrade process, but fairly smooth.

The only problem I haven't been able to solve: I had WebAuthn set up as MFA for my GUI users, and it worked on 8.4 for months. However, with the upgrade I'm now getting a 401 auth error.

I've taken the following steps:
1. Confirmed that original settings were still in place.
2. Confirmed that the security certificate is still valid.
3. Deleted and reentered my Webauthn passkey.
4. Tried different formats for Webauthn Name/Origin/ID.

No luck so far. Definitely not a deal-breaker, but if anyone's faced a similar issue, I'd love to hear whether you've found a solution.
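In case it helps with debugging: as far as I know, the WebAuthn relying-party settings live in /etc/pve/datacenter.cfg, so comparing them against the exact hostname in the browser's address bar is a quick sanity check (the values below are made-up examples):

grep webauthn /etc/pve/datacenter.cfg
# e.g. webauthn: id=pve.example.com,origin=https://pve.example.com:8006,rp=pve.example.com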
 
I believe I'm experiencing this same issue (unable to boot after upgrade; eventually it takes me to BIOS). Proxmox is installed on an ext4 partition on an NVMe. I don't know how to resolve this. Please help.

Worth noting, using the Proxmox installation ISO version 9, and choosing Rescue Boot, it boots me into my local Proxmox.
I solved my issue by:
  1. Booting into Rescue Mode with the Proxmox Installation ISO (version 9) which booted into my local Proxmox install.
  2. Running proxmox-boot-tool init /dev/nvme0n1p2
    1. /dev/nvme0n1p2 is my EFI boot partition
  3. Reboot
Essentially, my boot partition needed to be initialized. I'm not quite sure why it didn't happen automatically during the upgrade.
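For anyone in the same spot: proxmox-boot-tool status is read-only and shows which ESPs are configured and how they boot, while refresh re-copies kernels and bootloader config onto all configured ESPs:

proxmox-boot-tool status
proxmox-boot-tool refresh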
 
I solved my issue by:
  1. Booting into Rescue Mode with the Proxmox Installation ISO (version 9) which booted into my local Proxmox install.
  2. Running proxmox-boot-tool init /dev/nvme0n1p2
    1. /dev/nvme0n1p2 is my EFI boot partition
  3. Reboot
Essentially, my boot partition needed to be initialized. I'm not quite sure why it didn't happen automatically during the upgrade.
This was going to be my next step. I tried it before I made my post, but my ISO did not have the grub packages installed, so I just shut it down and decided I would come back to it later.
 
We tried to make those checks as safe as possible so this should not cause issues.

A bit of background - currently systems:
* having root on ZFS or BTRFS
* booting using UEFI (not legacy bios boot)
* not having secure-boot enabled
use systemd-boot for booting
`proxmox-boot-tool status` should provide some helpful information

Additionally, the `systemd-boot` package got split up a bit further in trixie, and proxmox-boot-tool only needs `systemd-boot-tools` and `systemd-boot-efi`. The `systemd-boot` meta-package is currently incompatible (it tries updating the EFI partition despite it not being mounted, which causes an error upon upgrade).

I hope this helps!

Thanks for this clarification, @Stoiko Ivanov. :)
I'm using ZFS with UEFI, and secure boot disabled, and SystemD-Boot is used on my nodes instead of Grub, so this applies to me.
I'm on the latest version of PVE 8 in the enterprise repo.

I appear to have the full SystemD boot package installed:

Bash:
root@andromeda2:~# apt search systemd-boot
Sorting... Done
Full Text Search... Done
proxmox-kernel-helper/stable,now 8.1.4 all [installed]
  Function for various kernel maintenance tasks.

pve-kernel-helper/stable 7.3-4 all
  Function for various kernel maintenance tasks.

systemd-boot/stable-security,now 252.38-1~deb12u1 amd64 [installed]
  simple UEFI boot manager - tools and services

systemd-boot-dbgsym/stable 252.12-pmx1 amd64
  debug symbols for systemd-boot

systemd-boot-efi/stable-security,now 252.38-1~deb12u1 amd64 [installed]
  simple UEFI boot manager - EFI binaries

I'm still a bit confused about this. I looked at the wiki ( https://pve.proxmox.com/wiki/Upgrad...ation_automatically_and_should_be_uninstalled ) and understand that the meta-package needs to be uninstalled, but I'm not sure how to do that on PVE 8 without breaking PVE 8.

The Wiki:

Systemd-boot meta-package changes the bootloader configuration automatically and should be uninstalled


With Debian Trixie the systemd-boot package got split up a bit further into systemd-boot-efi (containing the EFI binary used for booting), systemd-boot-tools (containing bootctl) and the systemd-boot meta-package (containing hooks which run upon upgrades of itself and other packages and install systemd-boot as bootloader).

As Proxmox systems usually handle the installation of systemd-boot-efi as bootloader using proxmox-boot-tool, the meta-package systemd-boot should be removed. The package was automatically shipped for systems installed from the PVE 8.1 to PVE 8.4 ISOs, as it contained bootctl in bookworm.

If the pve8to9 checklist script suggests it, the systemd-boot meta-package is safe to remove unless you manually installed it and are using systemd-boot as a bootloader. Should systemd-boot-efi and systemd-boot-tools be required, pve8to9 will warn you accordingly.

The pve8to9 script, though, doesn't warn me about SystemD-Boot.
INFO: Checking bootloader configuration...
SKIP: not yet upgraded, systemd-boot still needed for bootctl
I'm not sure if that means I'm okay to upgrade or not. This is really confusing.
 
Ran the upgrade on a single host. Almost everything is fine. If I run pve8to9 --full after the upgrade I get the warning below; running pve8to9 --full prior to the upgrade returned no warning.
Code:
WARN: Found '3' RRD files that have not yet been migrated to the new schema.
         pve-storage-9.0/node2/nfs-storage1
         pve-storage-9.0/node2/nfs-storage1
         pve-storage-9.0/node2/nfs-storage1
        Please run the following command manually:
        /usr/libexec/proxmox/proxmox-rrd-migration-tool --migrate
Running the suggested /usr/libexec/proxmox/proxmox-rrd-migration-tool --migrate returns:
Code:
Migrating RRD metrics data for nodes…
Migrated metrics of all nodes to new format
Migrating RRD metrics data for storages…
Migrated metrics of all storages to new format
Migrating RRD metrics data for virtual guests…
Using 6 thread(s)
No guest metrics to migrate
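After that, re-running the checker should confirm the warning is gone:
Code:
pve8to9 --full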
 
term.log attached.

The sequence that hung is the second-to-last in the log.
That was not the right part of the log (it seems to have been an upgrade to 8.3/8.4 packages?), but it also looks like it got interrupted at the grub install stage, which is peculiar.
 
I'm using ZFS with UEFI, and secure boot disabled, and SystemD-Boot is used on my nodes instead of Grub, so this applies to me.
Yes
I appear to have the full SystemD boot package installed:
Yes - because you're still on PVE-8 (bookworm) - I thought that the following part should explain that:
With Debian Trixie the systemd-boot package got split up a bit further into systemd-boot-efi (containing the EFI-binary used for booting), systemd-boot-tools (containing bootctl) and the systemd-boot meta-package (containing hooks which run upon upgrades of itself and other packages and install systemd-boot as bootloader).

which is also why the pve8to9 script does not warn you before upgrading to pve-9 (trixie):
INFO: Checking bootloader configuration...
SKIP: not yet upgraded, systemd-boot still needed for bootctl
It will warn you after upgrading to 9 (at that point the system will have `systemd-boot-tools` and `systemd-boot-efi` (both of which we need) and `systemd-boot` (which you want to get rid of) installed).
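Concretely, once you are on 9 that boils down to something like this (a sketch; go by what pve8to9 actually reports on your node):

apt install systemd-boot-tools systemd-boot-efi   # the parts proxmox-boot-tool needs
apt remove systemd-boot                           # drop only the meta-package
proxmox-boot-tool status                          # confirm the boot setup still looks sane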

I'm not sure if that means I'm okay to upgrade or not. This is really confusing.
Thanks for the feedback - much appreciated (the clearer our guides are the smoother the upgrades for our community will be) - any suggestions how to improve that part? - Thanks!
 