Proxmox Virtual Environment 9.0 released!

Upgraded to Proxmox 9 on 2 servers without problems, following the wiki:
1. Threadripper 2990WX, ZFS NVMe boot, passthrough of the onboard 1 Gb Ethernet to pfSense, no problems; LXC works, shared volumes work, backup/restore to PBS 4 works.
2. Threadripper 3990X, ZFS NVMe boot, PCI GPU passthrough, no problems; LXC works, shared volumes work, backup/restore to PBS 4 works.
 
Hi,
I just updated 3 of my 4 homelab Proxmox hosts from 8.4.x to 9.0.6. All of these hosts had a user that was given Administrator-level privileges. One of those permissions was the ability to use the "drive_add" command in the VM monitor. In Proxmox 8 this command succeeded with the admin user, but now it states it is a root-only command. Is there any way to resolve this?
No, direct interactions via the HMP monitor are pretty much unrestricted and were therefore limited to root only. You can add/hotplug disks via qm set 123 -scsi0 .... That way Proxmox VE itself will also know about the disks for backups and other operations.
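For example, to create and attach a new disk or attach an existing volume (VM ID, storage name and size here are just placeholders):
Code:
# create a new 32G disk on storage 'local-lvm' and hotplug it as scsi1 of VM 123
qm set 123 -scsi1 local-lvm:32
# or attach an already existing volume instead
qm set 123 -scsi1 local-lvm:vm-123-disk-1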
 
I upgraded my older server, which has been upgraded from PVE 4 through to the current 8. It passed pve8to9 --full. I still ran into a problem and needed to install usrmerge. It was not a big deal, but it would have been easier if pve8to9 --full had checked the directories before I changed the APT sources; a warning there would have meant just running apt install usrmerge.
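For anyone hitting the same thing, a quick manual check before switching the sources (a rough sketch, the paths being the usual merged-/usr candidates):
Code:
# if these are real directories instead of symlinks into /usr, the system is not usr-merged yet
ls -ld /bin /sbin /lib
# install usrmerge while still on the old sources
apt install usrmerge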

Other than that all went well following the instructions.

Thanks
 
I don't know how you checked it, but on pve 9.0.6 it still doesn't work.
There were the following updates last night. Maybe this covers that fix?
Code:
Package Name    Installed Version    Available Version
librrd8t64        1.7.2-4.2+pve2        1.7.2-4.2+pve3
librrds-perl      1.7.2-4.2+pve2        1.7.2-4.2+pve3
rrdcached         1.7.2-4.2+pve2        1.7.2-4.2+pve3
 
Can you tell me when this update will be published so I can get it?
Patches are applied and will be part of the next release of the following packages:

qemu-server newer than 9.0.19
pve-container newer than 6.0.9
 
There were the following updates last night. Maybe this covers that fix?
Code:
Package Name    Installed Version    Available Version
librrd8t64        1.7.2-4.2+pve2        1.7.2-4.2+pve3
librrds-perl      1.7.2-4.2+pve2        1.7.2-4.2+pve3
rrdcached         1.7.2-4.2+pve2        1.7.2-4.2+pve3
These packages aren't available on my side yet.
 
Patches are applied and will be part of the next release of the following packages:

qemu-server newer than 9.0.19
pve-container newer than 6.0.9
I don't have any newer versions yet; nothing has been updated. I'm using the enterprise repositories.
 
I don't have any newer versions yet; nothing has been updated. I'm using the enterprise repositories.
The patch has just been applied; a newer version is not yet released. But once you have versions newer than the ones mentioned, they will include the fix.
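To see whether the fixed packages have already reached your repository, something like this works (no specific version implied):
Code:
apt update
apt-cache policy qemu-server pve-container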
 
I was testing the affinity rules, and they worked correctly.
I only noticed that with the node affinity rules, if a rule is disabled, then when trying to re-enable it the status does not update. I was only able to re-enable it by editing the "/etc/pve/ha/rules.cfg" file and removing the "disabled" line from that rule.
Thanks for the report! This is fixed with pve-manager >= 9.0.7 [0], which should be packaged soon for pve-no-subscription.

[0] https://git.proxmox.com/?p=pve-manager.git;a=commit;h=4008a6472ada2bbd0f21c15fd7f5b047d71fcbd3
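Once the package lands, you can check which version is installed with, for example:
Code:
pveversion -v | grep pve-manager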
 
We tried to make those checks as safe as possible so this should not cause issues.

A bit of background - currently, systems
* having root on ZFS or BTRFS
* booting via UEFI (not legacy BIOS boot)
* not having Secure Boot enabled
use systemd-boot for booting.
`proxmox-boot-tool status` should provide some helpful information.

Additionally, the `systemd-boot` package got split up a bit further in Trixie, and proxmox-boot-tool only needs `systemd-boot-tools` and `systemd-boot-efi` - the `systemd-boot` meta-package is currently incompatible (it tries to update the EFI boot loader despite the ESP not being mounted, which causes an error upon upgrade).

I hope this helps!
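As a rough illustration of what to check (removing the meta-package only applies if your setup matches the above):
Code:
# how is the system currently booting?
proxmox-boot-tool status
# which systemd-boot related packages are installed?
dpkg -l 'systemd-boot*'
# proxmox-boot-tool itself only needs systemd-boot-tools and systemd-boot-efi,
# so the meta-package can then be removed
apt remove systemd-boot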
Hi,

pve8to9 suggests removing the `systemd-boot` package, but my root is on a ZFS mirror. The BIOS is set to "UEFI and Legacy Boot" and apparently legacy boot is being used.

Code:
# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
B7A1-2FB3 is configured with: grub (versions: 6.5.13-6-pve, 6.8.12-11-pve, 6.8.12-13-pve, 6.8.12-8-pve)
B7A3-6204 is configured with: grub (versions: 6.5.13-6-pve, 6.8.12-11-pve, 6.8.12-13-pve, 6.8.12-8-pve)

What's the problem with legacy boot and root on ZFS?
 
I get the following warning from pve8to9:


WARN: The matching CPU microcode package 'intel-microcode' could not be found! Consider installing it to receive the latest security and bug fixes for your CPU.
apt install intel-microcode


But if I try to install it, I get:
apt install intel-microcode
E: Unable to locate package intel-microcode

Edit: Had to add non-free-firmware to the repository
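For reference, that means adding the non-free-firmware component to the Debian entries, roughly like this (classic one-line style shown, mirror URLs as examples; adjust if you use deb822 sources):
Code:
# /etc/apt/sources.list
deb http://deb.debian.org/debian trixie main contrib non-free-firmware
deb http://security.debian.org/debian-security trixie-security main contrib non-free-firmware
# then:
apt update && apt install intel-microcode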
I get the error

Package 'intel-microcode' has no installation candidate

How can I fix this?
 
Is anyone having problems importing VMs from ESXi using the built-in tool?
I have a Proxmox 9.0.6 cluster (no-subscription), and when I try to migrate a VM I still get this error:

`TASK ERROR: unable to create VM 101 - cannot import from 'esx4:ha-datacenter/esx4-local/Ubnt.Srv/Ubnt.Srv.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O raw /run/pve/import/esxi/esx4/mnt/ha-datacenter/esx4-local/Ubnt.Srv/Ubnt.Srv.vmdk zeroinit:/var/lib/pve/local-btrfs/images/101/vm-101-disk-0/disk.raw' failed: exit code 1`

It doesn't matter whether the storage on Proxmox is local or shared (iSCSI).

On Proxmox 8.4 everything works fine :)
 
Hi,
Is anyone having problems importing VMs from ESXi using the built-in tool?
I have a Proxmox 9.0.6 cluster (no-subscription), and when I try to migrate a VM I still get this error:

`TASK ERROR: unable to create VM 101 - cannot import from 'esx4:ha-datacenter/esx4-local/Ubnt.Srv/Ubnt.Srv.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O raw /run/pve/import/esxi/esx4/mnt/ha-datacenter/esx4-local/Ubnt.Srv/Ubnt.Srv.vmdk zeroinit:/var/lib/pve/local-btrfs/images/101/vm-101-disk-0/disk.raw' failed: exit code 1`

It doesn't matter whether the storage on Proxmox is local or shared (iSCSI).

On Proxmox 8.4 everything works fine :)
Please share the full task log and the output of pveversion -v. Is there anything in the system journal around the time the issue happens?
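For example, roughly (the time window is just an illustration):
Code:
pveversion -v
# system journal around the time of the failed import
journalctl --since "1 hour ago"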