Proxmox Virtual Environment 9.0 released!

Hi,

that should not happen if you properly enabled hardware virtualization in the BIOS, and if it does, it should affect all VMs using hardware virtualization. Please open a new thread pinging me with @fiona and providing more details, like the VM configuration (qm config <ID>) of an affected and a non-affected VM, replacing <ID> with the actual ID. Please also provide the output of pveversion -v and the system logs/journal from the current boot (or the boot you experienced the issue with, if you already rebooted), e.g. journalctl -b > /tmp/boot.txt.
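For reference, the requested commands in one place; the IDs 100 and 101 below are placeholders for an affected and a non-affected VM:
Code:
# Configuration of an affected and a non-affected VM (replace the IDs)
qm config 100
qm config 101

# Versions of the installed Proxmox VE packages
pveversion -v

# Journal of the current boot, written to a file you can attach to the thread
journalctl -b > /tmp/boot.txt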

Yes, I noticed the machines where I had to do this were performing terribly and had other issues. I ended up migrating the VMs to another host, reinstalling the host cleanly with v9, and migrating them back. I don't know what happened with these upgrades, I did the same thing as on the others. Either way, it's resolved.
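For reference, moving guests between cluster nodes is a single command per VM; the VM ID 100 and node name pve2 below are placeholders:
Code:
# Move a running VM to another node in the cluster, keeping it online
qm migrate 100 pve2 --online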
 
It appears that fwupd does not honor the EspLocation setting anymore (when added to the now empty fwupd.conf instead of daemon.conf).
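For reference, a minimal sketch of where the setting is expected to live in the new layout, assuming fwupd 1.9+ reads it from the [fwupd] group of /etc/fwupd/fwupd.conf (the path /boot/efi is only an example):
Code:
# /etc/fwupd/fwupd.conf (replaces the old daemon.conf)
[fwupd]
# Example ESP mount point; adjust to the actual ESP location
EspLocation=/boot/efi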
 
I had to add libata.force=noncq to /etc/default/grub because I started having a lot of SATA link errors; this happened in the log over and over:
Code:
[    4.209263] ata6: hard resetting link
[    4.672951] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    4.676669] ata6.00: configured for UDMA/133 (device error ignored)
[    4.707234] ata6: EH complete
[    4.745046] ata6.00: Read log 0x10 page 0x00 failed, Emask 0x1
[    4.745050] ata6: failed to read log page 10h (errno=-5)
[    4.745053] ata6.00: NCQ disabled due to excessive errors
[    4.745054] ata6.00: exception Emask 0x1 SAct 0x1 SErr 0x0 action 0x6
[    4.745057] ata6.00: irq_stat 0x40000001
[    4.745059] ata6.00: failed command: READ FPDMA QUEUED
[    4.745061] ata6.00: cmd 60/08:00:00:00:00/00:00:00:00:00/40 tag 0 ncq dma 4096 in
                        res 00/00:00:00:00:00/00:00:00:00:00/00 Emask 0x3 (HSM violation)
[    4.745071] ata6: hard resetting link
[    5.208941] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    5.234558] ata6.00: configured for UDMA/133 (device error ignored)

I found this by googling the log messages; there was one mention on the Arch Linux forums that it's caused by CONFIG_SATA_PMP being enabled in the kernel.
ChatGPT provided me with this noncq solution and explained it as follows (what a time, these can actually be really helpful):
On B450M (and most consumer AMD chipsets), the onboard SATA controller wasn’t designed for heavy multi-disk, server-like workloads.
When you throw 4 spinning HDDs at it, especially with NCQ and PMP code paths enabled, you can see:
  • HSM violation errors (host/device protocol desync)
  • Link resets (hard resetting link)
  • Automatic NCQ disablements after too many errors
  • Sometimes weird timing issues during boot
It’s not that the drives are “bad” — the chipset just has weaker SATA firmware/PHY handling than true server chipsets.
And with PMP compiled in (CONFIG_SATA_PMP=y), the kernel will probe and initialize features your hardware doesn’t handle well.
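For anyone wanting to try the same workaround, here is a minimal sketch of applying and verifying it on a GRUB-booted Proxmox host; the existing "quiet" option and the drive name sda are assumptions, adjust to your setup:
Code:
# /etc/default/grub -- append libata.force=noncq to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet libata.force=noncq"

# Regenerate the GRUB config, then reboot for the change to take effect
update-grub

# Check whether SATA Port Multiplier support is compiled into the running kernel
grep CONFIG_SATA_PMP /boot/config-$(uname -r)

# Queue depth of 1 means NCQ is effectively off for that drive
cat /sys/block/sda/device/queue_depth
On hosts booting with systemd-boot (e.g. ZFS on root with UEFI), the command line lives in /etc/kernel/cmdline instead and is applied with proxmox-boot-tool refresh.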
 
Updated my 5-node cluster (all mini PCs). All seems well so far. Two observations on the new dashboard metrics:

1) When switching between Maximum and Average views, nothing happens?!
2) Depending on the time span, I get NO data in Memory Pressure Stall or Network Traffic?!

(screenshots attached)
Thanks for posting these; it's baffling Proxmox didn't post their own preview of this on the wiki.
 
I am seeing something odd since updating to 9.0. Currently on 9.0.4.

I don't have both storage options like I used to on 8. LXCs are only showing this:
(screenshot attached)

While VMs are showing this:
(screenshot attached)

Any ideas what's going on? Before the upgrade, I didn't have local, just ext and local-lvm. Not sure where local is coming from, and it's the only option for new LXCs. Did repairing my LVM pool during the upgrade cause this? How do I get it back to how it was?
 
Any ideas what's going on?
Check your storage config and which content types are set. Those decide which storages are shown, depending on the use case.
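A quick way to check this from the CLI, for example:
Code:
# Datacenter-wide storage definitions and their content types
cat /etc/pve/storage.cfg

# Storages that can hold container root disks (what new LXCs can use)
pvesm status --content rootdir

# Storages that can hold VM disk images
pvesm status --content images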
 
Is it still possible to run CentOS 7 containers on Proxmox 9? With a fix?
Since CentOS 7 is EOL, I wouldn't run it in containers (way too little isolation for my paranoid mind) but in strictly secured VMs. Another reason for VMs: they will still work when CentOS 7 containers won't anymore (which will be the case at some future update). Even better would be to migrate your applications from CentOS 7 to a distribution which isn't EOL, e.g. Rocky Linux or AlmaLinux (they seem to be the next best thing).
 
Hello Proxmoxers and Proxmox team!

Thank you for all your hard work and this wonderful product!

I apologize in advance if this is not the thread for bug reports (which was my initial motive to finally open an account here ;-) ). TBH, it is not entirely about reporting bugs, but I would like to ask a few questions as well!

In v9 of Proxmox I noticed the following:

Datacenter --> HA: I add all VM resources I need and then go to the new HA --> Affinity rules --> HA Resources Affinity rules.
There I can define a rule and it works as expected (Proxmox re-schedules VMs according to the defined rule). However, when the rule is highlighted, the Edit button stays gray and cannot be clicked. The only way to edit a rule is to double-click on it. That's all on the bug side. Not much, but I felt obliged to report it ;-).

I have been looking into the Proxmox roadmap and some other nice features here and there, pretty much every time Proxmox releases a major version. I have a few questions:

- Project "Cattle and Pets" - would you be willing to consider including an "instance/flavor" into Proxmox mimicing Openstack/Openshift/A public cloud well-known feature? A user would be able to define the properties of a VM (vCPU, RAM, disk) instead of defining everything in the wizard all the time unless it is really necessary. I believe it would also benefit everyone in general, including mass-provisioning of VMs on a large Proxmox cluster (via tofu/terraform for example). This would, I believe, enable users to provision VMs faster.
- https://pve.proxmox.com/wiki/Automated_Installation - It is still not possible to define a management vlan during automated, mass Proxmox installation/provisioning which would make things easier during a large BM provisioning. Any thoughts on that?
- Dynamic VM scheduler (or whatever you wish to call it) ;-) Somehow I believe it will happen in Proxmox VE 9. Am I right? ;-)

Thank you for reading this and best wishes in your future endeavours!
 
Hi,
Datacenter --> HA: I add all VM resources I need and then go to the new HA --> Affinity rules --> HA Resources Affinity rules.
There I can define a rule and it works as expected (Proxmox re-schedules VMs according to the defined rule). However, when the rule is highlighted, the Edit button stays gray and cannot be clicked. The only way to edit a rule is to double-click on it. That's all on the bug side. Not much, but I felt obliged to report it ;-).
thank you for the report! Proposed fix: https://lore.proxmox.com/pve-devel/20250814143425.357868-1-f.ebner@proxmox.com/T/
 
Updated remotely today, didn't get any error. Decided to do a full reboot and the server didn't come back up. :(
Just an FYI.

Guess I'll be up late tonight!
 
Updated remotely today, didn't get any error. Decided to do a full reboot and the server didn't come back up. :(
Just an FYI.

Guess I'll be up late tonight!
I had a similar experience today on a remote backup server. Luckily I had IPMI on this one, logged in, and found it had booted into the BIOS. After fiddling around for a while I enabled CSM and Legacy boot, changed the boot drive to use the non-UEFI entry, and got it booted. I still need to figure this out, because it was definitely booting in UEFI mode previously, and my other primary server, which is identical hardware, is still booting in UEFI mode. I need to understand this better before I pull the trigger on updating the primary. It feels like going backwards to legacy mode might not be the best idea, but for now it's fine on my backup server.
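For comparing the two servers before touching the primary, a quick sketch of read-only checks that show how a host actually booted and what boot entries the firmware has (nothing below changes any configuration):
Code:
# Directory only exists when the system was booted in UEFI mode
ls /sys/firmware/efi

# Show which bootloader(s) Proxmox keeps in sync and how the ESPs are set up
proxmox-boot-tool status

# List the UEFI boot entries known to the firmware (UEFI boots only)
efibootmgr -v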