Proxmox VE 9.0 BETA released!

First of all, thanks for this awesome beta release — great work!

We've noticed one major issue with this version: VM boot times (using OVMF) have become extremely slow. It now takes about 1 minute and 30 seconds from pressing "Start" to seeing any console output or UEFI POST completion.

This behavior did not occur on PVE 8.x — VM startup was almost instant in comparison.

During this delay, CPU usage is consistently high (around 75–80%, regardless of how many cores are assigned), and RAM usage hovers at about 100 MB.
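For anyone who wants to reproduce the measurement: the CPU and RAM figures above can also be checked from the shell while the VM is stalled. A minimal sketch, assuming VMID 117 (as in the config further down) and the PID file that qemu-server writes when the guest starts:

Bash:
VMID=117
# inspect the running QEMU process for this VM; the PID file is created at VM start
ps -o pid,%cpu,%mem,etime,comm -p "$(cat /var/run/qemu-server/${VMID}.pid)"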

Below are two screenshots showing the VM state during this stall:

[Attachments: 2025-07-25_22-25_1.png, 2025-07-25_22-25_3.png]

And here's the config of one of the affected VMs — though we’ve confirmed the issue occurs on all VMs across all nodes in the cluster:

Code:
agent: 1
bios: ovmf
boot: order=virtio0
cores: 4
cpu: x86-64-v3
efidisk0: ceph-nvme01:vm-117-disk-1,efitype=4m,pre-enrolled-keys=1,size=528K
hotplug: disk,network
machine: q35
memory: 4096
name: <redacted>
net0: virtio=F2:AC:38:7B:9A:A6,bridge=vmbr0,tag=100
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-single
smbios1: uuid=8b8a56a6-0684-4b2b-a50a-69b23b789235
sockets: 1
tablet: 0
tags: <redacted>
vga: virtio
virtio0: ceph-nvme01:vm-117-disk-0,discard=on,iothread=1,size=20G
vmgenid: 3890b812-a1fc-4331-a4c0-49d88c7632a6

Some more information:

Bash:
root@node02 ~ # pveversion
pve-manager/9.0.0~11/c474e5a0b4bd391d (running kernel: 6.14.8-2-pve)
root@node02 ~ # qemu-system-x86_64 --version
QEMU emulator version 10.0.2 (pve-qemu-kvm_10.0.2-4)
Copyright (c) 2003-2025 Fabrice Bellard and the QEMU Project developers

Has anyone else seen this behavior? Could this be a known issue in the beta?
 
Support for snapshots as volume chains on Directory/NFS/CIFS storages (technology preview).
Am I correct in assuming that this works like the LVM implementation, i.e. that each snapshot taken via QCOW2's external snapshot mechanism creates a new QCOW2 file and links it into a chain?
I took a snapshot of a virtual machine with a QCOW2 disk on NFS storage that was freshly mounted with “Allow Snapshot as volume chain” enabled, but the snapshot still appears to be an internal snapshot, just as in the previous version.
Is this not available for NFS in 9.0 Beta1?

Code:
nfs: NFS
	export /NFS
	path /mnt/pve/NFS
	server 192.168.1.183
	content images
	prune-backups keep-all=1
	snapshot-as-volume-chain 1


Code:
# qemu-img info /mnt/pve/NFS/images/101/vm-101-disk-0.qcow2
image: /mnt/pve/NFS/images/101/vm-101-disk-0.qcow2
file format: qcow2
virtual size: 32 GiB (34359738368 bytes)
disk size: 5.73 MiB
cluster_size: 65536
Snapshot list:
ID  TAG   VM_SIZE  DATE                 VM_CLOCK        ICOUNT
1   TEST  0 B      2025-07-26 10:34:06  0010:40:02.217  --
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: /mnt/pve/NFS/images/101/vm-101-disk-0.qcow2
    protocol type: file
    file length: 32 GiB (3436530944
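One way to tell whether an external chain was actually created (a sketch, assuming the same NFS path as above): with a volume chain, additional QCOW2 files appear in the image directory and qemu-img can print the whole backing chain, whereas an internal snapshot only shows up under "Snapshot list:" of the single file, as seen above.

Bash:
# an external snapshot chain would add extra qcow2 files next to the base image
ls -l /mnt/pve/NFS/images/101/
# print every image in the backing chain (a single entry means no external chain)
qemu-img info --backing-chain /mnt/pve/NFS/images/101/vm-101-disk-0.qcow2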
 
Hi,
We've noticed one major issue with this version: VM boot times (using OVMF) have become extremely slow. It now takes about 1 minute and 30 seconds from pressing "Start" to seeing any console output or UEFI POST completion.

This behavior did not occur on PVE 8.x — VM startup was almost instant in comparison.
This is a known issue when the proper cache mode is not used, but Proxmox VE 9 should set the proper cache mode for the EFI disk: https://git.proxmox.com/?p=pve-stor...3cb0c3398c9fc19d305d9c36a74a4797715d009e#l564
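A quick way to check which cache mode actually ends up on the generated QEMU command line for the EFI disk (a sketch, assuming VMID 117; the exact option layout differs between qemu-server versions):

Bash:
# print the generated start command and show the options around the EFI disk
qm showcmd 117 --pretty | grep -i -A 3 efidisk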

efidisk0: ceph-nvme01:vm-117-disk-1,efitype=4m,pre-enrolled-keys=1,size=528K
What do you get when you query the cache mode for the image?
Code:
rbd config image get vm-117-disk-1 rbd_cache_policy

Is this a PVE-managed or an external cluster? Do you maybe override the rbd_cache_policy in your Ceph client configuration or somewhere?
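For the last question, a minimal sketch of how one might look for such an override on the client side (assuming the default Ceph config locations of a PVE-managed setup; an external cluster may keep its client configuration elsewhere):

Bash:
# look for rbd cache settings overridden in the Ceph client configuration
grep -Rn rbd_cache /etc/ceph/ceph.conf /etc/pve/ceph.conf 2>/dev/null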