Virtual machines in Proxmox 9 randomly freeze during reboot

urog

New Member
Dec 6, 2025
Hi everyone, I recently installed Proxmox 9. While configuring several VMs, I had to restart them a few times. At one point I noticed that one of the machines had frozen, which surprised me, because I've never encountered anything like this before; I've been using Proxmox since version 5 in 2018.

Initially I suspected, and various forum threads seemed to confirm, that it was probably a disk issue. I checked the VM disks and the host disk... everything looked fine. However, if I shut the machine down and started it again, sometimes it would boot and sometimes it wouldn't.

I decided to test on a different host. I installed Proxmox 9 from a template, did a clean install of Trixie in a VM, and made 10 copies of it. Then I ran 10 shutdown/restart cycles on all the machines. In five of those 10 cycles, between one and three machines froze. On a third host, I installed Proxmox 8 from a template, copied the seed VM over from the version 9 host, and again made 10 copies. I ran 40 reboot cycles on these 10 copies, and none of them ever froze.

Version 8 is supported until next September, but I'd prefer to use version 9 right away, because what I'll be deploying will stay in place for two or three years. Has anyone else noticed anything similar? Any recommendations? Should I stick with version 8 or take the risk with version 9?
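
For anyone who wants to reproduce the test, a rough sketch of the shutdown/start cycle is below. The VMIDs 9000-9009 are placeholders for the ten clones, and the guest-agent ping at the end is just one way to spot a frozen guest automatically; it assumes qemu-guest-agent is installed in the guests and the agent option is enabled on the VMs, otherwise you have to check the consoles by hand.

Bash:
#!/bin/bash
# Reproduction sketch: cycle ten test clones through shutdown/start and
# report any guest that no longer responds. VMIDs 9000-9009 are placeholders.
VMIDS=$(seq 9000 9009)
CYCLES=10

for cycle in $(seq 1 "$CYCLES"); do
    for id in $VMIDS; do
        qm shutdown "$id" --timeout 120   # graceful shutdown, wait up to 2 minutes
        qm start "$id"
    done
    sleep 180                             # give the guests time to boot
    for id in $VMIDS; do
        # Needs qemu-guest-agent inside the guest and "agent: 1" on the VM
        if ! qm guest cmd "$id" ping >/dev/null 2>&1; then
            echo "cycle $cycle: VM $id is not responding (possible freeze)"
        fi
    done
done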
 
I've encountered this several times with EL9 VMs, on Proxmox 9 but also on Proxmox 8.

I haven't found a solution so far.
 
Hi,
please share the configurations of some affected VMs (qm config ID, replacing ID with the actual numerical ID) and the output of pveversion -v. What kind of guest OS and kernel is running inside the VMs?
 
Hi,
please share the configurations of some affected VMs (qm config ID, replacing ID with the actual numerical ID) and the output of pveversion -v. What kind of guest OS and kernel is running inside the VMs?
qm config 100
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
efidisk0: local:100/vm-100-disk-0.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
ide2: local:iso/debian-13.0.0-amd64-DVD-1.iso,media=cdrom,size=3900480K
memory: 1024
meta: creation-qemu=9.2.0,ctime=1764965128
name: trixie0
net0: virtio=02:00:00:61:76:59,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local:100/vm-100-disk-1.qcow2,iothread=1,size=10G
scsihw: virtio-scsi-single
smbios1: uuid=d424b9b6-95eb-40f4-a120-c391b852a973
sockets: 1
vmgenid: f4d6c179-1994-48b9-99c1-16c4ad5626d5

_______

qm config 101
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
efidisk0: local:101/vm-101-disk-0.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
ide2: local:iso/debian-13.0.0-amd64-DVD-1.iso,media=cdrom,size=3900480K
memory: 1024
meta: creation-qemu=9.2.0,ctime=1764965128
name: moodle
net0: virtio=02:00:00:61:76:59,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local:101/vm-101-disk-1.qcow2,iothread=1,size=10G
scsihw: virtio-scsi-single
smbios1: uuid=d424b9b6-95eb-40f4-a120-c391b852a973
sockets: 1
vmgenid: d6c87fef-e8b6-46aa-a0b8-e8c96aee4b4c

_____
qm config 500
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
efidisk0: local:500/vm-500-disk-0.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
ide2: local:iso/debian-13.0.0-amd64-DVD-1.iso,media=cdrom,size=3900480K
memory: 8192
meta: creation-qemu=9.2.0,ctime=1764965128
name: mcvFR
net0: virtio=02:00:00:61:76:59,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local:500/vm-500-disk-1.qcow2,iothread=1,size=128G
scsihw: virtio-scsi-single
smbios1: uuid=d424b9b6-95eb-40f4-a120-c391b852a973
sockets: 1
vmgenid: abfe5f40-3d64-4002-aa10-11faf3650f72
_____


pveversion -v
proxmox-ve: 9.1.0 (running kernel: 6.17.2-2-pve)
pve-manager: 9.1.2 (running version: 9.1.2/9d436f37a0ac4172)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.2-2-pve-signed: 6.17.2-2
proxmox-kernel-6.17: 6.17.2-2
amd64-microcode: 3.20250311.1
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx11
intel-microcode: 3.20250812.1~deb13u1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.4
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.0
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.3
libpve-rs-perl: 0.11.3
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-3
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.0-1
proxmox-backup-file-restore: 4.1.0-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.2
pve-cluster: 9.0.7
pve-container: 6.0.18
pve-docs: 9.1.1
pve-edk2-firmware: not correctly installed
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.0.8
pve-i18n: 3.6.5
pve-qemu-kvm: 10.1.2-4
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.1
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1

___
All VMs run kernel 6.12.57+deb13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.57-1 (2025-11-05) x86_64.
 
Hi,
please share the configurations of some affected VMs (qm config ID, replacing ID with the actual numerical ID) and the output of pveversion -v. What kind of guest OS and kernel is running inside the VMs?

I censored sensitive information.

Bash:
# qm config 141
agent: 1
boot: order=ide2;scsi0;net0
cipassword: **********
ciupgrade: 0
ciuser: xxx
cores: 8
cpu: host,flags=-md-clear;+pcid;+spec-ctrl;-ssbd;-hv-tlbflush
ide0: LSI-xxx:141/vm-141-cloudinit.qcow2,media=cdrom,size=4M
ide2: none,media=cdrom
ipconfig0: ip=xxx,gw=xxx
memory: 8192
meta: creation-qemu=9.2.0,ctime=1745741974
name: xxx
net0: virtio=xxx,bridge=xxx
numa: 0
ostype: l26
scsi0: LSI-xxx:141/vm-141-disk-0.qcow2,cache=writeback,discard=on,iothread=1,size=100G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=b79bb4ac-8f49-4959-999e-d1b44466d958
sockets: 1
sshkeys: xxx
vga: qxl
vmgenid: 7d58d360-7cfa-40f0-b212-cd5ab9e70215




Bash:
# pveversion -v
proxmox-ve: 9.1.0 (running kernel: 6.17.2-2-pve)
pve-manager: 9.1.1 (running version: 9.1.1/42db4a6cf33dac83)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.2-2-pve-signed: 6.17.2-2
proxmox-kernel-6.17: 6.17.2-2
proxmox-kernel-6.8: 6.8.12-17
proxmox-kernel-6.8.12-17-pve-signed: 6.8.12-17
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.4
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.0.15
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.3
libpve-rs-perl: 0.11.3
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-3
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.0-1
proxmox-backup-file-restore: 4.1.0-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.2
pve-cluster: 9.0.7
pve-container: 6.0.18
pve-docs: 9.1.1
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.0.8
pve-i18n: 3.6.4
pve-qemu-kvm: 10.1.2-4
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.0
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1


CPU is Intel Xeon E5-2696 v2 @ 2.50GHz (2 Sockets)
 
I'd be a bit surprised if it were the same issue, because the guests and BIOS configurations differ, and @ab-wer is affected on Proxmox VE 8 already.

When it's frozen, what do you see when you run qm status ID --verbose (replacing ID with the actual numerical ID)? What do you see in the VM console? Does it happen before the shutdown finishes or during the next boot? What does the CPU/RAM usage of the VM look like? Do you see anything in the system logs when you check after the next successful boot?

@urog can you reproduce the issue with a newly created VM too? Also when you use SeaBIOS?

@ab-wer if you can reproduce the issue somewhat reliably too, you might want to check whether it also happens when using OVMF as the BIOS.
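
Concretely, something along these lines would collect that information; VMID 100 is just an example here, and the journalctl time window is a guess you may need to adjust.

Bash:
# While the VM is frozen (replace 100 with the affected VMID):
qm status 100 --verbose > vm100-status.txt
# Host-side logs around the time of the freeze:
journalctl --since "1 hour ago" > host-journal.txt
# Inside the guest, after the next successful boot, the log of the previous boot:
journalctl -b -1 > guest-previous-boot.txt

# To test with SeaBIOS instead of OVMF (or back), while the VM is stopped:
qm set 100 --bios seabios
# qm set 100 --bios ovmf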
 
I'd be a bit surprised if it were the same issue, because the guests and BIOS configurations differ, and @ab-wer is affected on Proxmox VE 8 already.

When it's frozen, what do you see when you run qm status ID --verbose (replacing ID with the actual numerical ID)? What do you see in the VM console? Does it happen before the shutdown finishes or during the next boot? What does the CPU/RAM usage of the VM look like? Do you see anything in the system logs when you check after the next successful boot?

@urog can you reproduce the issue with a newly created VM too? Also when you use SeaBIOS?

@ab-wer if you can reproduce the issue somewhat reliably too, you might want to check whether it also happens when using OVMF as the BIOS.
Hello fiona, sorry for the late reply. I thoroughly searched the Proxmox and virtual machine logs and found nothing. The freezing occurs during reboot, with both migrated and fresh virtual machines. I'm currently on a few days off, but I'll be able to post some of the output you requested in January.