VM Reboot Issue - VM stuck on Proxmox start boot option screen

Hey everyone subscribed to this thread, I believe @sir4ndr3w's proposed solution is the way to go. I've switched a fair few of my VMs away from the Writeback cache mode, especially the ones I knew had the issue, and after almost a month (including Patch Tuesday reboots) they no longer get stuck and restart without a problem.

I'm not sure what the actual cause is, but perhaps the Windows Server best practices wiki article should be changed to recommend keeping the cache mode on Default, especially when the VM uses UEFI (OVMF) firmware?
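For anyone who wants to do the same from the host CLI instead of the GUI, something along these lines should work; 101, scsi0 and the volume/option names are only placeholders, so copy your own line from qm config first:

Code:
# show the current disk line, including any cache= setting
qm config 101 | grep scsi0
# re-set the disk with the same options minus cache=writeback;
# "Default (No cache)" in the GUI just means no cache= parameter is set
qm set 101 --scsi0 local-lvm:vm-101-disk-0,discard=on,size=64G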
 
Hey,
we have exactly the same problem. Some (UEFI) Windows Server VMs on Ceph storage sometimes get stuck at the "Proxmox" logo boot screen. Unfortunately, all of our VM disks are already on "Default (No cache)" mode, so this doesn't fix anything for us.
 
Yeah, still no dice for me. I wish I knew what the cause of this was, because I just have to sit here and wait until it decides to proceed with booting the operating system. Surely the solution isn't to use BIOS booting...

Just sits here at max CPU usage:

[screenshot: VM summary with CPU maxed out]

Hardware settings and boot logo stuck:

[screenshots: VM hardware settings and the stuck boot logo]
 
I recently had this high CPU usage, and editing my processor settings to enable NUMA support fixed it for me. Still no idea why.
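For reference, that's the "Enable NUMA" checkbox in the VM's processor settings; from the CLI it should be something like this (101 is a placeholder VMID, and the change only applies after a full stop/start, not a reboot from inside the guest):

Code:
qm set 101 --numa 1
# power-cycle the VM so the new CPU layout is picked up
qm shutdown 101
qm start 101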
 
I really wish this was the solution, but a lot of my VMs already have NUMA support enabled and it still happens. The VMs that use BIOS boot instead of UEFI never experience this problem, whether they have NUMA enabled or not.

So far the only real workaround I've had is installing this, which will forcefully stop and start the VM stuck in that state. Since it usually happens overnight when nobody is around, this at least ensures the VM is up and running when the day starts (script). Not ideal, but I have no other ideas on what to do or what kind of troubleshooting needs doing.
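I don't know what the linked script does internally, but as a rough sketch of the same idea, something like this run from cron on the node could work, assuming the QEMU guest agent is installed and enabled in the VM (101 is a placeholder VMID):

Code:
#!/bin/bash
# Sketch only: force-restart a VM whose guest agent never comes up after boot.
VMID=101

# Only act on VMs that Proxmox itself considers running.
qm status "$VMID" | grep -q running || exit 0

# If the agent answers, the guest has booted fine - nothing to do.
qm agent "$VMID" ping >/dev/null 2>&1 && exit 0

# Give it one more chance before the hard restart.
sleep 120
if ! qm agent "$VMID" ping >/dev/null 2>&1; then
    qm stop "$VMID" && qm start "$VMID"
fi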
 
Hi,

If you're using ZFS as your storage backend, make sure to leave some free space and never exceed roughly 85% pool usage. ZFS is a copy-on-write (COW) filesystem, so performance can degrade significantly when space gets tight. The two big resources to watch with ZFS are available RAM and free pool space; running out of either can lead to serious performance issues or even system failure.
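The stock ZFS tools already show where a pool stands; rpool below is just an example pool name:

Code:
# the CAP column shows how full each pool is
zpool list
# per-dataset breakdown of used vs. available space
zfs list -o space rpool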

Be careful with the disk sizes allocated to VMs if you're using thin provisioning on your storage: the system may crash when the physical storage runs out of space, even if there is still unused space in the virtually allocated disks.
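A quick way to keep an eye on that from the node (pool and storage names depend on your setup):

Code:
# Data% / Meta% of LVM-thin pools, e.g. pve/data on a default install
lvs
# usage summary of every storage configured in Proxmox
pvesm status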

Also, for the cache mode, it is recommended to use write-through, as it is safer in case of a power outage on the Proxmox server. This mode ensures that data is written directly to the physical disk, reducing the risk of data loss compared to other caching options like write-back.
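Switching a disk to write-through is the same kind of change as any other cache mode; placeholder VMID and volume again, and keep whatever other options your disk line already has:

Code:
qm set 101 --scsi0 local-lvm:vm-101-disk-0,cache=writethrough,size=64G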



 
A lot has changed since I initially posted this, but I only ever use the default storage options, and everyone else in this thread is using Ceph, not ZFS.
 
I'm facing the same issue too.
Most of the settings are default.
My VM is Debian 12.10 on PVE 8.4.1, using LVM-Thin storage with 50% of the disk free.
Every time I shut down and reboot the VM, I have to re-add the boot option via Boot Maintenance Manager > Boot Option > Add Boot Option (https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries) or boot from file.
- I have an EFI disk attached.
- Secure Boot is disabled.

Finally, the problem is solved:
remove the existing EFI disk and add a new one with the "Pre-Enroll keys" option unselected.
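For anyone who prefers the CLI, this should be roughly the equivalent (101 and local-lvm are placeholders; only the EFI vars disk is replaced, the OS disk is untouched, and the detached disk shows up as an unused disk you can then remove):

Code:
# detach the existing EFI disk
qm set 101 --delete efidisk0
# add a fresh one without the pre-enrolled Secure Boot keys
qm set 101 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0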
 