All VMs no longer boot after upgrade to Proxmox 8

I just did this and landed again at "Booting from Hard Disk...", waiting endlessly. Both LVM and ZFS storage show this behavior.

I'm running out of ideas now... Is there any other Hail Mary I can try? Could paid support help me fix this?

FYI, this was the content of start-vm.sh:

Code:
root@proxmox:~# cat "/root/start-vm.sh"
/usr/bin/kvm \
  -id 200 \
  -name 'docker,debug-threads=on' \
  -no-shutdown \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/200.qmp,server=on,wait=off' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/200.pid \
  -daemonize \
  -smbios 'type=1,uuid=dedd51b8-827d-499b-b82a-816512699db6' \
  -smp '16,sockets=1,cores=16,maxcpus=16' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vnc 'unix:/var/run/qemu-server/200.vnc,password=on' \
  -cpu host,+kvm_pv_eoi,+kvm_pv_unhalt \
  -m 16384 \
  -object 'iothread,id=iothread-virtio0' \
  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
  -device 'vmgenid,guid=00180eb2-cea3-436c-a6e7-897355f3fdb6' \
  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
  -device 'qemu-xhci,p2=15,p3=15,id=xhci,bus=pci.1,addr=0x1b' \
  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
  -device 'usb-host,bus=xhci.0,port=1,vendorid=0x1a6e,productid=0x089a,id=usb0' \
  -device 'usb-host,bus=xhci.0,port=2,vendorid=0x18d1,productid=0x9302,id=usb1' \
  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
  -chardev 'socket,path=/var/run/qemu-server/200.qga,server=on,wait=off,id=qga0' \
  -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' \
  -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:6959e56c8c90' \
  -drive 'file=/dev/zvol/ssd/vm/vm-200-disk-0,if=none,id=drive-virtio0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
  -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,iothread=iothread-virtio0,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap200i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=82:7E:85:6E:45:35,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256,bootindex=300' \
  -machine 'type=pc+pve0'
 
I just did this and landed again at the "Booting from Hard Disk..." endlessly waiting. Both LVM and ZFS do this.
The earlier errors you posted were about the invocation of QEMU timing out, so I'd say getting this far is at least a bit of progress.

I also can't mount the dataset like you suggested:
Code:
root@proxmox:~# zfs mount ssd/vm/vm-201-disk-0
cannot open 'ssd/vm/vm-201-disk-0': operation not applicable to datasets of this type

Any way to force a mount of the dataset? I definitely need to access the data in order to make new VMs...
This is not a ZFS filesystem, but a virtual block device (zvol). You need to mount the corresponding partition under /dev/zvol/; it should be something like /dev/zvol/ssd/vm/vm-201-disk-0-part<N>.
Did you manage to mount the file system and check if it's okay?
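To make that concrete, here is a minimal sketch of building the partition path (the dataset name is taken from this thread; the partition number 1 is an assumption — list /dev/zvol/ssd/vm/ to see which partitions actually exist):

```shell
# Zvols appear as block devices under /dev/zvol/<pool>/<dataset>,
# with partitions exposed via a "-part<N>" suffix.
DATASET="ssd/vm/vm-201-disk-0"   # dataset name from this thread
PART=1                           # assumed partition number
DEV="/dev/zvol/${DATASET}-part${PART}"
echo "$DEV"

# Then, as root on the Proxmox host (not run here):
#   mkdir -p /mnt/vm201 && mount "$DEV" /mnt/vm201
```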
 
Yes, I mounted the ZFS filesystem and my files all seem to be okay. I have backups of my VMs and files, but I cannot start existing VMs or create new ones, on both ZFS and LVM storage...
 
The system has a total of 512GB of memory. The ZFS ARC is set to use a maximum of ~400GB. At the time of writing, about 90GB of RAM is free and unused on the Proxmox host.

The other hardware:

CPU: AMD Epyc 7402P (24 core)
Motherboard: Supermicro H11SSL-NC (board revision 2.0)
RAM: 8x Samsung 64GB M386A8K40CM2-CTD7Y 4DRx4 PC4-2666V LRDIMM ECC REG PC4-21300 DDR4
Boot SSD: Intel Optane 905p 380GB M.2 SSD (boot LVM partition)
+ a range of SATA connected hard drives and SSDs, all managed by ZFS.

These are the contents of /etc/modprobe.d/zfs.conf:

Code:
options zfs zfs_special_class_metadata_reserve_pct=15

options zfs zfs_max_recordsize=4194304

options zfs zfs_arc_min=268435456000
options zfs zfs_arc_max=397284474880

options zfs zfs_dirty_data_max=161061273600
options zfs zfs_dirty_data_max_max=171798691840
options zfs zfs_dirty_data_sync_percent=78

options zfs zfs_vdev_async_write_active_min_dirty_percent=80
options zfs zfs_vdev_async_write_active_max_dirty_percent=90
options zfs zfs_delay_min_dirty_percent=90


options zfs zfs_vdev_scrub_max_active=12

options zfs zfs_vdev_scrub_min_active=4

options zfs l2arc_write_boost=209715200

options zfs l2arc_write_max=209715200
options zfs l2arc_noprefetch=0
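For reference, the ARC byte values above decode cleanly with plain shell arithmetic (nothing ZFS-specific): zfs_arc_min is exactly 250 GiB and zfs_arc_max exactly 370 GiB (~397 GB decimal, hence the "~400GB" mentioned above).

```shell
# Convert the zfs.conf ARC limits from bytes to GiB.
GIB=$((1024 * 1024 * 1024))
echo "zfs_arc_min = $((268435456000 / GIB)) GiB"   # 250 GiB
echo "zfs_arc_max = $((397284474880 / GIB)) GiB"   # 370 GiB
```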
 
Hello Everyone,

I too have recently encountered this same issue with Proxmox VE 8. Despite having run it for several months without issues, I am experiencing this problem following a fresh install of Proxmox. I did make sure there aren't any pending/outstanding updates needing to be installed on Proxmox, prior to even creating any VMs. Interestingly, my pfSense VM boots up without any hitches. However, the Debian-based VM I have fails to boot and gets stuck at the "Booting from Hard Disk..." stage.

This issue arose after I performed a hard reboot of the Proxmox node: it hung while attempting to shut down the pfSense VM during a graceful reboot of the node. Note that the node didn't hang while shutting down the other VM during that graceful reboot; the only VM that caused the hang was the pfSense VM.

Prior to the reboot, I only had a pfSense VM and one Debian-based VM which were both running smoothly without any noticeable problems. The only change I made leading up to the reboot was to the network configuration on the node, although I don't believe this should affect the VMs' ability to boot.

I've attempted to resolve the issue by creating a new Debian-based VM and trying different configuration settings, including the same settings used on the pfSense VM, but to no avail.

As a newcomer to this community, I'd appreciate any guidance on what information would be most useful to share in order to troubleshoot this issue. I'm more than willing to provide any necessary details.

Thank you in advance for any assistance.
 
Hello Everyone,

I too have recently encountered this same issue with Proxmox VE 8. Despite having run it for several months without issues, I am experiencing this problem following a fresh install of Proxmox. I did make sure there aren't any pending/outstanding updates needing to be installed on Proxmox, prior to even creating any VMs. Interestingly, my pfSense VM boots up without any hitches. However, the Debian-based VM I have fails to boot and gets stuck at the "Booting from Hard Disk..." stage.
Could you please post the output of the following commands (replacing VMID_DEBIAN and VMID_PFSENSE with the respective VMID):
Code:
pveversion -v
qm config VMID_DEBIAN --current
qm config VMID_PFSENSE --current

Can you also try to create a new VM with an empty disk, without a network device and without an ISO, and start it? The expected behavior would be a boot loop (the VM repeatedly tries to boot from disk, does not find an OS, and reboots). Does this VM also hang at "Booting from hard disk..."?
 
Could you please post the output of the following commands [...]
I faced the same problem after upgrading from 7.4 to 8.1.
There was a network failure during the upgrade process.
After the reboot, the containers start normally, but the QEMU VMs do not. I tried using the 5.15.143-1-pve kernel; it didn't help.
 
There was a network failure during the upgrade process.


Then check the Proxmox OS system:

1. Check for old packages that remain
Code:
# proxmox7 to proxmox8 (deb11 to deb12), so no "*.deb11" packages should remain #
$> dpkg --list | grep deb11 | awk '{print $2}'

# If this returns any packages, remove them #
$> apt purge `dpkg --list | grep deb11 | awk '{print $2}'`

# Run the update again #
$> apt-get clean
$> apt-get autoclean
$> apt-get update
$> apt-get upgrade

2. Check installed package integrity
Code:
$> apt-get install debsums

# Run it and check for errors #
$> debsums -s

# Example: fixing a missing-files error reported by "debsums" #
$> apt-get reinstall <package>
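To illustrate what the dpkg pipeline in step 1 extracts, here it is run against a single fabricated line in `dpkg --list` format (the package name and version are invented for the example; the real command pipes actual dpkg output):

```shell
# Column 2 of `dpkg --list` output is the package name; grep keeps only
# lines whose version string contains "deb11".
echo 'ii  libexample1:amd64  1.2.3-1+deb11u2  amd64  example shared library' \
  | grep deb11 | awk '{print $2}'
# → libexample1:amd64
```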
 
Then check the Proxmox OS system: [...]
Thank you very much for the recommendations.
I reinstalled the system from scratch, and everything works as it should.

I wanted to point this out in case it helps someone:
my Supermicro server has an old E5-2667 v2; the installer from 8.1 does not work with the built-in graphics adapter, but 7.4 installs perfectly.
I installed 7.4, then upgraded it to 8.1, and so far everything is working.
 
