HELP NEEDED: Booting from Hard Disk...

bliapis

New Member
Jul 14, 2021



Hello and thank you in advance.

Ten days ago I upgraded the cluster to version 7 using the GUI update facility.
Apart from a simple corosync hiccup, all went well and normal operation was re-established shortly.

Fast forward to .. yesterday.

While working on a task on an Ubuntu VM, which involved enlarging its hard drive, the VM got stuck during its restart and remained at the following message:
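(For reference, the disk-enlargement step corresponds to something along these lines on the CLI; the +20G increment here is only an example, not necessarily the exact value that was used:)

qm resize 401 scsi0 +20G    # grow the scsi0 disk of VM 401; the partition/filesystem inside the guest still has to be grown separately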

SeaBIOS (version rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org)
Machine UUID dd36cff9-f00d-4a9e-9340-b194885ad1e1
Booting From Hard Disk...


No matter what I tried - enabling cache on the hard drive (scsi0) as writethrough, selecting a different processor type (default is kvm64), switching to OVMF BIOS, or using the default controller type - it all ended up either unbootable or stuck at the previous message.

Even changing the boot order under Options to the CD-ROM and trying to load a live image fails: the live image begins to load but keeps retrying indefinitely.

I would like to add that I also ran into the same behavior when creating a new VM - stuck on "Booting from Hard Disk...".
As our configs use no hard-disk cache, it was on the new VM that I found it will load a live CD if cache=writethrough is active. That didn't help with the VM where it all started, though...
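(In case it helps anyone trying to reproduce this: the cache and boot-order changes can also be applied from the CLI instead of the GUI. This is roughly the equivalent, using the disk spec from the config further down:)

qm set 401 --scsi0 zpool3:vm-401-disk-0,size=50G,cache=writethrough   # re-apply scsi0 with writethrough cache
qm set 401 --boot 'order=ide2;scsi0;net0'                             # try the CD-ROM (ide2) before the disk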

In the case of the new VM, the installation carries on (only if cache is enabled as writethrough), but it seems to run forever. The live log has shown "Configuring grub-pc (amd64)" for hours now. When I expand said log I can see it trying to resync the VMMouse driver and trying to resolve the guest machine's domain (server returned error NODOMAIN, mitigating potential DNS violation DVE-).

My pveversion -v output:

proxmox-ve: 7.0-2 (running kernel: 5.4.128-1-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.4: 6.4-5
pve-kernel-5.11.22-3-pve: 5.11.22-6
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-10
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.8-1
proxmox-backup-file-restore: 2.0.8-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1

And the critical VM's config:

acpi: 1
bios: seabios
boot: order=scsi0;ide2;net0
cores: 4
cpu: kvm64
description: Restart Authorization -> Manias Dhmos8enhs%0A%0AVM INCLUDES%3A%0A192.168.10.216%0AUBUNTU SERVER%0A BACKEND JAVA%0A MYSQL%0A PHPMYADMIN%0A MYDATA API JAVA%0A NGINX
ide2: none,media=cdrom
kvm: 1
memory: 4096
name: test-server
net0: e1000=C2:96:FA:C3:C5:1E,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: zpool3:vm-401-disk-0,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=989fea08-1749-477f-9f95-039a98a7e540
sockets: 1
tablet: 0
unused0: zpool3:vm-401-disk-1
vmgenid: 082b6cfa-75bf-4fe8-b425-60abe20c4558

Thanks for reading through this huge wall of text. I really hope someone will chime in with an answer or some insight to get the VM going again. I'm sure there is a link between the behavior of the critical VM and the new VM(s), but I can't understand what's really going wrong.
 
Update:
It so happens that any VM that for some reason has to be stopped or rebooted will enter the same "Booting from Hard Disk..." state. This is catastrophic for no good reason.
 
Update2:
It seems that one server is unaffected. It performs nominally and can reboot/create VMs successfully.
All servers are up to date with the latest no-subscription versions.

On the affected servers, even trying to run a Linux live CD is slow and sluggish. For example, with an Ubuntu Desktop 21.04 image set to load first in the boot order, after the initial orange screen with a battery (I think) icon at the bottom, and then a black screen (which took some time to move past), it gets stuck at the black Ubuntu screen with the spinning circle in the middle and the "Ubuntu" text at the bottom.

Also, zpool status shows:

Some supported features are not enabled on the pool. The pool can still be used, but some features are unavailable. Enable all features using 'zpool upgrade'. Once this is done, the pool may no longer be accessible by software that does not support the features. See zpool-features(5) for details.

This is something I haven't tried before and am rather scared to do. I don't really want to wade into deeper water than I'm already in, because from what I have read it could break booting on the PVE host if it's not a UEFI installation. I checked with lsblk, and /dev/sdb2 does contain a 512 MB partition that should be the boot partition, but again this is all unexplored territory for me and I'd rather just fix the problem with booting the VMs instead of creating new issues.
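(For reference, checking how the host actually boots before touching the pool would look roughly like this; the lsblk flags are just an example, and the first and last commands are additional ideas rather than something I've already run:)

ls /sys/firmware/efi                             # this directory only exists if the node was booted in UEFI mode
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdb    # the ~512 MB partition (sdb2) mentioned above
proxmox-boot-tool status                         # reports the configured ESPs and whether the node uses UEFI or legacy boot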

#Edit: Before the issue started happening, I was able to reboot a VM I was experimenting with normally, but after 4 or so reboots it got stuck at "Booting from Hard Disk...", and so did any other VM that unfortunately had to be rebooted. Does that ring any bells? The nodes haven't been restarted yet.

Any ideas guys? I really am in dire straits here :/
 
Hi, I am experiencing the same issue, perhaps because the server has been upgraded to Proxmox 7 but hasn't been rebooted.

Notice you are still running an old kernel:

proxmox-ve: 7.0-2 (running kernel: 5.4.128-1-pve)
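You can see the mismatch between the running kernel and the newest installed one with something like this (paths are the usual Debian/PVE defaults):

uname -r             # kernel currently running (5.4.128-1-pve in your output)
ls /boot/vmlinuz-*   # kernels installed on disk; after the PVE 7 upgrade this should include a 5.11.x entry

If /boot contains a newer kernel than the one uname -r reports, the node is still running the pre-upgrade kernel and a reboot will switch it over.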

Did you solve the issue? Maybe by rebooting the server?

Regards,
 
Thank you!

I just did a 6-to-7 upgrade, as 6 is EOL,

and was stuck at this "Booting from Hard Disk..." message. You have saved me.

Rebooting the server fixed the issue.
 
Hi, I'm experiencing the same issue right now on Proxmox VE 8, but I tried restarting my node and still get the problem.

Only one of the VMs has this problem, but I need to get the files from it.

Can someone help me?
 
It was the same here, and indeed a reboot of the host fixed the issue.
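If the node is part of a cluster, you can usually move the VM you need the files from to another node before rebooting; roughly like this (the VM ID and target node name are placeholders):

qm migrate <vmid> <targetnode> --online   # live-migrate a running VM; needs shared storage, or add --with-local-disks for local disks

After that, rebooting the original host is no longer a problem for that VM.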
 
