#Edit From Update #2: Before the issue started, I was able to reboot a VM I was experimenting with normally, but after four or so reboots it got stuck at "Booting from Hard Disk...", as did any other VM that unfortunately had to be rebooted since. Does that ring any bells? The nodes haven't been restarted yet.
Hello and thank you in advance.
Ten days ago I upgraded the cluster to version 7 using the GUI update facility.
Apart from a simple corosync failure, everything went well and normal operation was re-established shortly.
Fast forward to yesterday.
While handling a task that required work on an Ubuntu VM, namely enlarging its hard drive, the VM stopped mid-restart and remained stuck on the following message:
SeaBIOS (version rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org)
Machine UUID dd36cff9-f00d-4a9e-9340-b194885ad1e1
Booting From Hard Disk...
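For context, the enlargement itself was done the usual way; from the CLI the equivalent would be something like the following (the +10G increment is just an example value, 401 is the VMID from the config below):

```shell
# Grow the virtual disk scsi0 of VM 401 by 10G (example increment).
# This only enlarges the block device; the partition table and
# filesystem inside the guest still have to be grown separately.
qm resize 401 scsi0 +10G
```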
No matter what I tried - from enabling cache on the hard disk (scsi0) as writethrough, to selecting a different processor type (the default is kvm64), to OVMF BIOS, to the default controller type - the VM either ended up unbootable or stuck on the message above.
Even changing the boot order under Options to CD-ROM and trying to load a live image fails: the live image begins to load but keeps retrying indefinitely.
I would like to add that I also ran into the same behavior when creating a brand-new VM - stuck on "Booting from Hard Disk...".
As our configs are based on no hard-disk cache, it was on the new VM that I found it will load a live CD if cache=writethrough is active. Unfortunately, that didn't help with the VM where it all started...
In the case of the new VM, the installation does proceed (only if cache=writethrough is enabled) but seems to run forever. The live log has shown "Configuring grub-pc (amd64)" for hours now. When I expand that log I can see it trying to resync the VMMouse driver and trying to resolve the guest machine's domain (Server returned error NODOMAIN, mitigating potential DNS violation DVE-).
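For reference, the changes I cycled through can also be applied from the CLI; a rough sketch of the equivalents (VMID 401, values as examples):

```shell
# Try writethrough cache on the existing disk (same volume, new cache mode).
qm set 401 --scsi0 zpool3:vm-401-disk-0,cache=writethrough

# Try a different CPU type instead of the kvm64 default.
qm set 401 --cpu host

# Put the CD-ROM (ide2) first in the boot order to test a live image.
# Quoted because of the semicolons.
qm set 401 --boot 'order=ide2;scsi0;net0'
```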
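The NODOMAIN message looks like systemd-resolved output; inside the live environment the lookup can be reproduced with something like this (the hostname is a placeholder, not the actual guest name):

```shell
# Ask the stub resolver for the guest's own name
# (replace myguest.example with the real hostname/domain).
resolvectl query myguest.example

# Show which upstream DNS servers the guest is actually using.
resolvectl status
```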
My pveversion output:
proxmox-ve: 7.0-2 (running kernel: 5.4.128-1-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.4: 6.4-5
pve-kernel-5.11.22-3-pve: 5.11.22-6
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-10
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.8-1
proxmox-backup-file-restore: 2.0.8-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
And the critical VM's config:
acpi: 1
bios: seabios
boot: order=scsi0;ide2;net0
cores: 4
cpu: kvm64
description: Restart Authorization -> Manias Dhmos8enhs%0A%0AVM INCLUDES%3A%0A192.168.10.216%0AUBUNTU SERVER%0A BACKEND JAVA%0A MYSQL%0A PHPMYADMIN%0A MYDATA API JAVA%0A NGINX
ide2: none,media=cdrom
kvm: 1
memory: 4096
name: test-server
net0: e1000=C2:96:FA:C3:C5:1E,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: zpool3:vm-401-disk-0,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=989fea08-1749-477f-9f95-039a98a7e540
sockets: 1
tablet: 0
unused0: zpool3:vm-401-disk-1
vmgenid: 082b6cfa-75bf-4fe8-b425-60abe20c4558
Thanks for reading through this huge wall of text; I really hope someone will chime in with an answer or insight to get the VM going again. I'm sure there is a link between the behavior of the critical VM and the new VM(s), but I can't understand what's really going wrong.