[SOLVED] VM doesn't boot when "Windows" is selected for OS type

Breymja

Hello,
my Windows VM no longer boots when "Windows" is selected as the OS type in the options; I only get the error below.
(This started suddenly a few months ago. I didn't need the VM until now, so I thought it might be a temporary bug that would be fixed in the meantime.)

TASK ERROR: start failed: command '/usr/bin/kvm -id 101 -name server-03 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=152c9b9c-2732-4096-a9a6-fe2a829cb335' -drive 'if=pflash,unit=0,format=raw,readonly,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=131072,file=/tmp/101-ovmf.fd' -smp '8,sockets=1,cores=8,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/101.vnc,password -no-hpet -cpu 'host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt' -m 12768 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=ab8eee4b-3ec1-42c4-a920-8daf270c9b71' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -chardev 'socket,path=/var/run/qemu-server/101.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:205ef1ae0b3' -drive 'if=none,id=drive-ide0,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/zvol/rpool/data/vm-101-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=EE:B9:3A:50:27:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -machine 'type=pc+pve0' -global 'kvm-pit.lost_tick_policy=discard'' failed: got timeout

The VM is still displayed as "running" in Proxmox.
Trying to connect via the VNC viewer yields:

VM 101 qmp command 'change' failed - unable to connect to VM 101 qmp socket - timeout after 599 retries
TASK ERROR: Failed to run vncproxy.

Despite being displayed as running, Remote Desktop doesn't work either.

Changing the OS type to "Other" makes the VM start and work again.

The same happens if I simply create a new default VM with the Windows OS type: it doesn't boot. The moment I change it to "Other", it does.
Can I somehow find out what's wrong? Is this a known issue?
 
What hardware are you using? Especially which CPU?

Could you try starting the VM with "Windows" as the type, but manually removing the "hv_" flags one by one, to see if they are the cause of the issue? E.g.:

Code:
# qm showcmd 101 --pretty > temp.sh
# $EDITOR temp.sh   # editor of your choice; remove the hv_* entries from the "-cpu" option
# sh temp.sh
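
For reference, with the failing command line from the first post, removing all hv_* flags at once would turn the -cpu option into the following (a sketch; for one-by-one testing you would drop a single hv_* entry per attempt instead):

Code:
# before:
-cpu 'host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt'
# after, all hv_* entries removed:
-cpu 'host,+kvm_pv_eoi,+kvm_pv_unhalt'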
 
Hm, the 9900K should support all Hyper-V enlightenments (the hv_ flags). Very mysterious.

Could you post your VM config (/etc/pve/qemu-server/<vmid>.conf) and the output of 'pveversion -v'?

Alternatively, you can of course just run your VMs with ostype=other, but keep in mind that performance might not be optimal for Windows guests then.
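
For reference, the OS type can also be switched on the CLI instead of the GUI; e.g. for VM 101 from the error above:

Code:
# qm set 101 --ostype other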
 
Hello @Stefan_R,
as it worked from when I first installed Proxmox last June until around February this year, I'd love to get it working again.
I've also checked that hardware virtualization is turned on, which it is (just in case).

Here is the requested data:
root@server-01 ~ # pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.0.15-1-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-3
pve-kernel-helper: 6.2-3
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-7
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

and

agent: 1
bios: ovmf
bootdisk: scsi0
cores: 8
cpu: host
ide0: none,media=cdrom
memory: 32768
name: server-03
net0: virtio=EE:B9:3A:50:27:A0,bridge=vmbr0,rate=60
numa: 0
onboot: 1
ostype: win10
parent: ecodmsinstalled
scsi0: local-zfs:vm-101-disk-0,discard=on,size=466G
scsihw: virtio-scsi-pci
smbios1: uuid=152c9b9c-2732-4096-a9a6-fe2a829cb335
sockets: 1
vmgenid: fc839250-3d5b-4de9-a934-73d64b5b2503

[ecodms]
agent: 1
bios: ovmf
bootdisk: scsi0
cores: 8
cpu: host
ide0: none,media=cdrom
memory: 32768
name: server-03
net0: virtio=EE:B9:3A:50:27:A0,bridge=vmbr0,rate=60
numa: 0
onboot: 1
ostype: win10
runningmachine: pc-i440fx-4.0
scsi0: local-zfs:vm-101-disk-0,discard=on,size=466G
scsihw: virtio-scsi-pci
smbios1: uuid=152c9b9c-2732-4096-a9a6-fe2a829cb335
snaptime: 1563660615
sockets: 1
vmgenid: 533a6a14-b79b-4e89-9faa-7734f0f729ad
vmstate: local-zfs:vm-101-state-ecodms

[ecodmsinstalled]
agent: 1
bios: ovmf
bootdisk: scsi0
cores: 8
cpu: host
ide0: none,media=cdrom
memory: 32768
name: server-03
net0: virtio=EE:B9:3A:50:27:A0,bridge=vmbr0,rate=60
numa: 0
onboot: 1
ostype: win10
parent: ecodms
runningmachine: pc-i440fx-4.0
scsi0: local-zfs:vm-101-disk-0,discard=on,size=466G
scsihw: virtio-scsi-pci
smbios1: uuid=152c9b9c-2732-4096-a9a6-fe2a829cb335
snaptime: 1563662361
sockets: 1
vmgenid: 533a6a14-b79b-4e89-9faa-7734f0f729ad
vmstate: local-zfs:vm-101-state-ecodmsinstalled

Side question:
From the version output I see it's running kernel 5.0. The boot folder suggests I have 5.0.21, 5.3.18, 5.4.41 and 5.4.44 installed as well. Is any manual intervention by me needed for it to actually boot the newest kernel? I've never seen that behaviour before. My Ubuntu and Debian machines simply use the newest kernel, keep the last few in case issues arise, and delete them when they are no longer required.

(Using ZFS + EFI on NVMe SSDs in RAID 1)
 
I've googled a lot, but couldn't find out how to change the kernel. The closest solution I found was "simply delete the unwanted kernel", but that was aimed at someone who didn't want to use the most recent kernel - my Proxmox is not using the most recent one for some reason. I'm also reluctant to remove the running kernel, since none of the other kernels have ever been booted on this system, so they are untested.

pve-efiboot-tool kernel remove 5.0.15-1-pve
doesn't work either, as the kernel was never manually selected, but picked automatically.
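
(For reference, the manually selected kernels can be listed with the same tool - assuming the installed version already ships the list subcommand:)

Code:
# pve-efiboot-tool kernel list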
 
Try removing the old kernels via apt, e.g. apt remove pve-kernel-5.0 pve-kernel-5.3, then reboot the node. That might also help with the Hyper-V issues.

If not, you could also try downgrading QEMU to see if that's the package at fault, e.g.: apt install pve-qemu-kvm=4.2.0-1
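
If the downgrade does help, you could hold the package so a routine upgrade doesn't immediately pull the newer version back in (a sketch using standard Debian tooling):

Code:
# apt install pve-qemu-kvm=4.2.0-1
# apt-mark hold pve-qemu-kvm    # revert later with: apt-mark unhold pve-qemu-kvm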
 
Hello @Stefan_R,
that does not remove the kernel currently running, so on reboot it boots into 5.0.15-1 again. Any other ideas?

Now the only kernels left are 5.0.15, 5.4.41 and 5.4.44.
 
You should be able to remove even the running kernel... Try apt remove pve-kernel-5.0.15-1-pve; if it fails, please post the full output.
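
After the reboot you can check which kernel is actually running:

Code:
# uname -r    # expected: 5.4.44-1-pve, the newest installed kernel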
 
Hello @Stefan_R,
well, it's been a few years, but I was taught that removing a running kernel is something you never do, under any circumstances.

I did that anyway and rebooted - it still booted into 5.0.15-1. This does not look healthy.

Using username "root".
Authenticating with public key XXXX
Linux server-01 5.0.15-1-pve #1 SMP PVE 5.0.15-1 (Wed, 03 Jul 2019 10:51:57 +0200) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Jun 22 11:37:00 2020

pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.0.15-1-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-3
pve-kernel-helper: 6.2-3
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.4.41-1-pve: 5.4.41-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-7
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

The weirdness of it booting a nonexistent kernel aside - I really believe there should be an easy way to choose which kernel to boot, or it should at least use the newest one.
 
Could you post the output of ls -la /boot (or wherever your boot partition is located)?
 
Hello @Stefan_R,
of course - no kernel in there, though:

root@server-01 ~ # ls -la /boot
total 108232
drwxr-xr-x 5 root root 15 Jun 22 11:34 .
drwxr-xr-x 18 root root 24 Jul 15 2019 ..
-rw-r--r-- 1 root root 237781 May 15 15:06 config-5.4.41-1-pve
-rw-r--r-- 1 root root 237790 Jun 12 08:18 config-5.4.44-1-pve
drwxr-xr-x 2 root root 2 Jul 15 2019 efi
drwxr-xr-x 5 root root 8 Jun 22 11:34 grub
-rw-r--r-- 1 root root 42095095 Jun 15 03:21 initrd.img-5.4.41-1-pve
-rw-r--r-- 1 root root 42093101 Jun 18 16:58 initrd.img-5.4.44-1-pve
-rw-r--r-- 1 root root 182704 Jun 25 2015 memtest86+.bin
-rw-r--r-- 1 root root 184840 Jun 25 2015 memtest86+_multiboot.bin
drwxr-xr-x 2 root root 10 Jun 18 16:58 pve
-rw-r--r-- 1 root root 4712082 May 15 15:06 System.map-5.4.41-1-pve
-rw-r--r-- 1 root root 4712811 Jun 12 08:18 System.map-5.4.44-1-pve
-rw-r--r-- 1 root root 11630976 May 15 15:06 vmlinuz-5.4.41-1-pve
-rw-r--r-- 1 root root 11635072 Jun 12 08:18 vmlinuz-5.4.44-1-pve
 
Oh, by the way, since I removed this kernel and rebooted, my VMs no longer start:
TASK ERROR: KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS.

//EDIT
Reinstalling the kernel and rebooting fixed that issue. I need my VMs to be online, so I had to reinstall it.
 
Are you sure you rebooted? What does uptime say?

Also, which boot method are you using? If UEFI with ZFS, try doing a pve-efiboot-tool refresh.

Edit: The KVM error is expected, as the KVM module got deleted along with the old kernel.
 
Yes, I'm sure - I've ordered a remote console (LARA) to watch it.

Okay, then I'm going to remove the kernel again and do a pve-efiboot-tool refresh; let's see what happens.
(Yes, UEFI with ZFS RAID 1 on NVMe)

//EDIT:
root@server-01 ~ # pve-efiboot-tool refresh
Running hook script 'pve-auto-removal'..
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
 
Nope, doesn't work:
Linux server-01 5.0.15-1-pve #1 SMP PVE 5.0.15-1 (Wed, 03 Jul 2019 10:51:57 +0200) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Jun 22 12:13:30 2020

root@server-01 ~ # uptime
12:17:02 up 1 min, 1 user, load average: 0.04, 0.02, 0.00

root@server-01 ~ # pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.0.15-1-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-3
pve-kernel-helper: 6.2-3
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.4.41-1-pve: 5.4.41-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-7
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

//EDIT:
Just to be clear - this is a default installation via the ISO with zero manual changes.
 
root@server-01 ~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
zd0 230:0 0 466G 0 disk
├─zd0p1 230:1 0 499M 0 part
├─zd0p2 230:2 0 99M 0 part
├─zd0p3 230:3 0 16M 0 part
└─zd0p4 230:4 0 465.4G 0 part
zd16 230:16 0 64.5G 0 disk
zd32 230:32 0 466G 0 disk
├─zd32p1 230:33 0 94M 0 part
├─zd32p2 230:34 0 954M 0 part
├─zd32p3 230:35 0 954M 0 part
└─zd32p4 230:36 0 464G 0 part
zd48 230:48 0 64.5G 0 disk
zd64 230:64 0 1M 0 disk
zd80 230:80 0 1M 0 disk
nvme1n1 259:0 0 953.9G 0 disk
├─nvme1n1p1 259:2 0 1007K 0 part
├─nvme1n1p2 259:3 0 512M 0 part
└─nvme1n1p3 259:4 0 953.4G 0 part
nvme0n1 259:1 0 953.9G 0 disk
├─nvme0n1p1 259:5 0 1007K 0 part
├─nvme0n1p2 259:6 0 512M 0 part
└─nvme0n1p3 259:7 0 953.4G 0 part

zd0 should be the Windows VM with its partitions, and zd32 the Linux VM. zd16 and zd48 are probably the Windows VM's two snapshot state volumes, and zd64 and zd80 the EFI disks of the two VMs. On the NVMe drives, p1 should be the BIOS boot partition, p2 the EFI system partition and p3 the ZFS partition. (Not totally sure, I'm on mobile at the moment.)
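
(When back at a terminal, the partition roles can be double-checked before changing anything - the EFI system partitions should report a vfat filesystem:)

Code:
# blkid /dev/nvme0n1p2
# blkid /dev/nvme1n1p2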
 
Alright, then try the following:

Code:
pve-efiboot-tool init /dev/nvme0n1p2
pve-efiboot-tool init /dev/nvme1n1p2
pve-efiboot-tool refresh
reboot

That should re-initialize your EFI partitions, write their IDs to '/etc/kernel/pve-efiboot-uuids' and thus load the new kernel on reboot.
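
To verify afterwards (a rough sketch; the exact refresh output may differ by version):

Code:
# cat /etc/kernel/pve-efiboot-uuids    # should now list the UUIDs of both ESPs
# pve-efiboot-tool refresh             # should now sync kernels instead of skipping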

No clue how the pve-efiboot-uuids file could have gone missing, unless you did some funky stuff manually.
 