Windows VMs stuck on boot after Proxmox upgrade to 7.0

I'm fairly new to Proxmox: how do I install QEMU 7.0 on my Proxmox cluster running an Enterprise subscription?
The simplest method: add the test repository via the GUI and run an update, but DO NOT run the upgrade, otherwise you will install everything from testing. Then, via the CLI, run apt install pve-qemu-kvm and deactivate the test repository afterwards.
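A minimal CLI sketch of those steps, assuming a PVE 7.x node based on Debian Bullseye (adjust the suite name if yours differs):
Code:
# Temporarily enable the pvetest repository
echo "deb http://download.proxmox.com/debian/pve bullseye pvetest" > /etc/apt/sources.list.d/pvetest.list

# Refresh package lists, but do NOT run a full upgrade while pvetest is active
apt update

# Install only the newer QEMU package
apt install pve-qemu-kvm

# Deactivate the test repository again
rm /etc/apt/sources.list.d/pvetest.list
apt update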
 
Upgraded to 7 and still have the issue... Any other suggestions? On a reboot it just sits there stuck with less than 1% cpu doing nothing but showing a black screen. I've tried to use both a host and a kvm64 processor... both produced the same result.
 
Hello,

Just to make sure of the following:
1- When you upgraded to QEMU 7, did you stop and start the VM? This is required for the VM to actually run under the new version. (You can skip the stop/start by live-migrating the VM to another node; see the example commands below.)
2- Could you please provide the output of pveversion -v and the VM config (qm config <VMID>)?
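For reference, the commands in question (a sketch; <VMID> and <target-node> are placeholders):
Code:
# Fully stop and start the VM so it picks up the new QEMU binary
qm stop <VMID>
qm start <VMID>

# Alternative: live-migrate the VM to a node that already runs the new QEMU
qm migrate <VMID> <target-node> --online

# Debug information to post
pveversion -v
qm config <VMID>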
 
1) Yes; in fact, I shut the server down after the update and brought it back up fresh.

2) pveversion -v output:
Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.39-3-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-8
pve-kernel-helper: 7.2-8
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-15
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.4.174-2-pve: 5.4.174-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-7
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.5-1
proxmox-backup-file-restore: 2.2.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1


VM Config:
Code:
#args: -machine smm=off
agent: 1
balloon: 0
bios: ovmf
boot: order=sata0;ide2
cores: 8
cpu: kvm64,flags=+aes
efidisk0: VM_Storage:vm-100-disk-0,size=4M
hostpci0: 0000:02:00,pcie=1,x-vga=1
hostpci1: 0000:03:00
hostpci2: 0000:06:00
ide2: none,media=cdrom
machine: pc-q35-7.0
memory: 32768
name: Windows-Server
net0: e1000=DE:1B:D3:58:51:9E,bridge=vmbr1
net1: virtio=32:5C:89:09:AF:6D,bridge=vmbr2
net2: virtio=66:25:BE:C1:33:60,bridge=vmbr1,tag=30
net3: virtio=FE:2A:8E:EC:E0:2C,bridge=vmbr1,tag=40
numa: 0
onboot: 1
ostype: win10
sata0: VM_Storage:vm-100-disk-1,size=218476M,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=<xxxxxx>
sockets: 1
startup: order=5,up=300
vga: none
vmgenid: <xxxxxx>
vmstatestorage: VM_Storage

The devices being passed through are an Nvidia Quadro K2200, a USB card, and a MegaRAID 9265-8i controller. They have been passed through since day one on this VM with no issues (well, until now), so I don't believe they are the problem. I have also tried smm=off and it made no difference.
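For anyone who wants to reproduce the smm=off test, a sketch of how that commented-out args line can be toggled (assuming VMID 100; note that --args passes raw flags straight to QEMU and is otherwise unsupported):
Code:
# Add the raw QEMU machine flag (same effect as the 'args:' line in the config)
qm set 100 --args '-machine smm=off'

# Remove it again
qm set 100 --delete args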
 
OK, I just applied today's updates, so something else must have needed updating, because it's all working now! Thank you!
 
Hi,

I had the stuck-reboot problem today on 2 VMs.

Uptime of both VMs was 70 days.

kernel: 5.13.19-2-pve
QEMU version: 6.2.0-11

I tried to reset / live-migrate / reset the VMs multiple times between two unpatched nodes; they were always stuck at boot.


Then I upgraded QEMU to the patched version (6.2.0-11, the build I posted earlier in this thread):

Live-migrate the VM to the patched node.
Reset the VM.
And finally it boots.

So I really think the bug is fixed. (You can upgrade to QEMU 7.x, which includes the fix, or use my patched 6.2 version.)
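In command form, the sequence that worked was roughly this (a sketch; VMID 100 and the node name node2 are placeholders):
Code:
# Live-migrate the stuck VM onto the node running the patched QEMU
qm migrate 100 node2 --online

# Hard-reset the VM so it boots under the new binary
qm reset 100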
 

After 369 posts, seeing the phrase "the problem is solved" makes me a little emotional.
 
"On Qemu 7.0 ?" are you referring to the Machine version in the GUI? -
Its not a solution to upgrade the machine version because of the issues with missing drives / network setting after change the Machine version to 7.0
 
"On Qemu 7.0 ?" are you referring to the Machine version in the GUI? -
Its not a solution to upgrade the machine version because of the issues with missing drives / network setting after change the Machine version to 7.0
No, I am referring to the QEMU software itself that runs on the server. It is not yet available in the enterprise repository, only in the no-subscription one. You need to upgrade pve-qemu-kvm from release 6.2.0-11 to 7.0.0-2 (see the posts above).
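To check which version a node actually has installed and pull in the newer package (a sketch; at the time, this still required the no-subscription or test repository to be enabled):
Code:
# Show installed and candidate versions of the QEMU package
apt policy pve-qemu-kvm

# Upgrade only that package
apt update
apt install pve-qemu-kvm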
 
Hi,

I had the stuck reboot problem today on 2vms.

uptime of both vms was 70 days.

kernel: 5.13.19-2-pve
qemu version : 6.2.0-11

I have tried to reset/live migrate vm/reset multiple times between 2 unpatched nodes, the vms was always stuck at boot.


Then , I have upgraded qemu with patch (6.2.0-11 , the patched version I have provided previously in this thread forum)

Live migrate vm to the patched node
reset the vm
And finally it's booting.

So, I really think that the bug it's fixed. (you can upgrade to qemu 7.X which include the bug, or use my patched 6.2 version)

I can confirm!

Two-node cluster. Node 1: Proxmox 7.2 enterprise, fully upgraded. Node 2: Proxmox 7.2 enterprise, fully upgraded, but with QEMU 7.0.

A Windows 10 VM had been running on node 1 for 25 days. Reboot of the VM: spinning circle. Reset: spinning circle.
Moved the VM to node 2 and reset it: OK!
 
Note that we moved QEMU 7.0 to the enterprise repository just now. It has run well for about two months internally on all our infrastructure, and for well over a week on no-subscription with quite a bit of positive feedback.
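On an enterprise-repository node, a regular refresh should therefore offer it now; a quick way to verify (a sketch):
Code:
apt update
# The candidate version should now show 7.0.x from the pve-enterprise repository
apt policy pve-qemu-kvm
apt install pve-qemu-kvm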
 