Windows VMs stuck on boot after Proxmox Upgrade to 7.0

Interestingly, if I make a hot change that cannot be applied live, like this one, and then live-migrate the VM to another node, does the change remain in the "not applied" state because it waits for a stop/start of the VM, or is it applied during the migration anyway? In other words, is it just a "cosmetic" bug? The question matters: if I have to apply this workaround to a large number of VMs, it makes a big difference. Regardless, what counts is solving this long-standing problem, so let's try the workaround.
No, you're right. Unfortunately, live-migration is not enough to have the change applied in this case.

@Paolo Marinelli I'm afraid the setting is not in effect for you yet. I'd also not switch over production machines just like that. The setting will affect the time settings of the VM, so you need to be a bit careful.

I'll edit my post with regard to these points.
 
> Interestingly, if I make a hot change that cannot be applied live, like this one, and then live-migrate the VM to another node, does the change remain in the "not applied" state because it waits for a stop/start of the VM, or is it applied during the migration anyway?
I don't know; I made the change manually by editing /etc/pve/qemu-server/*.conf, which is much quicker.
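For reference, a minimal sketch of scripting this with the qm CLI instead of editing the files by hand (the VMIDs below are hypothetical; substitute your own Windows guests):

# Hypothetical VMID list -- replace with your actual Windows VMs
for vmid in 100 101 102; do
    qm set "$vmid" --localtime 0   # recorded as a pending change until the VM is stopped and started
done

Editing /etc/pve/qemu-server/*.conf directly should have the same effect, but going through qm keeps the pending-change bookkeeping consistent.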
 
> No, you're right. Unfortunately, live-migration is not enough to have the change applied in this case.
>
> @Paolo Marinelli I'm afraid the setting is not in effect for you yet. I'd also not switch over production machines just like that. The setting will affect the time settings of the VM, so you need to be a bit careful.
aghh
 
Hi, I just shut down (powered off) and restarted all my VMs running Windows 2019 with the suggested modification (localtime: 0).
How long should I wait before running shutdown -r on all VMs to see whether the workaround works? (1 week?)
The VMs now start without the SPICE audio agent (it isn't a problem for me, but...):

audio: Could not init `spice' audio driver
audio: warning: Using timer based audio emulation
 
> How long should I wait before running shutdown -r on all VMs to see whether the workaround works? (1 week?)

I think more than two weeks.
 
Hi,
how many of you have tried setting localtime: 0 as a potential workaround? (EDIT: Of course, you need to stop/start the VM for the setting to actually take effect. Note that the change will affect the time seen by the guest, so be careful when applying this to production VMs.)
I'm asking because post #82 suggests this, and now there's a new report where changing the guest OS type to "other" helped. One of the few things that changing the OS type after VM creation affects is the default value of the localtime setting ("Use local time for RTC" in the UI). It's only enabled by default when the OS type is Windows.
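To check whether the setting is pending or already in effect on a given VM, something along these lines should work (VMID 100 is just an example):

qm config 100 --current | grep localtime   # the value the running VM actually uses
qm pending 100                             # lists changes queued until the next stop/start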
We tried the localtime: 0 option a few months ago (the setting is still there in the VM config file), but we still had outages afterwards, so this did not work for us.
 
I have some clusters on 6.4 with the same problem.
@wolfgang5505 Please review this thread carefully and the bug report related to it (link in thread). It is unlikely you are experiencing this problem with 6.4. You likely have a different issue.

If, after reviewing, you confirm you are having the same issue, please post detailed information about your setup; it may provide valuable clues, because you would be unique: everyone else has confirmed that it only happens on 7.x.
 
> @wolfgang5505 Please review this thread carefully and the bug report related to it (link in thread). It is unlikely you are experiencing this problem with 6.4. You likely have a different issue.
>
> If, after reviewing, you confirm you are having the same issue, please post detailed information about your setup; it may provide valuable clues, because you would be unique: everyone else has confirmed that it only happens on 7.x.
Here is my configuration; it's a five-node cluster:
proxmox-ve: 6.4-1 (running kernel: 5.11.22-5-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.11: 7.0-8~bpo10+1
pve-kernel-5.4: 6.4-12
pve-kernel-helper: 6.4-12
pve-kernel-5.11.22-5-pve: 5.11.22-10~bpo10+1
pve-kernel-5.11.22-4-pve: 5.11.22-8~bpo10+1
pve-kernel-5.11.21-1-pve: 5.11.21-1~bpo10
pve-kernel-5.4.162-1-pve: 5.4.162-2
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: not correctly installed
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.7-pve1

and the VM config from one Windows 2019 server:

agent: 1
bootdisk: scsi0
cores: 4
ide2: none,media=cdrom
memory: 4096
name: KF-DC01
net0: virtio=00:1A:4A:8E:9B:A5,bridge=vmbr1
numa: 0
ostype: win10
scsi0: ssd01:100/vm-100-disk-0.qcow2,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=0b7d2d15-0bcb-4f2d-b4ba-bd2849acef0e
sockets: 1

All of them show the same behavior: spinning dots after the update; a stop and start of the VM fixes it.
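Worth noting: a reboot initiated from inside the guest reuses the running QEMU process, while a full stop/start launches a fresh one, which appears to be why only stop/start clears the hang. A minimal sketch, again with VMID 100 as an example:

qm stop 100    # tears down the running QEMU process
qm start 100   # launches a fresh QEMU process with the current config (pending changes included)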
 
I was re-reading the various posts and something came to mind.
I installed a cluster in production with Proxmox 7.0 on 1 October 2021 and did not notice any problems until mid-December 2021.

Was it just a coincidence or did some events occur in December 2021?

Going back to "when" things changed may help identify the cause.
 
As dea mentioned, I tried to remember when I upgraded from 6 to 7; it must have been between August and September, but indeed I don't remember having the problem from the beginning.

It appeared later, but I can't say when or with which update.

At first I didn't pay much attention: since it was a Windows VM and, as we all know, Windows has its own issues, I assumed it must be a Windows thing. But after a while, when it also affected the Linux VMs, I looked for a solution and found this thread, learning that it's a known bug.
 
> I installed a cluster in production with Proxmox 7.0 on 1 October 2021 and did not notice any problems until mid-December 2021.
Kernel-wise there was quite some change around that time: 5.11 was being sunset, 5.13 became the new default, and 5.15 was made available as an opt-in preview. So it depends on how frequently you upgrade and whether you explicitly opted in to the new kernel.

5.15.5-1-pve was uploaded to pve-enterprise on 2021-12-06 (relevant if one was already on the then rather fresh 5.15 release, which was still opt-in); for 5.13 we got 5.13.19-2-pve around that time (initial upload to testing at the end of November); and the EOL 5.11 series got its last release on 2021-11-07 as 5.11.22-7-pve.

FWIW, you could check your setup's /var/log/apt/history.log, and its log-rotated variants, to see what newly came in around that time.
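A one-liner along these lines should cover both the current log and the rotated ones (the December 2021 date is just an example):

zgrep -h -A 3 'Start-Date: 2021-12' /var/log/apt/history.log /var/log/apt/history.log.*.gz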
 
> FWIW, you could check your setup's /var/log/apt/history.log, and its log-rotated variants, to see what newly came in around that time.

OK, here are the logs from that cluster (all nodes were restarted for pre-production tests, and it officially went into production on the afternoon of 1 October 2021).

*********************************
Start-Date: 2021-10-04 21:30:06
Commandline: apt dist-upgrade
Upgrade: qemu-server:amd64 (7.0-13, 7.0-14), pve-container:amd64 (4.0-9, 4.0-10), libpve-common-perl:amd64 (7.0-6, 7.0-9), pve-kernel-helper:amd64 (7.0-7, 7.1-2)
End-Date: 2021-10-04 21:30:10

Start-Date: 2021-10-12 07:40:45
Commandline: apt dist-upgrade
Install: pve-kernel-5.11.22-5-pve:amd64 (5.11.22-10, automatic)
Upgrade: reportbug:amd64 (7.10.3, 7.10.3+deb11u1), libperl5.32:amd64 (5.32.1-4+deb11u1, 5.32.1-4+deb11u2), libpam-runtime:amd64 (1.4.0-9, 1.4.0-9+deb11u1), krb5-locales:amd64 (1.18.3-6, 1.18.3-6+deb11u1), libgssapi-krb5-2:amd64 (1.18.3-6, 1.18.3-6+deb11u1), pve-firmware:amd64 (3.3-1, 3.3-2), perl:amd64 (5.32.1-4+deb11u1, 5.32.1-4+deb11u2), python3-reportbug:amd64 (7.10.3, 7.10.3+deb11u1), libkrb5support0:amd64 (1.18.3-6, 1.18.3-6+deb11u1), libc6:amd64 (2.31-13, 2.31-13+deb11u2), locales:amd64 (2.31-13, 2.31-13+deb11u2), libkrb5-3:amd64 (1.18.3-6, 1.18.3-6+deb11u1), libpam-modules:amd64 (1.4.0-9, 1.4.0-9+deb11u1), base-files:amd64 (11.1, 11.1+deb11u1), libk5crypto3:amd64 (1.18.3-6, 1.18.3-6+deb11u1), rsync:amd64 (3.2.3-4, 3.2.3-4+deb11u1), libpam-modules-bin:amd64 (1.4.0-9, 1.4.0-9+deb11u1), perl-base:amd64 (5.32.1-4+deb11u1, 5.32.1-4+deb11u2), libpam0g:amd64 (1.4.0-9, 1.4.0-9+deb11u1), libc-l10n:amd64 (2.31-13, 2.31-13+deb11u2), libc-bin:amd64 (2.31-13, 2.31-13+deb11u2), pve-kernel-5.11.22-4-pve:amd64 (5.11.22-8, 5.11.22-9), pve-kernel-5.11:amd64 (7.0-7, 7.0-8), perl-modules-5.32:amd64 (5.32.1-4+deb11u1, 5.32.1-4+deb11u2)
End-Date: 2021-10-12 07:42:22

Start-Date: 2021-10-12 20:09:07
Commandline: apt-get dist-upgrade
Upgrade: pve-firewall:amd64 (4.2-3, 4.2-4)
End-Date: 2021-10-12 20:09:12

Start-Date: 2021-10-21 07:09:04
Commandline: apt dist-upgrade
Upgrade: libproxmox-acme-perl:amd64 (1.3.0, 1.4.0), libpve-storage-perl:amd64 (7.0-11, 7.0-12), proxmox-backup-file-restore:amd64 (2.0.9-2, 2.0.11-1), libpve-access-control:amd64 (7.0-4, 7.0-5), pve-container:amd64 (4.0-10, 4.1-1), libproxmox-acme-plugins:amd64 (1.3.0, 1.4.0), proxmox-backup-client:amd64 (2.0.9-2, 2.0.11-1), libpve-http-server-perl:amd64 (4.0-2, 4.0-3), libpve-common-perl:amd64 (7.0-9, 7.0-10)
End-Date: 2021-10-21 07:09:10

Start-Date: 2021-10-27 08:33:13
Commandline: apt-get dist-upgrade
Upgrade: tzdata:amd64 (2021a-1+deb11u1, 2021a-1+deb11u2)
End-Date: 2021-10-27 08:33:14

Start-Date: 2021-10-29 07:25:06
Commandline: apt upgrade
Upgrade: bind9-host:amd64 (1:9.16.15-1, 1:9.16.22-1~deb11u1), bind9-dnsutils:amd64 (1:9.16.15-1, 1:9.16.22-1~deb11u1), bind9-libs:amd64 (1:9.16.15-1, 1:9.16.22-1~deb11u1)
End-Date: 2021-10-29 07:25:07

*********************************

Start-Date: 2021-11-09 21:22:09
Commandline: apt upgrade
Upgrade: libldb2:amd64 (2:2.2.0-3.1, 2:2.2.3-2~deb11u1), libwbclient0:amd64 (2:4.13.5+dfsg-2, 2:4.13.13+dfsg-1~deb11u2), libsmbclient:amd64 (2:4.13.5+dfsg-2, 2:4.13.13+dfsg-1~deb11u2), python3-ldb:amd64 (2:2.2.0-3.1, 2:2.2.3-2~deb11u1), smbclient:amd64 (2:4.13.5+dfsg-2, 2:4.13.13+dfsg-1~deb11u2), samba-libs:amd64 (2:4.13.5+dfsg-2, 2:4.13.13+dfsg-1~deb11u2), samba-common:amd64 (2:4.13.5+dfsg-2, 2:4.13.13+dfsg-1~deb11u2)
End-Date: 2021-11-09 21:22:11

*********************************

Start-Date: 2021-12-02 07:17:25
Commandline: apt install libnss3
Upgrade: libnss3:amd64 (2:3.61-1, 2:3.61-1+deb11u1)
End-Date: 2021-12-02 07:17:26

Start-Date: 2021-12-29 09:06:22
Commandline: apt-get dist-upgrade
Install: libjs-qrcodejs:amd64 (1.20201119-pve1, automatic), swtpm-libs:amd64 (0.7.0~rc1+2, automatic), libposix-strptime-perl:amd64 (0.13-1+b7, automatic), swtpm-tools:amd64 (0.7.0~rc1+2, automatic), libopts25:amd64 (1:5.18.16-4, automatic), libzpool5linux:amd64 (2.1.1-pve3, automatic), swtpm:amd64 (0.7.0~rc1+2, automatic), libjson-glib-1.0-common:amd64 (1.6.2-1, automatic), pve-kernel-5.13.19-2-pve:amd64 (5.13.19-4, automatic), libtpms0:amd64 (0.9.0+1, automatic), gnutls-bin:amd64 (3.7.1-5, automatic), libunbound8:amd64 (1.13.1-1, automatic), libjson-glib-1.0-0:amd64 (1.6.2-1, automatic), pve-kernel-5.13:amd64 (7.1-5, automatic), pve-kernel-5.11.22-7-pve:amd64 (5.11.22-12, automatic), libgnutls-dane0:amd64 (3.7.1-5, automatic)
Upgrade: librados2:amd64 (16.2.6-pve2, 16.2.7), pve-docs:amd64 (7.0-5, 7.1-2), libcurl4:amd64 (7.74.0-1.3+b1, 7.74.0-1.3+deb11u1), ceph-fuse:amd64 (16.2.6-pve2, 16.2.7), libcurl3-gnutls:amd64 (7.74.0-1.3+b1, 7.74.0-1.3+deb11u1), proxmox-widget-toolkit:amd64 (3.3-6, 3.4-4), libpve-rs-perl:amd64 (0.2.3, 0.4.4), corosync:amd64 (3.1.5-pve1, 3.1.5-pve2), pve-firmware:amd64 (3.3-2, 3.3-3), ceph-mgr-modules-core:amd64 (16.2.6-pve2, 16.2.7), zfs-zed:amd64 (2.0.5-pve1, 2.1.1-pve3), chrony:amd64 (4.0-8, 4.0-8+deb11u1), zfs-initramfs:amd64 (2.0.5-pve1, 2.1.1-pve3), spl:amd64 (2.0.5-pve1, 2.1.1-pve3), pve-qemu-kvm:amd64 (6.0.0-4, 6.1.0-3), libnvpair3linux:amd64 (2.0.5-pve1, 2.1.1-pve3), ceph-base:amd64 (16.2.6-pve2, 16.2.7), libpve-cluster-api-perl:amd64 (7.0-3, 7.1-2), xxd:amd64 (2:8.2.2434-3, 2:8.2.2434-3+deb11u1), python3-ceph-common:amd64 (16.2.6-pve2, 16.2.7), librbd1:amd64 (16.2.6-pve2, 16.2.7), lxcfs:amd64 (4.0.8-pve2, 4.0.11-pve1), librgw2:amd64 (16.2.6-pve2, 16.2.7), libuutil3linux:amd64 (2.0.5-pve1, 2.1.1-pve3), libpve-storage-perl:amd64 (7.0-12, 7.0-15), ceph-common:amd64 (16.2.6-pve2, 16.2.7), vim-common:amd64 (2:8.2.2434-3, 2:8.2.2434-3+deb11u1), libpve-guest-common-perl:amd64 (4.0-2, 4.0-3), libvotequorum8:amd64 (3.1.5-pve1, 3.1.5-pve2), libquorum5:amd64 (3.1.5-pve1, 3.1.5-pve2), pve-cluster:amd64 (7.0-3, 7.1-2), wget:amd64 (1.21-1+b1, 1.21-1+deb11u1), proxmox-ve:amd64 (7.0-2, 7.1-1), lxc-pve:amd64 (4.0.9-4, 4.0.11-1), libcmap4:amd64 (3.1.5-pve1, 3.1.5-pve2), ceph-mds:amd64 (16.2.6-pve2, 16.2.7), ceph-mgr:amd64 (16.2.6-pve2, 16.2.7), ceph-mon:amd64 (16.2.6-pve2, 16.2.7), ceph-osd:amd64 (16.2.6-pve2, 16.2.7), proxmox-backup-file-restore:amd64 (2.0.11-1, 2.1.2-1), python3-cephfs:amd64 (16.2.6-pve2, 16.2.7), libcfg7:amd64 (3.1.5-pve1, 3.1.5-pve2), libcephfs2:amd64 (16.2.6-pve2, 16.2.7), qemu-server:amd64 (7.0-14, 7.1-4), libpve-access-control:amd64 (7.0-5, 7.1-5), pve-container:amd64 (4.1-1, 4.1-3), libcpg4:amd64 (3.1.5-pve1, 3.1.5-pve2), vim-tiny:amd64 (2:8.2.2434-3, 2:8.2.2434-3+deb11u1), pve-i18n:amd64 (2.5-1, 2.6-2), base-files:amd64 (11.1+deb11u1, 11.1+deb11u2), libradosstriper1:amd64 (16.2.6-pve2, 16.2.7), proxmox-backup-client:amd64 (2.0.11-1, 2.1.2-1), libgmp10:amd64 (2:6.2.1+dfsg-1, 2:6.2.1+dfsg-1+deb11u1), distro-info-data:amd64 (0.51, 0.51+deb11u1), proxmox-mini-journalreader:amd64 (1.2-1, 1.3-1), python3-rbd:amd64 (16.2.6-pve2, 16.2.7), python3-rgw:amd64 (16.2.6-pve2, 16.2.7), libseccomp2:amd64 (2.5.1-1, 2.5.1-1+deb11u1), libpve-http-server-perl:amd64 (4.0-3, 4.0-4), pve-manager:amd64 (7.0-11, 7.1-8), libpve-common-perl:amd64 (7.0-10, 7.0-14), ceph:amd64 (16.2.6-pve2, 16.2.7), libjaeger:amd64 (16.2.6-pve2, 16.2.7), pve-kernel-5.11:amd64 (7.0-8, 7.0-10), libzfs4linux:amd64 (2.0.5-pve1, 2.1.1-pve3), curl:amd64 (7.74.0-1.3+b1, 7.74.0-1.3+deb11u1), pve-firewall:amd64 (4.2-4, 4.2-5), libcorosync-common4:amd64 (3.1.5-pve1, 3.1.5-pve2), libnozzle1:amd64 (1.22-pve1, 1.22-pve2), python3-ceph-argparse:amd64 (16.2.6-pve2, 16.2.7), libknet1:amd64 (1.22-pve1, 1.22-pve2), pve-edk2-firmware:amd64 (3.20200531-1, 3.20210831-2), pve-kernel-helper:amd64 (7.1-2, 7.1-6), zfsutils-linux:amd64 (2.0.5-pve1, 2.1.1-pve3), libpve-cluster-perl:amd64 (7.0-3, 7.1-2), python3-rados:amd64 (16.2.6-pve2, 16.2.7)
End-Date: 2021-12-29 09:09:09

***************************

After these... the problems started...
 
> Hottest candidates are IMO the switch from pve-qemu-kvm 6.0.0-4 to 6.1.0-3 and going from the 5.11 kernel to 5.13.

I fully agree. As I said months ago in this thread, the problem is in QEMU, the kernel, or both.

And now?

It is too much of a gamble to roll back several clusters in production to a kernel and QEMU version from 8 months ago ... running on Proxmox 7.2-7 enterprise.

:)
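That said, testing the kernel half of the hypothesis on a single non-critical node is relatively low-risk. A sketch, assuming the node boots via proxmox-boot-tool and the old kernel package is still installed (the version is only an example):

proxmox-boot-tool kernel list                            # kernels available to boot
proxmox-boot-tool kernel pin 5.11.22-7-pve --next-boot   # use the old kernel for one boot only

Testing the QEMU half would additionally require downgrading pve-qemu-kvm and doing a stop/start of the VMs, since running guests keep their current QEMU binary.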
 
> After these... the problems started...

In fact, I went back through my colleagues' emails: the first problems arose around the week of January 17-21, 2022, coinciding with the application of Microsoft updates to the VMs, 25 days after Proxmox was updated.
 
I have another 6.4-1 cluster running kernel pve-kernel-5.4.78-2-pve with 522 days of uptime and no problems. PVE 6.4-1 running kernel 5.11 has problems.
 
> I have another 6.4-1 cluster running kernel pve-kernel-5.4.78-2-pve with 522 days of uptime and no problems. PVE 6.4-1 running kernel 5.11 has problems.

5.11.what? Which subversion?

Is the 5.11.x on Proxmox 6.4 newer than the 5.11.22-7 on Proxmox 7.0 that worked without the slightest problem?

Could it have been a backport problem that also affected Proxmox 6.4?
 
> 5.11.what? Which subversion?
Linux pve6-01 5.11.22-5-pve #1 SMP PVE 5.11.22-10~bpo10+1 (Tue, 28 Sep 2021 10:30:51 +0200)
 
