Windows VMs stuck on boot after Proxmox Upgrade to 7.0

Hello, I am also having the same problem. Everything worked fine until I upgraded to 7.4-3, and now my Windows Server 2019 does not get past the boot screen. I have tried rebooting the VM, the node, and the other node in the cluster, with still no luck. Any help would be appreciated.
What are the settings of your VM? Any "Extra CPU Flags" enabled?
 
Alright, it finally got past the boot screen, but it took 12 hours to get me to the login screen, and for some time before that it was just a black screen. I do have some CPU flags enabled, running on an AMD processor. The flag is hv-evmcs, with ballooning turned off and memory at 16GB, and the CPU type is set to host. Below is the output of pending, because I can't remember which command shows the current config.
Code:
cur balloon: 0
cur boot: order=ide0;net0
cur cores: 2
cur cpu: host,flags=-hv-evmcs
cur ide0: Applications:vm-105-disk-4,format=raw,size=100G,ssd=1
cur ide2: local:iso/17763.737.190906-2324.rs5_release_svc_refresh_SERVER_EVAL_x64FRE_en-us_1.iso,media=cdrom
cur memory: 16384
cur name: WindowsServer2019
cur net0: e1000=CE:DD:76:71:9E:F2,bridge=vmbr0,firewall=1
cur numa: 0
cur onboot: 1
cur ostype: win10
cur scsihw: virtio-scsi-pci
cur smbios1: uuid=5177249a-9b8a-4b55-ad04-c4e6bf9045d9
cur sockets: 2
cur spice_enhancements: foldersharing=1
cur startup: order=5
cur vmgenid: b562c4a8-9c00-4f8f-b1ed-7a7bb6267645
 
Alright, it finally got past the boot screen, but it took 12 hours to get me to the login screen, and for some time before that it was just a black screen. I do have some CPU flags enabled, running on an AMD processor. The flag is hv-evmcs, with ballooning turned off and memory at 16GB, and the CPU type is set to host. Below is the output of pending, because I can't remember which command shows the current config.
Disable hv-evmcs and hv-tlbflush if enabled and you should be good again... known issue in the current 7.4-3 kernel...
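Roughly, from the CLI, that would be something like this (assuming the VM is ID 105, matching the config posted above; double-check with qm config first):
Code:
# Drop the problematic Hyper-V enlightenments from the 'host' CPU type
qm set 105 --cpu 'host,flags=-hv-evmcs;-hv-tlbflush'
# Needs a full stop/start (not just a reboot inside the guest) to take effect
qm stop 105 && qm start 105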
 
Hi,
cur cpu: host,flags=-hv-evmcs
the hv-evmcs flag is already disabled according to this, and hv-tlbflush is not present either. You can use qm config <ID> to see the current config. What does the load on your system look like? How much CPU does the process for the VM use?
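For example (again assuming VMID 105):
Code:
# Current VM configuration
qm config 105
# CPU usage of the VM's QEMU/KVM process
top -p "$(cat /var/run/qemu-server/105.pid)"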

Disable hv-evmcs and hv-tlbflush if enabled and you should be good again... known issue in the current 7.4-3 kernel...
The issue with hv_tlbflush is fixed in current kernels, i.e. >= pve-kernel-5.15.107-1-pve and in the opt-in 6.2 kernel as well.
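A quick sketch of getting onto one of those kernels (the opt-in package name on PVE 7.x is pve-kernel-6.2; verify before installing):
Code:
# Update to the latest 5.15 kernel from the PVE repositories
apt update && apt full-upgrade
# ...or install the opt-in 6.2 kernel instead
apt install pve-kernel-6.2
# A node reboot is needed to actually run the new kernel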
 
CPU usage is negligible, and it still takes a while to boot up. Here is the config.
Code:
balloon: 0
boot: order=ide0;net0;ide1
cores: 2
cpu: host,flags=-hv-tlbflush;-hv-evmcs
ide0: Applications:vm-105-disk-4,cache=writeback,discard=on,format=raw,size=100G,ssd=1
ide1: local:iso/virtio-win.iso,media=cdrom,size=522284K
ide2: local:iso/17763.737.190906-2324.rs5_release_svc_refresh_SERVER_EVAL_x64FRE_en-us_1.iso,media=cdrom
memory: 16384
name: WindowsServer2019
net0: e1000=CE:DD:76:71:9E:F2,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=5177249a-9b8a-4b55-ad04-c4e6bf9045d9
sockets: 2
spice_enhancements: foldersharing=1
startup: order=5
vmgenid: b562c4a8-9c00-4f8f-b1ed-7a7bb6267645
 
CPU usage is negligible, and it still takes a while to boot up. Here is the config.
How long approximately?

Code:
ide0: Applications:vm-105-disk-4,cache=writeback,discard=on,format=raw,size=100G,ssd=1
Can you share the storage configuration for Applications from /etc/pve/storage.cfg?

Not sure if related to your issue, but in general, using VirtIO rather than IDE or SATA is recommended, because of better performance and because the code is better maintained. But you need to install the relevant drivers for Windows first. And best to have a working backup before trying such changes!
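Only as a rough illustration (not specific advice for this VM), the switch could look like this on the CLI, assuming VMID 105 and the volume name from your config, with the VirtIO drivers already installed inside Windows and a backup at hand:
Code:
# Detach the IDE disk; the volume stays on the storage as 'unused0'
qm set 105 --delete ide0
# Re-attach the same volume on the VirtIO SCSI controller
qm set 105 --scsi0 Applications:vm-105-disk-4,cache=writeback,discard=on,ssd=1
# Boot from the new disk
qm set 105 --boot order=scsi0;net0
If Windows cannot boot because the SCSI driver isn't active yet, a common trick is to first attach a small temporary SCSI disk, boot once so Windows loads the driver, and only then move the system disk over.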
 
Hello, sorry for the late response. Here is the config for Applications; also, the CPU usage does not go above 3%. I have been trying to install the drivers, but the installations are taking forever.
Code:
zfspool: Applications
        pool Applications
        content rootdir,images
        mountpoint /Applications
        nodes quantumcomputers
        sparse 0
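For completeness, the pool behind that storage can be checked with the usual ZFS commands (pool name taken from the config above):
Code:
# Pool health and any running scrub/resilver
zpool status Applications
# Capacity, fragmentation and per-vdev layout
zpool list -v Applications
# Per-device I/O and latency, refreshed every 5 seconds
zpool iostat -v Applications 5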
 
OK, update: I have installed the drivers, but they don't seem to be working. I have tried installing them multiple times, with no change to anything. It still takes a long time to do anything.
 
Hello, I am also having the same problem. Everything worked fine until I upgraded to 7.4-3, and now my Windows Server 2019 does not get past the boot screen. I have tried rebooting the VM, the node, and the other node in the cluster, with still no luck. Any help would be appreciated.
Try booting with an older kernel or the opt-in 6.2 kernel.
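For example (the version below is only a placeholder; pick one that kernel list actually shows on your node):
Code:
# Kernels known to proxmox-boot-tool
proxmox-boot-tool kernel list
# Pin one of them so it is booted by default, then reboot the node
proxmox-boot-tool kernel pin 5.15.102-1-pve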
 
Alright, so after some time trying to get it booted with a different kernel, I have found two things:
1. I manually select the kernel using "proxmox-boot-tool kernel add", and it does not select the kernel I want to use even though it is there.
2. After looking through this forum I have found out, quite late, that the SSDs I am using are not what I should be using for Proxmox at all.
 
My next question is: if I wanted to clone my current SSDs and then put in the new ones, would Proxmox be able to see them? There are 4 of them in a RAID-Z2. For reference, my drives are Samsung 870 QVO 1TB.
 
Samsung QVO or other QLC SSDs should not be used with ZFS.
Performance will be bad, and they will wear out too quickly.
 
Right, unfortunately I found out about that a bit too late. Do you know if I can clone my current SSDs onto new ones that would work, and still keep the ZFS RAID-Z2 on them, or do I just need to bite the bullet and redo all my VMs?
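For reference, the usual way to move a ZFS pool onto new disks without rebuilding it is to swap the members one at a time and let each resilver finish (device paths below are placeholders):
Code:
# Replace one member of the raidz2 with a new disk
zpool replace Applications /dev/disk/by-id/OLD_DISK /dev/disk/by-id/NEW_DISK
# Watch the resilver; repeat for the remaining disks one at a time
zpool status Applications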
 
Hello, I'm having a similar problem on a server. Windows Server 2012 is installed on the virtual servers. The servers are set to reboot themselves regularly through Windows, but from time to time they stay stuck on the boot screen. I'm using Proxmox 7.4-3 via OVH. How can I solve this problem?

Code:
pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-3
pve-kernel-5.15.107-2-pve: 5.15.107-2
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.7.0
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-2
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1

 
Hello, I'm having a similar problem on a server. Windows Server 2012 is installed on the virtual servers. The servers are set to reboot themselves regularly through Windows, but from time to time they stay stuck on the boot screen. I'm using Proxmox 7.4-3 via OVH. How can I solve this problem?

I am experiencing something similar with a Windows 11 VM after upgrading to 8.x.

Were you able to find a solution?
 
I am experiencing something similar with a Windows 11 VM after upgrading to 8.x.

Were you able to find a solution?

The issue was a setting in QEMU (kernel and firmware were not involved) that didn't reset a variable, which would result in an overflow after almost a month of uptime.
This problem only affected Windows, which would then get stuck at the spinning dots on reboot.
 
