Windows guest bluescreen with Proxmox 6

cybermcm

I upgraded my (really old) lab to Proxmox 6.0-4.
Now I get a bluescreen in my Windows Server 2019 VM (SYSTEM THREAD EXCEPTION NOT HANDLED).

I played around a bit with the 2019 install media; a fresh VM also bluescreens during setup.
Changed the CPU type to core2duo -> I was able to install -> changed the CPU back to kvm64 -> boot loop (Windows automatic repair runs with no success in the end, no error message, can't repair)...
Changing the CPU type back to core2duo -> no success, still a boot loop. Why?
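For reference, this is roughly how I switched the CPU type from the CLI (VMID 103, matching my config further down):
Code:
# install works with this CPU type:
qm set 103 --cpu core2duo
# switching back to this one triggers the boot loop:
qm set 103 --cpu kvm64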

Any ideas what might help (going back to Proxmox 5.4 is an option, I know ;-))?
Is there an expert out there who can save my lab?

root@host01:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 38 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 23
Model name: Intel(R) Xeon(R) CPU E5440 @ 2.83GHz
Stepping: 6
CPU MHz: 2826.815
BogoMIPS: 5666.95
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 6144K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm pti tpr_shadow vnmi flexpriority dtherm

BTW: Debian Linux guests are running fine, the host itself is OK...
 
I have the same problem. I installed a new server with Proxmox 6.0-4 today, but I can't install Windows Server 2019; during setup it always bluescreens right after "Loading Files" :(

Proxmox 5.4-1 does not have this issue.
 
1. An old HP G5 server; the CPU info is in my first post
2.
root@host01:~# qm config 103
agent: 1
boot: cdn
bootdisk: ide0
cores: 1
cpu: core2duo
ide0: h01R5:vm-103-disk-0,discard=on,size=20G
ide2: none,media=cdrom
memory: 1024
name: SRV03
net0: virtio=FA:C2:42:1C:93:F3,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win10
smbios1: uuid=392ad384-86ae-4d91-8142-3b409bf9cd8a
sockets: 1
unused0: h01R5:vm-103-disk-1
vmgenid: bb496aa6-d3aa-4dc9-93be-3b7a926908df
 
Works here.
  1. What physical CPU do you run?
  2. Post the output of "qm config VMID"

1. The CPU is an Intel Xeon X5450

2.
bootdisk: virtio0
cores: 4
ide2: local:iso/SW_DVD9_Win_Server_STD_CORE_2019_1809.1_64Bit_ChnSimp_DC_STD_MLF_X22-02966.ISO,media=cdrom
memory: 4096
name: T-2019-KVM
numa: 0
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=68213f5e-c71e-40a0-a968-8c7f82c9f147
sockets: 1
virtio0: local-lvm:vm-100-disk-0,size=40G
vmgenid: 1405796f-f821-4879-8516-f0ee06be1634
 
Similar problem on an old test host.
Windows 2008R2 and Windows 2016 no longer start after the update to 6.x, but the symptom is different: the boot screen hangs indefinitely (more than 3 hours). Changing the CPU type and flags does not change the situation, and neither does changing the disk type, controller, or cache method.

Code:
root@test:~# qm config 100
balloon: 0
boot: c
bootdisk: sata0
cores: 2
ide2: none,media=cdrom
memory: 2048
name: w2008
net0: e1000=6A:24:2B:3B:C5:84,bridge=vmbr0,firewall=1
numa: 0
ostype: win7
sata0: local-lvm:vm-100-disk-0,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=4cb63a34-c320-4608-849d-c33ac3d1f0c2
sockets: 1
vmgenid: 0e666a7f-e8b8-4898-851c-ece6dba740c5

root@test:~# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       36 bits physical, 48 bits virtual
CPU(s):              2
On-line CPU(s) list: 0,1
Thread(s) per core:  1
Core(s) per socket:  2
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               23
Model name:          Pentium(R) Dual-Core  CPU      E6600  @ 3.06GHz
Stepping:            10
CPU MHz:             2400.027
CPU max MHz:         2534,0000
CPU min MHz:         1600,0000
BogoMIPS:            6133.40
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            2048K
NUMA node0 CPU(s):   0,1
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm xsave lahf_lm pti tpr_shadow vnmi flexpriority dtherm
 
Setting the OS Type to "Vista/2008" (thus disabling hv-tlbflush) could be a useful workaround for now. If that fixes it for you, let us know, as we are currently investigating this issue. It seems to be related to running on older hardware in particular.
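If you prefer the CLI, that corresponds to something like the following ("Vista/2008" is the w2k8 OS type; VMID 100 is just an example):
Code:
# set the guest OS type to Vista/2008, which avoids hv-tlbflush
qm set 100 --ostype w2k8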
 
I can confirm that switching to Vista/2008 helps, both on an HP ProLiant G5 and on an old PC converted to a home lab server:
root@host04:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 36 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 23
Model name: Intel(R) Core(TM)2 Quad CPU Q9300 @ 2.50GHz
Stepping: 7
CPU MHz: 2362.218
CPU max MHz: 2499.0000
CPU min MHz: 2003.0000
BogoMIPS: 5008.76
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 3072K
NUMA node0 CPU(s): 0-3
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm pti tpr_shadow vnmi flexpriority dtherm
 
Code:
Start-Date: 2019-07-29  19:54:52
Commandline: apt full-upgrade
Install: pve-kernel-5.0.18-1-pve:amd64 (5.0.18-1, automatic)
Upgrade: pve-kernel-5.0:amd64 (6.0-5, 6.0-6), libpve-storage-perl:amd64 (6.0-5, 6.0-6), pve-firewall:amd64 (4.0-5, 4.0-6), pve-container:amd64 (3.0-4, 3.0-5), pve-manager:amd64 (6.0-4, 6.0-5), libpve-common-perl:amd64 (6.0-2, 6.0-3), qemu-server:amd64 (6.0-5, 6.0-7), pve-kernel-helper:amd64 (6.0-5, 6.0-6), patch:amd64 (2.7.6-3, 2.7.6-3+deb10u1)
End-Date: 2019-07-29  19:56:02

Everything is back to normal. For Windows 7, Windows 2008R2, and Windows 2016 I set the "OS Type" back to match the installed OS, and left the "Extra CPU Flags" at their defaults. Thanks.
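In case it helps others: the new kernel only takes effect after a reboot, which can be verified like this:
Code:
# confirm the host actually booted the new kernel
uname -r
# should print: 5.0.18-1-pve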

Code:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       36 bits physical, 48 bits virtual
CPU(s):              2
On-line CPU(s) list: 0,1
Thread(s) per core:  1
Core(s) per socket:  2
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               23
Model name:          Pentium(R) Dual-Core  CPU      E6600  @ 3.06GHz
Stepping:            10
CPU MHz:             2367.737
CPU max MHz:         2534,0000
CPU min MHz:         1600,0000
BogoMIPS:            6133.10
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            2048K
NUMA node0 CPU(s):   0,1
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm xsave lahf_lm pti tpr_shadow vnmi flexpriority dtherm
 
I can confirm the fix works with pve-kernel-5.0.18-1-pve, on both of my hosts:
root@host04:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 36 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 23
Model name: Intel(R) Core(TM)2 Quad CPU Q9300 @ 2.50GHz
Stepping: 7
CPU MHz: 2008.477
CPU max MHz: 2499.0000
CPU min MHz: 2003.0000
BogoMIPS: 5009.50
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 3072K
NUMA node0 CPU(s): 0-3
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm pti tpr_shadow vnmi flexpriority dtherm
and
root@host01:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 38 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 23
Model name: Intel(R) Xeon(R) CPU E5440 @ 2.83GHz
Stepping: 6
CPU MHz: 2831.296
BogoMIPS: 5666.33
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 6144K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm pti tpr_shadow vnmi flexpriority dtherm
All settings are back to normal with Windows 2019.
 
Glad to hear the update fixed it!

Just for future reference: it seems the hv-tlbflush flag is silently incompatible with CPUs that support neither EPT nor VPID (the kernel assumes at least one of these technologies is available). We have made the flag optional for now; you can still enable it in the advanced CPU settings for your VMs if you know what you're doing.
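For the CLI-inclined, the optional flag maps onto the custom CPU flags syntax, roughly like this (kvm64 and VMID 100 are just examples):
Code:
# explicitly re-enable the flag (only sensible on hardware with EPT or VPID)
qm set 100 --cpu kvm64,flags=+hv-tlbflush
# or explicitly disable it
qm set 100 --cpu kvm64,flags=-hv-tlbflush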
 
I've recently upgraded to Proxmox 6 and applied the latest updates, and I'm having this issue. I'm running a 3-node cluster on some old HP DL360 G5 servers, with an old SAN exporting an NFS share for guest storage.

Windows 2008R2 servers won't boot unless I switch the OS type to Vista/2008. I tried with the OS type set to 7/2008R2 and just hv-tlbflush disabled, and with both hv-tlbflush and hv-evmcs disabled (roughly the combinations sketched below). The guest either bluescreens with a few different stop codes, or just hangs while booting. I also tried with ballooning disabled.
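For illustration, roughly what I tried (VMID 101 as in the config below; note the quoting, since ';' separates flags):
Code:
qm set 101 --ostype win7
qm set 101 --cpu kvm64,flags=-hv-tlbflush
qm set 101 --cpu 'kvm64,flags=-hv-tlbflush;-hv-evmcs'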

pve-kernel-5.0.18-1-pve doesn't seem to be available in the main enterprise repo yet, only in test, so I'm still running:
Code:
# uname -a
Linux vmserver4 5.0.15-1-pve #1 SMP PVE 5.0.15-1 (Wed, 03 Jul 2019 10:51:57 +0200) x86_64 GNU/Linux

Is the 5.0.18 kernel required in order to set the OS type back to 7/2008R2? If so, can I install just the kernel from the pvetest repo and disregard the other packages that are listed for upgrade when I enable it? See the sketch below.
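A sketch of what I have in mind, assuming the standard pvetest repository line for Buster:
Code:
# enable the pvetest repository
echo "deb http://download.proxmox.com/debian/pve buster pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update
# pull in only the kernel package
apt install pve-kernel-5.0.18-1-pve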

Code:
# qm config 101
agent: 1
balloon: 2048
boot: cdn
bootdisk: scsi0
cores: 2
cpu: kvm64
ide2: none,media=cdrom
memory: 3072
name: spiceworks
net0: virtio=32:33:63:61:64:65,bridge=vmbr0
numa: 0
onboot: 1
ostype: w2k8
scsi0: nfs1:101/vm-101-disk-1.qcow2,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=25c851fe-358d-4e2c-90c5-dc1993dcd118
sockets: 1
vga: std

Code:
# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       38 bits physical, 48 bits virtual
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           2
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               23
Model name:          Intel(R) Xeon(R) CPU           E5440  @ 2.83GHz
Stepping:            6
CPU MHz:             2000.090
BogoMIPS:            5666.86
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            6144K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm pti tpr_shadow vnmi flexpriority dtherm
 
This is a bug with the new Windows Hyper-V enlightenments on old CPUs. It has been fixed in the latest Proxmox updates (I think in pvetest, not sure about pve-no-subscription).
The fix is in the qemu-server package, not related to the kernel.
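You can check which version you are running with, for example:
Code:
# list installed Proxmox package versions; look for the qemu-server line
pveversion -v | grep qemu-server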
 
Thanks for the reply, spirit. I've applied the latest pve-qemu-kvm 4.0.0-3 from the enterprise repo, which didn't resolve the issue. Is the 4.0.0-5 version available in pvetest required to resolve this?
 
