Blue screen with 5.1

cybermcm

Member
Aug 20, 2017
94
10
8
It seems that changing to 2003/XP avoids the bug. My system has been up and running for 7 hours; it never ran that long with the original 2016/10 setting. Is there any downside if I change the setting to 2003/XP? If not, this can be a sufficient workaround for now...
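For reference, the same change should also be possible from the CLI; something like this ought to do it (VM ID 101 is just an example, and if I'm not mistaken, wxp is the value behind the 2003/XP setting):
Code:
# VM ID 101 is just an example; wxp should map to the "2003/XP" OS type
qm set 101 --ostype wxp
The VM probably needs a full stop/start afterwards for the new OS type to take effect.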
 

aderumier

Member
May 14, 2013
203
18
18
cybermcm said:
It seems that changing to 2003/XP avoids the bug. My system has been up and running for 7 hours; it never ran that long with the original 2016/10 setting. Is there any downside if I change the setting to 2003/XP? If not, this can be a sufficient workaround for now...
It'll reduce performance a little bit, but not too much (virtio devices are still accelerated).

I'll try to apply the patches to qemu tomorrow and build a deb. If you have time to test it, that would be great.
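In case it helps to see what's actually different: qm showcmd should print the full KVM command line for a VM, so you can compare the hv_* Hyper-V enlightenment flags passed with the 2016/10 OS type against what's left with 2003/XP (VM ID is just an example):
Code:
# print the KVM command line Proxmox would use for VM 101 (example ID)
qm showcmd 101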
 

cybermcm

Member
Aug 20, 2017
94
10
8
Some bad news: switching to 2003/XP seems to interfere with the boot process (at least on my host). Sometimes the VMs get stuck during boot, so I'll revert to the 4.10 kernel and the 10/2016 setting...
 

GadgetPig

Member
Apr 26, 2016
138
19
18
50
As far as the BSOD goes, does it affect only Windows 10/2016 (using the Win10/2016 VM setting)? I updated a Dell T300 server last night to the latest pve-no-subscription packages with kernel 4.13.8 and rebooted. So far no BSOD on my Win2008SBS VM (Vista/2008 VM setting) on LVM with the latest stable VirtIO drivers.
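For anyone wanting to double-check which kernel a node is actually running after the reboot, something like this should do (plain standard commands):
Code:
# currently running kernel
uname -r
# installed Proxmox packages, including the pve-kernel versions
pveversion -v | grep -i kernel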
 
Jun 8, 2016
221
41
28
43
Johannesburg, South Africa
We have a small cluster of 3 x HP ProLiant DL380 G7 systems. They are almost identical, but one has a slightly different CPU: two have dual Intel Xeon X5650 CPUs and one has dual Intel Xeon E5506 CPUs.

Everything had been stable for about a year, even after upgrading PVE 4.4 Ceph Jewel to PVE 5.1 Ceph Luminous.

Last night we changed the 7 guest VMs from VirtIO and the default (kvm64) CPU to VirtIO SCSI (discard=on) and set the CPU type to Nehalem. We upgraded netkvm and installed the balloon, vioserial and vioscsi drivers v0.1.126; these are confirmed to be stable on 200+ Windows guests in other clusters. The QEMU guest agent was also installed, which wasn't previously running on these guests.

VMs started crashing almost immediately after making these changes. Installing and booting 4.10.17-5-pve restored stability.

VMs are set with fixed memory, herewith a sample definition (Windows 2012r2 guests):
Code:
agent: 1
boot: cdn
bootdisk: scsi0
cores: 2
cpu: Nehalem
ide2: none,media=cdrom
localtime: 1
memory: 4096
name: test-vip
net0: virtio=00:16:3e:5f:00:05,bridge=vmbr0
numa: 1
onboot: 1
ostype: win8
protection: 1
scsi0: virtuals:vm-101-disk-1,discard=on,size=80G
scsihw: virtio-scsi-pci
smbios1: uuid=3a04b7a5-084a-4c2e-81b3-7675e5327b48
sockets: 1
startup: order=2
vga: cirrus
I have upgraded one of the 3 nodes to pve-kernel-4.13.8-2-pve_4.13.8-28_amd64.deb; 6+ hours so far and no problems... ;)
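For anyone wanting to try the same test kernel, installing it should be roughly this (filename as above), followed by a reboot into the new kernel:
Code:
# install the test kernel package and reboot into it
dpkg -i pve-kernel-4.13.8-2-pve_4.13.8-28_amd64.deb
reboot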
 
Jan 29, 2017
109
5
18
43
I updated the kernel, but on bootup (autostart) my Windows 10 machines (3 of them) hang. After doing a reset on each of them, everything works and has been stable so far.
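For the record, the reset was done per VM; from the shell that should be roughly equivalent to this (VM IDs are just examples):
Code:
# hard-reset the stuck Windows 10 guests; 101-103 are example IDs
qm reset 101
qm reset 102
qm reset 103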
 
