I'm running a small lab at home (old PC hardware, no special server hardware). Until yesterday I used version 5.0; today I did an in-place upgrade to version 5.1. Since version 5.1 my Server 2016 VMs are getting blue screens (Server 2016 Core and 2016 with GUI). With 5.0 everything ran stable.
The blue screen is caused by ntoskrnl.exe, bugcheck code 0x00000109 (CRITICAL_STRUCTURE_CORRUPTION).
The host itself seems to run stable.
Guest systems have virtio-win-0.1.141 drivers installed.
Can you try booting the previous kernel from the GRUB boot menu? It should be a 4.10 kernel, while PVE 5.1 uses a 4.13 one.
If that solves it, there may be a regression in a kernel module, maybe KVM.
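For reference, a quick way to check which kernel is running and which PVE kernels are installed (standard Debian/PVE tooling; the pve-kernel-* package naming is the usual scheme):

uname -r                  # currently running kernel
dpkg -l 'pve-kernel-*'    # installed PVE kernel packages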
Was your 5.0 installation also kept updated, or was it on an older state, e.g. the one from the ISO installer? Trying to rule things out here.
I applied updates regularly; I was on kernel 4.10 before the upgrade, and it was stable with 4.10.
Is there perhaps a log file which would help to track down the issue?
Hmm, look into the journal (journalctl) or dmesg and check whether you see anything resembling a kernel error or stack trace from around the time the VM bluescreens.
Otherwise, it could also be a bad coincidence and a memory or storage (hardware) problem...
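For example (standard journalctl/dmesg invocations; the time window and grep pattern are only suggestions, adjust them to when the guest actually bluescreened):

journalctl -k --since "-1h"               # kernel messages from the last hour
dmesg -T | grep -iE 'kvm|bug|error|oops'  # human-readable timestamps, filtered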
I didn't find anything in the journalctl log; my dmesg output is attached, but I'm not really sure whether a problem is visible there. Maybe you can take a look at it. If this doesn't help, I'll revert to the old kernel to see if the system is stable again.
Did memory testing today, seems fine. SMART values of the hard disks are OK.
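(If anyone wants to repeat the SMART check: smartctl from the smartmontools package reads those values; /dev/sda is only an example device.)

smartctl -a /dev/sda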
I'm now trying the old kernel (Linux host04 4.10.17-3-pve #1 SMP PVE 4.10.17-23 (Tue, 19 Sep 2017 09:43:50 +0200) x86_64 GNU/Linux) again via the GRUB advanced options menu. Let's see if it is stable again.
Update: Stable again with the old kernel (no blue screen during the night). Is there anything on my side I can do to track down the issue?
Another question: how can I modify GRUB to boot 4.10 by default? Currently 4.13 starts, which isn't useful at the moment.
I tried
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
and ran update-grub, but this doesn't work.
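In case it helps others: pinning the exact submenu entry should work instead (the entry title below is an assumption, read the real strings from your own grub.cfg):

grep -E "menuentry '|submenu '" /boot/grub/grub.cfg
# then, in /etc/default/grub, point GRUB_DEFAULT at "submenu title>entry title", e.g.:
GRUB_DEFAULT="Advanced options for Proxmox Virtual Environment GNU/Linux>Proxmox Virtual Environment GNU/Linux, with Linux 4.10.17-3-pve"
update-grub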
Same situation here on a lab installation: a mix of Debian, CentOS VMs and one Windows 10 VM. While the Linux VMs run stable, the Windows 10 VM regularly gets a CRITICAL_STRUCTURE_CORRUPTION blue screen (0x00000109) after a few hours in operation. I was running PVE 4 before and upgraded to PVE 5; the blue screens started shortly after starting the VM.
The VMs run on an SSD, PVE 5.1 itself on a regular HDD.
The Windows VM was running an older virtio-win driver. Upgrading to the virtio-win-0.1.141 drivers did not help.
Important detail:
The instability happens while running the VM on a single SSD. After I moved the VM to a regular HDD within the PVE 5.1 node, everything was stable as expected.
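(For anyone who wants to try the same: the disk can be moved live via the VM's Hardware panel (Move disk) in the GUI, or with qm; VMID, disk and storage name below are placeholders.)

qm move_disk 100 virtio0 local-hdd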
"
Limited CPU support for Windows 10 and Windows Server 2016 guests
On a Red Hat Enterprise 6 host, Windows 10 and Windows Server 2016 guests can only be created when using the following CPU models:
* the Intel Xeon E series
* the Intel Xeon E7 family
* Intel Xeon v2, v3, and v4
* Opteron G2, G3, G4, G5, and G6
For these CPU models, also make sure to set the CPU model of the guest to match the CPU model detected by running the "virsh capabilities" command on the host. Using the application default or hypervisor default prevents the guests from booting properly.
To be able to use Windows 10 guests on Legacy Intel Core 2 processors (also known as Penryn) or Intel Xeon 55xx and 75xx processor families (also known as Nehalem), add the following flag to the Domain XML file, with either Penryn or Nehalem as MODELNAME:
Other CPU models are not supported, and both Windows 10 guests and Windows Server 2016 guests created on them are likely to become unresponsive during the boot process.
"
My case: default kvm64.
I'll change it to host and report the results.
I'm now checking with one VM, and the boot time is very fast now.
I'll post the results.
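(For reference: the CPU type can be changed in the GUI under the VM's Hardware > Processors entry, or with qm; the VMID is a placeholder.)

qm set 100 --cpu host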