Windows VM rebooting (bugcheck)

I have a couple of Windows 10 VMs on my host with very different configurations.

Both see frequent reboots (as in daily, sometimes twice a day). The event log shows this:

> The computer has rebooted from a bugcheck. The bugcheck was: 0x0000003b (0x00000000c0000005, 0xfffff8000a8c4ce3, 0xffff848f31524a40, 0x0000000000000000). A dump was saved in: C:\WINDOWS\MEMORY.DMP. Report Id: 34f4e739-31a2-446c-b6ab-997ef65308c5.

My host details:

CPU(s): 4 x Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz (1 Socket)
Kernel Version: Linux 5.0.21-1-pve #1 SMP PVE 5.0.21-2 (Wed, 28 Aug 2019 15:12:18 +0200)
PVE Manager Version: pve-manager/6.0-7/28984024

I've seen other threads about this same issue at different points in time, and it seems that some kernel updates fixed it. I guess this is a new problem, as I'm running the latest upgrades.

Any suggestions?
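For reference, the host details and the affected VM's configuration can be pulled straight from the Proxmox host, roughly like this (100 is a placeholder VM ID - adjust to your own):

Code:
# Proxmox package versions, including the running kernel
pveversion -v

# Full configuration of the affected guest (disk bus, CPU type, ballooning, NIC model, ostype)
# 100 is a placeholder VM ID
qm config 100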
 

We have not encountered such a problem so far - does it occur on Debian Stretch hosts too?
 
Just whether the problem occurs in a Proxmox VE 5 environment too.

Right, sorry. I don't think so, but I'm not completely sure exactly when it started happening, because both VMs run services that start automatically, so a restart isn't always detected right away.
 
I have been having the same issues lately on a Windows 10 KVM. I started noticing the machine had rebooted overnight a few weeks ago, then it progressively got worse, to the point where it would reboot every few hours. Always the same bugcheck error the OP posted. Everything is 100% up to date on the Proxmox side, plus the latest Windows 10 updates (which is probably the issue). From Google searches it sounds like a driver issue, so I updated to the latest VirtIO drivers. I first updated to the latest stable version (0.1.171), then the latest version (0.1.172), and had the same issues.

I finally re-installed Windows using older VirtIO drivers (0.1.160 - just to try it) and thought I had fixed it; I hadn't had a reboot in a few days. Then this morning I noticed the machine had rebooted overnight. Here is the Event Viewer entry:

> The computer has rebooted from a bugcheck. The bugcheck was: 0x0000003b (0x00000000c0000005, 0xfffff80341845bf2, 0xfffff500e9a3dc80, 0x0000000000000000). A dump was saved in: C:\Windows\MEMORY.DMP. Report Id: <ID HERE>

I'm running on a Dell T30 with 40 GB of RAM. Proxmox and my VMs/containers run on a ZFS mirror pool. Let me know what other details I can post to help troubleshoot.

EDIT: Adding a few additional details. I also run a container with Plex (with GPU passthrough), a container that acts as a file server, and a Debian KVM with Docker. None of these have had any issues, reboots, or problems of any kind. The host machine runs just fine, and there are no errors noted in storage or elsewhere.
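For reference, the pool and disk health checks on the host side look roughly like this (the device name is a placeholder, and smartmontools needs to be installed):

Code:
# ZFS pool state and any read/write/checksum errors
zpool status -v

# SMART overall health plus a few wear/error attributes; /dev/sda is a placeholder device
smartctl -a /dev/sda | grep -iE 'overall-health|reallocated|pending|crc'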
 
Do you by any chance have ballooning activated, with different values set for min and max memory?
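A quick way to check that from the host CLI - a sketch, with 100 as a placeholder VM ID:

Code:
# Show the guest's memory and balloon settings
qm config 100 | grep -Ei 'memory|balloon'

# Disable ballooning entirely to rule it out (balloon=0 turns the balloon device off)
qm set 100 --balloon 0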
 
My Windows 10 VM rebooted again tonight while checking for Windows updates (I don't know if it's related). It was a different bugcheck this time.

> The computer has rebooted from a bugcheck. The bugcheck was: 0x0000001e (0xffffffffc0000005, 0xfffff8042a60022d, 0x0000000000000000, 0x00000c2be9d5fcc6). A dump was saved in: C:\Windows\MEMORY.DMP. Report Id: xxx.

EDIT: No issues in any other containers or VMs. Only Windows.
 
> • This Stop error describes a KMODE_EXCEPTION_NOT_HANDLED issue.
> • The parameters in this Stop error message vary, depending on the configuration of the computer.
> • Not all "STOP: 0x0000001E" errors are caused by this issue.
>
> Cause
>
> This issue occurs because of an NTFS file system memory leak issue. Specifically, when an application opens a file that has an oplock on it for modification in a transaction, NTFS will break the oplock and will leak nonpaged pool memory. This causes excessive memory usage and memory allocation failures.
 
So what's the vhost storage? Did you check the drives?
What driver/storage do you use? VirtIO?

I would recommend trying to switch to SCSI. To do that you need to activate that driver first. You can activate it via the registry or (easier for a single machine) via the following:

Code:
Simple trick:
- add one SCSI hard drive with 1 GB to the machine while it runs
- shut down the VM
- detach that new SCSI drive and delete it (so you're not getting confused)
- detach all the other disks
- re-add them as SCSI devices (if you use SSDs you can also enable discard and SSD emulation)
- set the boot option of the guest to boot from the SCSI disk

Done - Windows will now boot with the correct driver and use the KVM virtual disk as a SCSI device.
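Roughly the same switch can also be done from the host CLI with qm once the guest has loaded the SCSI driver. Only a sketch - the VM ID, storage name and volume name are placeholders and will differ on your system:

Code:
# Use the VirtIO SCSI controller
qm set 100 --scsihw virtio-scsi-pci

# Detach the old disk entry (example: virtio0); the volume then shows up as "unused"
qm set 100 --delete virtio0

# Re-attach the same volume on the SCSI bus; discard/ssd flags only make sense on SSD-backed storage
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on,ssd=1

# Boot from the SCSI disk
qm set 100 --bootdisk scsi0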

Also check the disk for errors.

Also download the SPICE guest tools and use SPICE; they bring a bunch of drivers as well. Maybe reinstalling will help there.
What guest OS did you define in your guest options in Proxmox? Did you set the right Windows version?


As far as Windows goes, Proxmox is running fine with Win2012R2. I have a lot of Proxmox hosts running all over the place with 2012R2 guests.
In one cluster I now run over 80 2012R2 guests without an issue. So it's not Proxmox or KVM.
 
OP and I are having issues with Windows 10, not Windows 2012. Very different beasts.

For the Windows 10 VM, I'm using VirtIO SCSI. I specified Windows for the guest OS type. Yes, the correct Windows version.

I've read all the Microsoft pages on those bugcheck codes and everything seems to point to a driver issue. The fact that someone else is having the same issue all of a sudden leads me to believe it's related to a recent Windows update and the VirtIO drivers. I've tried full reinstalls of Windows with different versions of the VirtIO drivers and am seeing the same results. I may have to try to find an older Windows 10 ISO and not give the test VM a network card (to prevent any updates during install). I'm curious whether that might give different results.

One of the bugcheck codes does seem to point to a memory or storage issue, but I'm confident this isn't the case. I ran memtester the other day and it found no issues with my RAM. I've also run a bunch of hard drive checks with zero issues. SMART shows everything is healthy. Everything is on ZFS, and Proxmox doesn't show any errors in storage. All other containers and VMs work just fine; only Windows 10 has had problems over the last few weeks.
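For anyone repeating that check: memtester is in the Debian repositories, and a single pass over a few GB looks like this (size and pass count are arbitrary examples):

Code:
# Install memtester, then lock and test 4 GB of host RAM for one pass (needs that much free memory)
apt install memtester
memtester 4096M 1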
 
Use the 2012R2 drivers then; they should work on 10.
Also, 10 isn't that different from 2012R2 at all. It's the same new codebase, based on Windows 8.

I'm pretty confident that the issue isn't really Windows but the drivers.
Which drivers exactly do you use? Beta or stable?
 
I have also been having this problem with a Windows 10 Pro VM on Proxmox 6 for a month or so. Before that, on Proxmox 5.4, everything ran perfectly fine (I upgraded to Proxmox 6 and it worked well there for some time too). I get a bluescreen every few days, it seems.

I already tried:

- setting the CPU from "host" to "kvm64"
- changing the network adapter from "virtio" to "e1000" (one bluescreen complained about ndis.sys)
- setting the OS type to "Vista/Server 2008"

I have a Server 2019 VM on the same system and it works perfectly.

I also have a separate cluster with three nodes still running Proxmox 5.4, and all the Windows 10 VMs run perfectly well on them with the same set of VirtIO drivers and QXL/SPICE drivers.
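For completeness, those three changes can also be made from the host with qm - a sketch only; 100 is a placeholder VM ID, and the MAC address should be copied from the existing config so Windows doesn't detect a new NIC:

Code:
# CPU type from "host" to the generic kvm64 model
qm set 100 --cpu kvm64

# NIC model from virtio to e1000, keeping the existing bridge and MAC (placeholder MAC shown)
qm set 100 --net0 e1000=DE:AD:BE:EF:00:01,bridge=vmbr0

# Guest OS type; win10 is the usual value for a Windows 10 guest
qm set 100 --ostype win10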
 
Seems that the latest kernel fixes it: 5.0.21-2-pve

At least there have been no more reboots since installing it two days ago.
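For anyone checking whether they are already on that kernel, or pulling it in, the usual path on the host is roughly:

Code:
# Currently running kernel
uname -r

# Fetch and install the latest packages (including pve-kernel updates), then reboot into the new kernel
apt update
apt dist-upgrade
reboot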
 

I just updated my machine last night, and it hadn't restarted when I logged in this morning. I hope this is true.
 
Can confirm that the latest kernel fixed this. No reboots for a while now. I thought I had one the other day, but it was just a Windows update.
 
