Blue screen with 5.1
Edit /etc/default/grub, add scsi_mod.use_blk_mq=n to the kernel command line, and run update-grub afterwards.
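As a sketch, on a stock Proxmox install the parameter would typically be appended to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub; the existing options on your system may differ from the "quiet" shown here:

```shell
# /etc/default/grub -- append to the existing default options
GRUB_CMDLINE_LINUX_DEFAULT="quiet scsi_mod.use_blk_mq=n"
```

Then run update-grub and reboot so the parameter takes effect on the next boot.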
Sorry for my bad Linux knowledge; my grub file is attached, and there is no kernel line in it.
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'

GRUB_DISTRIBUTOR="Proxmox Virtual Environment"

# Disable os-prober, it might add menu entries for each guest

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)

# Uncomment to disable graphical terminal (grub-pc only)

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux

# Disable generation of recovery mode menu entries

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
Where to put it?
Thank you, I'll try and report back. I tried to read about this option, but it is above my technical knowledge. Can you give a short summary of what happens if I use this option?
Crashes here too. Fresh install of Proxmox 5.1.
  • Windows 2008 R2 (mostly freeze, but today I got a DRIVER_IRQL_NOT_LESS_OR_EQUAL blue screen)
  • Windows 10, always blue screen.
All drivers are VirtIO 0.1.141. I changed the NIC driver on Windows 2008 R2 to Intel, and the blue screens started after that; before, it would only freeze.
Sometimes it takes days to crash, sometimes only a few hours of uptime (3 crashes on Thursday).
The CPU is a Xeon Silver 4114. intel-microcode does not contain newer microcode, and use_blk_mq was already 0.

The Windows 10 VM was running on Proxmox 5.0 without issues before, but also on an older CPU. After the upgrade to 5.1 on that same CPU, the blue screens started; moving to the new host makes no difference in stability. How can I do that?
root@host04:~# sysctl -w scsi_mod.use_blk_mq=n
sysctl: cannot stat /proc/sys/scsi_mod/use_blk_mq: No such file or directory
@canove: Did you downgrade? How do you manage to start with the old kernel? I've to intercept during booting and change kernel in GRUB. I didn't find a possibility yet to keep this persistent for now...
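The reason the sysctl command above fails: use_blk_mq is a parameter of the scsi_mod kernel module, not a sysctl, so it lives under /sys/module rather than /proc/sys and has to be set on the kernel command line at boot. A hedged sketch for inspecting the currently active value (kernels of that era expose it; newer kernels dropped the parameter):

```shell
# use_blk_mq is a module parameter, not a sysctl, so "sysctl -w" cannot
# set it; it must be passed on the kernel command line at boot time.
# The currently active value (if this kernel still exposes it) is here:
p=/sys/module/scsi_mod/parameters/use_blk_mq
if [ -r "$p" ]; then
    cat "$p"                                        # prints Y or N
else
    echo "use_blk_mq not exposed on this kernel"
fi
```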

I installed the old kernel and changed the grub config, like this:

1) Find the $menuentry_id_option for the submenu:
grep submenu /boot/grub/grub.cfg
submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-41da66c6-6e16-476f-bd6c-0df8515acf73' {

2) Find the $menuentry_id_option for the menu entry for the kernel you want to use:
grep gnulinux /boot/grub/grub.cfg
... menuentry 'Proxmox Virtual Environment GNU/Linux, with Linux 4.10.17-4-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.10.17-4-pve-advanced-41da66c6-6e16-476f-bd6c-0df8515acf73' {...

3) Comment out your current GRUB_DEFAULT in /etc/default/grub and replace it with the submenu's $menuentry_id_option from step one and the selected kernel's $menuentry_id_option from step two, separated by ">".

In my case the modified GRUB_DEFAULT is:
GRUB_DEFAULT="gnulinux-advanced-41da66c6-6e16-476f-bd6c-0df8515acf73>gnulinux-4.10.17-4-pve-advanced-41da66c6-6e16-476f-bd6c-0df8515acf73"

4) Update GRUB to apply the changes. On Debian this is done like so:
sudo update-grub
Done. Now when you boot, the advanced menu entry should have an asterisk and you should boot into the selected kernel. You can confirm this with uname:
uname -r
This should report 4.10.17-4-pve.


I've downgraded the kernel version, but I've had problems booting the qm VMs. It looks like the old kernel maps the zvols differently, as /dev/zvol is now missing, so I'm not able to boot the Windows VM.

I'm running Proxmox 5.1; the kernel was 4.13.4-1-pve, and I've tried to downgrade to 4.10.17-4-pve.

Is there any other fix I can try?

Or is there a proper way to downgrade to proxmox 5.0 ?


There is no easy way to downgrade to 5.0, as there are changes in the way ZFS works, so downgrading is not an option. We have to wait for a fix, which is really bad!
@canove: Thank you very much for the GRUB instructions, I'm now able to boot the old kernel (where everything works) by default! I'm still interested in whether this bug can be solved; I think wolfgang from Proxmox is still looking into it...
In the last days I migrated a server with Proxmox 4.4 to 5.1, there are 2 Windows VMs:
- an old Windows XP with IDE disk driver and LSI controller
- a Windows 8.1 with VirtIO controller and VirtIO disk drivers
I backed them up, installed the server from scratch with PVE 5.1 installer (ZFS raid 10 on 4x2TB SATA HDDs) and restored the backups. The Windows XP VM was starting without problems, the Windows 8.1 VM crashed with BSOD and CRITICAL_STRUCTURE_CORRUPTION error.
I changed the boot disk from VirtIO to SCSI but was still having problems. So I changed the boot disk to SATA, added a temporary disk on SCSI, and the VM started. From within the Windows 8.1 VM I updated the controller driver (not the disk driver, the controller driver) to VirtIO 0.1.141, shut down, changed the boot disk back to SCSI and voilà, the VM is working without problems.
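A sketch of the bus swap described above, in terms of the VM config file; the VMID (100), storage name (local-zfs) and disk size here are illustrative, not from the thread — the bus of a disk in Proxmox is determined by the key of its line in /etc/pve/qemu-server/<vmid>.conf:

```shell
# /etc/pve/qemu-server/100.conf (illustrative VMID and storage names)
# original boot disk attached on the VirtIO bus:
#   virtio0: local-zfs:vm-100-disk-1,size=64G
# temporarily attach it as SATA so Windows can still boot,
# with a spare disk on SCSI so the VirtIO SCSI controller shows up:
#   sata0: local-zfs:vm-100-disk-1,size=64G
#   scsi1: local-zfs:vm-100-disk-2,size=1G
# after updating the controller driver inside Windows, move the
# boot disk back to SCSI (and adjust the bootdisk: line to match):
#   scsi0: local-zfs:vm-100-disk-1,size=64G
```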

I have to add that this server has an Intel i3-4160; I installed the intel-microcode package, and the VM has host as its CPU type.

Thank you for your feedback, but I was not able to downgrade to an older kernel, as all the zvols are missing and not mapped into /dev/zvol. That keeps my VMs from starting. Even 4.13.3-1-pve won't map the zvols.

Please read the posted thread: we built a new 4.10 kernel with ZFS 0.7.3, so you should be able to start again.
Ah great, I didn't see that. I've successfully downgraded to 4.10.17-5-pve, and my machines do boot!
Thank you so far... I'll report back if we still have BSODs.
@mbaldini: currently it seems that there are two issues. The "easy" issue can be fixed by upgrading the VirtIO drivers to the latest stable version.
The "difficult" issue seems to be related to some CPU models; wolfgang from the Proxmox team is currently looking into it.

OK, it works so far, no BSODs anymore! The VMs have been running for a day without any problems.
Thank you for bringing up a workaround!
dpkg -i pve-kernel-4.10.17-5-pve_4.10.17-25_amd64.deb

