Windows VM issues when CPU set to Host

Shadow Sysop

Member
Mar 7, 2021
I'm running into an odd issue. When creating a Windows VM with the CPU type set to host, the installation proceeds normally but then ends in an infinite boot loop when the VM restarts after install. With kvm64 or qemu64 this does not happen. I've also noticed that if I complete the install with kvm64, I can power down the VM, change the CPU type to host, and then it boots and seems to work. Is this normal behavior? I'm far from an expert, but since the hypervisor is a Debian-based Linux system, I wonder if running the VM CPU as host exposes a kernel issue (just guessing). Does anyone have any thoughts?
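For anyone hitting the same thing, the workaround described above can be done from the PVE host shell with the `qm` CLI (the VM ID 100 below is a placeholder):

```shell
# Install Windows with a generic CPU model first (VM ID 100 is a placeholder)
qm set 100 --cpu kvm64

# ... complete the Windows installation inside the guest, then shut it down ...
qm shutdown 100

# Switch the CPU type to host and boot again
qm set 100 --cpu host
qm start 100
```

The same change can of course be made in the web UI under the VM's Hardware > Processors panel.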
 
Using host should normally work. What version of Proxmox VE are you running exactly?
Code:
pveversion -v
 
I'm getting KMODE exception blue screens when trying to use host on a Windows VM.

proxmox-ve: 6.4-1 (running kernel: 5.4.119-1-pve)
pve-manager: not correctly installed (running version: 6.4-9/5f5c0e3f)
pve-kernel-5.4: 6.4-3
pve-kernel-helper: 6.4-3
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.10-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-6
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
Hi, I'd suggest upgrading your PVE version first! This should also fix the broken package:
pve-manager: not correctly installed (running version: 6.4-9/5f5c0e3f)
Use
Code:
apt update
apt full-upgrade
You can also post the output so that we can check whether the PVE repositories are configured correctly.
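To check the configured repositories yourself, you can list the APT sources on the host (these are the standard Debian/PVE locations):

```shell
# Show the main APT sources list
cat /etc/apt/sources.list

# Show any additional entries, e.g. the PVE no-subscription or enterprise repo
cat /etc/apt/sources.list.d/*.list
```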
 
root@server1:~# apt update
Hit:1 http://security.debian.org buster/updates InRelease
Hit:2 http://ftp.us.debian.org/debian buster InRelease
Get:3 http://ftp.us.debian.org/debian buster-updates InRelease [51.9 kB]
Hit:5 http://download.proxmox.com/debian buster InRelease
Ign:4 https://linux.dell.com/repo/community/debian jessie InRelease
Hit:6 https://linux.dell.com/repo/community/debian jessie Release
Fetched 51.9 kB in 1s (49.5 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
4 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@server1:~# apt full-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages were automatically installed and are no longer required:
libirs-export161 libisccfg-export163 policycoreutils selinux-utils
Use 'apt autoremove' to remove them.
The following NEW packages will be installed:
pve-kernel-5.4.124-1-pve
The following packages will be upgraded:
libproxmox-backup-qemu0 proxmox-widget-toolkit pve-kernel-5.4 pve-kernel-helper
4 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 62.4 MB of archives.
After this operation, 289 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://download.proxmox.com/debian buster/pve-no-subscription amd64 proxmox-widget-toolkit all 2.6-1 [79.8 kB]
Get:2 http://download.proxmox.com/debian buster/pve-no-subscription amd64 libproxmox-backup-qemu0 amd64 1.1.0-1 [1,703 kB]
Get:3 http://download.proxmox.com/debian buster/pve-no-subscription amd64 pve-kernel-5.4.124-1-pve amd64 5.4.124-1 [60.7 MB]
Get:4 http://download.proxmox.com/debian buster/pve-no-subscription amd64 pve-kernel-5.4 all 6.4-4 [3,956 B]
Get:5 http://download.proxmox.com/debian buster/pve-no-subscription amd64 pve-kernel-helper all 6.4-4 [11.3 kB]
Fetched 62.4 MB in 7s (8,794 kB/s)
Reading changelogs... Done
(Reading database ... 81340 files and directories currently installed.)
Preparing to unpack .../proxmox-widget-toolkit_2.6-1_all.deb ...
Unpacking proxmox-widget-toolkit (2.6-1) over (2.5-6) ...
Preparing to unpack .../libproxmox-backup-qemu0_1.1.0-1_amd64.deb ...
Unpacking libproxmox-backup-qemu0 (1.1.0-1) over (1.0.3-1) ...
Selecting previously unselected package pve-kernel-5.4.124-1-pve.
Preparing to unpack .../pve-kernel-5.4.124-1-pve_5.4.124-1_amd64.deb ...
Unpacking pve-kernel-5.4.124-1-pve (5.4.124-1) ...
Preparing to unpack .../pve-kernel-5.4_6.4-4_all.deb ...
Unpacking pve-kernel-5.4 (6.4-4) over (6.4-3) ...
Preparing to unpack .../pve-kernel-helper_6.4-4_all.deb ...
Unpacking pve-kernel-helper (6.4-4) over (6.4-3) ...
Setting up proxmox-widget-toolkit (2.6-1) ...
Setting up pve-kernel-helper (6.4-4) ...
Setting up pve-manager (6.4-9) ...
Setting up pve-kernel-5.4.124-1-pve (5.4.124-1) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.4.124-1-pve /boot/vmlinuz-5.4.124-1-pve
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 5.4.124-1-pve /boot/vmlinuz-5.4.124-1-pve
update-initramfs: Generating /boot/initrd.img-5.4.124-1-pve
run-parts: executing /etc/kernel/postinst.d/proxmox-auto-removal 5.4.124-1-pve /boot/vmlinuz-5.4.124-1-pve
run-parts: executing /etc/kernel/postinst.d/zz-proxmox-boot 5.4.124-1-pve /boot/vmlinuz-5.4.124-1-pve
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 5.4.124-1-pve /boot/vmlinuz-5.4.124-1-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.124-1-pve
Found initrd image: /boot/initrd.img-5.4.124-1-pve
Found linux image: /boot/vmlinuz-5.4.119-1-pve
Found initrd image: /boot/initrd.img-5.4.119-1-pve
Found linux image: /boot/vmlinuz-5.4.114-1-pve
Found initrd image: /boot/initrd.img-5.4.114-1-pve
Found linux image: /boot/vmlinuz-5.4.73-1-pve
Found initrd image: /boot/initrd.img-5.4.73-1-pve
Found linux image: /boot/vmlinuz-4.19.0-17-amd64
Found initrd image: /boot/initrd.img-4.19.0-17-amd64
Found linux image: /boot/vmlinuz-4.19.0-16-amd64
Found initrd image: /boot/initrd.img-4.19.0-16-amd64
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
done
Setting up pve-kernel-5.4 (6.4-4) ...
Setting up libproxmox-backup-qemu0 (1.1.0-1) ...
Processing triggers for libc-bin (2.28-10) ...
root@server1:~#
 
Thanks, repository and upgrades look OK. Regarding
that maybe there is a kernel issue (just guessing)? Does anyone have any thoughts?
what you can try is installing a more recent kernel (5.11 is shipped as an option for PVE 6.4):
Code:
apt install pve-kernel-5.11
and reboot.
The CPU type determines which CPU flags are passed to your guest. host means all flags from the host, whereas the others provide a reduced set of flags. So we could try some other CPU types as well. Which CPU do you have?
Code:
lscpu
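After installing the 5.11 kernel and rebooting, it's worth confirming the host actually came up on the new kernel:

```shell
# Confirm which kernel the host is running after the reboot
uname -r

# List the installed PVE kernel packages
dpkg --list | grep pve-kernel
```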
 
Dual CPUs, Xeon X5690s in a PowerEdge R710 with an H700 RAID card. Also of note: this only becomes an issue with a fresh Windows installation. After the installation completes, it reboots into endless fatal errors (KMODE exceptions). If the install is done with, say, kvm64 as the processor and then switched to host, it boots up fine and I've not yet come across any issues. I'm just concerned it may be indicative of a bigger issue, as I can reproduce it simply by creating a Windows VM with the CPU set to host.

"is installing a more recent kernel (5.11 is shipped with PVE)"

Wouldn't my kernel be the most recent already, since I keep my PVE updated?
 
In PVE 6
  • 5.4 is default and
  • 5.11 is opt-in
In PVE 7
  • 5.11 is default
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.4

Are your other VM settings the default settings? Because for me the installation worked both with CPU=host (Xeon(R) CPU E3-1231) and CPU=Westmere
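To compare a VM against the defaults, the full configuration can be dumped on the host (VM ID 100 is a placeholder):

```shell
# Print the VM's configuration as PVE sees it
qm config 100

# Alternatively, read the config file directly from the cluster filesystem
cat /etc/pve/qemu-server/100.conf
```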
 

agent: 1
boot: order=virtio0;ide2;net0
cipassword: XXXXXXX
ciuser: Windows10
cores: 4
cpu: host
ide0: local-lvm2:vm-1011-cloudinit,media=cdrom
ide2: none,media=cdrom
ipconfig0: gw=47.22.XXX.XXX,ip=47.22.XXX.XXX/27
machine: pc-i440fx-5.2
memory: 4096
name: 210-win10
net0: virtio=A6:FC:5C:82:C3:13,bridge=vmbr0
numa: 1
ostype: win10
scsihw: virtio-scsi-pci
serial1: socket
smbios1: uuid=63e5c74f-285c-48d1-8fdb-2c10148c755b
sockets: 1
virtio0: local-lvm2:vm-1011-disk-0,size=50G
vmgenid: 1bbc7c87-7cc9-46f9-b8e3-bXXXXXXXXX


I'm looking at upgrading from PVE 6 to 7, but I'm concerned about stopping VMs for a prolonged period of time. Obviously installing the 5.11 kernel would require a reboot, also resulting in downtime. I could migrate VMs to a different node and do one node at a time, I guess.
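That rolling approach can be done with live migration from the CLI, assuming shared or replicated storage between nodes (node names and the VM ID are placeholders):

```shell
# Live-migrate a running VM off this node before upgrading/rebooting it
qm migrate 100 node2 --online

# ... upgrade and reboot the now-empty node, then move the VM back ...
qm migrate 100 node1 --online
```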
 
