Issues with Windows 10 VM

Tahsin

Well-Known Member
Mar 24, 2018

I am having issues with my Windows VM.

root@pve:~# pveversion -v
proxmox-ve: 5.1-41 (running kernel: 4.13.13-6-pve)
pve-manager: 5.1-46 (running version: 5.1-46/ae8241d4)
pve-kernel-4.13.13-6-pve: 4.13.13-41
pve-kernel-4.13.13-5-pve: 4.13.13-38
corosync: 2.4.2-pve3
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-common-perl: 5.0-28
libpve-guest-common-perl: 2.0-14
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-17
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 2.1.1-3
lxcfs: 2.0.8-2
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-11
pve-cluster: 5.0-20
pve-container: 2.0-19
pve-docs: 5.1-16
pve-firewall: 3.0-5
pve-firmware: 2.0-3
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.9.1-9
pve-xtermjs: 1.0-2
qemu-server: 5.0-22
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.6-pve1~bpo9

The VM settings are as follows:

root@pve:~# cat /etc/pve/qemu-server/105.conf
agent: 1
args: -device vfio-pci,host=00:02.0,addr=0x18,x-igd-opregion=on
balloon: 0
bios: ovmf
boot: cd
bootdisk: virtio0
cores: 3
cpu: host
efidisk0: local-zfs:vm-105-disk-2,size=128K
hotplug: usb
ide2: none,media=cdrom
memory: 3200
name: Win10-Test
net0: virtio=D6:72:CE:CC:00:7B,bridge=vmbr0,queues=2
numa: 0
ostype: win10
scsihw: virtio-scsi-single
smbios1: uuid=f6ec9974-49ed-428c-ad76-115b5ea49c8c
sockets: 1
vga: std
virtio0: local-zfs:vm-105-disk-1,iothread=1,size=64G

The VM also freezes a lot during startup and shutdown. On startup it sometimes freezes on "Start Boot Options", and after a restart it will definitely freeze on that screen. On shutdown it may never finish shutting down. I am using all the latest VirtIO drivers. After it freezes I cannot stop the VM; it says "TASK ERROR: can't lock file '/var/lock/qemu-server/lock-105.conf' - got timeout". Sometimes I have to shut down the entire host to get it unstuck.
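
(Before rebooting the whole host, a couple of standard qm commands are usually worth trying on a stuck guest; this is a minimal sketch using VMID 105 from the config above, and the PID file path is the usual qemu-server location:)

Code:
# clear the stale config lock named in the TASK ERROR
qm unlock 105
# then retry a normal stop
qm stop 105
# last resort: kill the hung KVM process directly via its PID file
kill -9 $(cat /var/run/qemu-server/105.pid)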

Is there a place where I can look at the logs to find out why it is getting stuck?

My end goal is graphics card passthrough, but I cannot even get Windows to run properly.
 
Anything in the syslog of the host or the guest?
 
The Syslog tab on the node, for example, or simply on the command line: journalctl

e.g. for the log since the last boot: journalctl -b
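
(If journalctl -b is too noisy, a couple of standard variants narrow things down; the grep pattern is only an illustrative filter for VM 105:)

Code:
journalctl -b                        # everything since the last host boot
journalctl -f                        # follow live while reproducing the freeze
journalctl -b | grep -i 'qemu\|105'  # rough filter for QEMU/VM 105 lines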
 
I ran journalctl -b; below are just the last few log lines from while the KVM VM was in its boot-loop state.

Code:
Apr 10 21:37:00 pve systemd[1]: Starting Proxmox VE replication runner...
Apr 10 21:37:01 pve systemd[1]: Started Proxmox VE replication runner.
Apr 10 21:38:00 pve systemd[1]: Starting Proxmox VE replication runner...
Apr 10 21:38:01 pve systemd[1]: Started Proxmox VE replication runner.
Apr 10 21:39:00 pve systemd[1]: Starting Proxmox VE replication runner...
Apr 10 21:39:01 pve systemd[1]: Started Proxmox VE replication runner.
Apr 10 21:39:03 pve audit[8803]: AVC apparmor="DENIED" operation="file_lock" profile="lxc-container-default-cgns" pid=8803 comm="(ionclean)" family="unix" sock_type="dgram" protocol=0 addr=none
Apr 10 21:39:03 pve audit[8803]: AVC apparmor="DENIED" operation="file_lock" profile="lxc-container-default-cgns" pid=8803 comm="(ionclean)" family="unix" sock_type="dgram" protocol=0 addr=none
Apr 10 21:39:03 pve kernel: audit: type=1400 audit(1523414343.199:6845): apparmor="DENIED" operation="file_lock" profile="lxc-container-default-cgns" pid=8803 comm="(ionclean)" family="unix" sock_type="dgram" protocol=0 addr=none
Apr 10 21:39:03 pve kernel: audit: type=1400 audit(1523414343.199:6846): apparmor="DENIED" operation="file_lock" profile="lxc-container-default-cgns" pid=8803 comm="(ionclean)" family="unix" sock_type="dgram" protocol=0 addr=none
Apr 10 21:39:03 pve kernel: audit: type=1400 audit(1523414343.199:6847): apparmor="DENIED" operation="file_lock" profile="lxc-container-default-cgns" pid=8803 comm="(ionclean)" family="unix" sock_type="dgram" protocol=0 addr=none
Apr 10 21:39:03 pve audit[8803]: AVC apparmor="DENIED" operation="file_lock" profile="lxc-container-default-cgns" pid=8803 comm="(ionclean)" family="unix" sock_type="dgram" protocol=0 addr=none
Apr 10 21:39:03 pve audit[8803]: AVC apparmor="DENIED" operation="file_lock" profile="lxc-container-default-cgns" pid=8803 comm="(ionclean)" family="unix" sock_type="dgram" protocol=0 addr=none
Apr 10 21:39:03 pve kernel: audit: type=1400 audit(1523414343.199:6848): apparmor="DENIED" operation="file_lock" profile="lxc-container-default-cgns" pid=8803 comm="(ionclean)" family="unix" sock_type="dgram" protocol=0 addr=none
Apr 10 21:40:00 pve systemd[1]: Starting Proxmox VE replication runner...
Apr 10 21:40:01 pve systemd[1]: Started Proxmox VE replication runner.

The guest just boot-loops in SeaBIOS: the "Proxmox logo", then "starting from hard drive", and it will do that indefinitely. In UEFI mode it will get stuck at "Start Boot Options", 75% of the way through.
 
The guest just boot-loops in SeaBIOS: the "Proxmox logo", then "starting from hard drive", and it will do that indefinitely. In UEFI mode it will get stuck at "Start Boot Options", 75% of the way through.
How did you install the VM? You cannot simply switch between UEFI and SeaBIOS without reconfiguring (or reinstalling) the guest OS.
 
So you are saying that a new VM with the config from the original post (minus the args part) does not boot correctly?
VirtIO drivers are installed? Windows ISO verified?
 
Correct. I used the latest stable VirtIO drivers. Windows shows no yellow exclamation mark on any driver. The Windows ISO was downloaded directly from Microsoft and verified. It installs just fine in ESXi on the same hardware.
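
(For reference, "verified" here usually means comparing the ISO's checksum against the hash Microsoft publishes; the filename below is only a placeholder:)

Code:
sha256sum Win10_x64.iso   # compare the output against Microsoft's published hash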
 
I think I have the same issue, not sure.

It happens only with Windows 10 (all builds) and Windows Server 2016, from clean ISOs from the official site; all other Windows versions work.
Tested with VirtIO 0.1.129, 0.1.141 and 0.1.149, in SeaBIOS and UEFI mode, with and without PCI passthrough.

The result is the same: the VM boots correctly, but if I reboot it (from the web interface or the Start menu), it gets stuck on the Proxmox splash screen. In UEFI mode I can enter the BIOS and manually try to boot from EFI/boot/bootx64.efi, but it gets stuck on a black screen with a cursor. Only stopping and restarting the VM resolves this.

I upgraded the kernel to 4.15 (pve-no-subscription) and QEMU from pve-test; the issue persists.

BUT, all of this happens ONLY with the host CPU type (and Windows 10).
If I select Westmere, qemu64 or kvm64 => it works (switching the type is sketched below).

I suppose the CPU is not being reset?

No relevant entries in journalctl -b.

I have a Pentium G4600T, 8 GB ECC RAM and an Asus P10S-I.
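
(Switching the CPU type can be done in the GUI under Hardware or, as a minimal command-line sketch, with qm set; the change takes effect on the next VM start. VM 111 is taken from the config below:)

Code:
qm set 111 --cpu kvm64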

edit:

root@hypercromat:/home/cromat# cat /etc/pve/qemu-server/111.conf
agent: 1
bios: ovmf
boot: c
bootdisk: scsi0
cores: 4
cpu: host
efidisk0: local:111/vm-111-disk-2.qcow2,size=128K
ide2: none,media=cdrom
machine: q35
memory: 4096
name: Win-10
net0: virtio=A6:44:9E:84:AE:AD,bridge=vmbr0
numa: 0
ostype: win10
scsi0: local:111/vm-111-disk-1.qcow2,cache=none,discard=on,size=40G
scsihw: virtio-scsi-pci
smbios1: uuid=cdd80cde-0fab-4212-8282-5b723ccad749
sockets: 1
vga: qxl

root@hypercromat:/home/cromat# pveversion -v
proxmox-ve: 5.1-42 (running kernel: 4.15.15-1-pve)
pve-manager: 5.1-49 (running version: 5.1-49/1e427a54)
pve-kernel-4.13: 5.1-44
pve-kernel-4.15: 5.1-3
pve-kernel-4.15.15-1-pve: 4.15.15-6
pve-kernel-4.13.16-2-pve: 4.13.16-47
corosync: 2.4.2-pve3
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-30
libpve-guest-common-perl: 2.0-14
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-18
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-2
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-14
pve-cluster: 5.0-24
pve-container: 2.0-21
pve-docs: 5.1-17
pve-firewall: 3.0-7
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-2
qemu-server: 5.0-24
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3

edit2

daemon.log, after starting the VM and restarting it from the Start menu:
root@hypercromat:/home/cromat# truncate -s 0 /var/log/daemon.log
root@hypercromat:/home/cromat# cat /var/log/daemon.log | grep 111
root@hypercromat:/home/cromat# cat /var/log/daemon.log | grep 111
Apr 12 13:09:22 hypercromat pvedaemon[12837]: start VM 111: UPID:hypercromat:00003225:01068EE0:5ACF3E62:qmstart:111:cromat@pam:
Apr 12 13:09:22 hypercromat systemd[1]: Started 111.scope.
Apr 12 13:09:22 hypercromat systemd-udevd[12861]: Could not generate persistent MAC address for tap111i0: No such file or directory

Other info: the CPU is stuck at 100% of one core (25% overall).
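
(A quick way to confirm what is spinning is to watch the VM's QEMU process per thread; the PID file path is the standard qemu-server location:)

Code:
top -H -p $(cat /var/run/qemu-server/111.pid)   # -H shows per-thread CPU usage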
 
BUT, all of this happens ONLY with the host CPU type (and Windows 10).
If I select Westmere, qemu64 or kvm64 => it works.


It is the exact problem I am having. I changed the CPU to kvm64 as you mentioned, and it reboots just fine. I am running the latest version of Proxmox as of today. However, I need the host option, since I need some CPU flags that kvm64 doesn't expose.
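
(To find a named model that carries the needed flags without using host, QEMU can list every model and flag it supports; Skylake-Client below is only an example, and assumes your qemu-server version accepts that model name:)

Code:
# list the CPU models and flags this QEMU build supports
qemu-system-x86_64 -cpu help
# then pin the VM to a close named model instead of 'host', e.g.:
qm set 105 --cpu Skylake-Client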
 
Honestly, I cannot reproduce that with that config... it works without problems here.
What CPU do you have?
 
I have the exact same issue running Proxmox on an UP Squared (Intel N4200 CPU).
Also, if I change the CPU from host to kvm64, the issue is gone. So it really does seem to depend on the host CPU for the bug to appear.
 
Update!!

It really is the CPU that is the problem!

Today I swapped my Pentium G4600T (Kaby Lake) for a Xeon E3-1230 v6 (Kaby Lake), and now the Windows 10 and Server 2016 VMs reboot successfully with the host CPU type.
 
I have the same problem with a Windows Server 2016 guest. It keeps booting forever, with one vcore at 100% and only 100 MB of RAM consumed; after an hour of running, only the Windows splash screen is shown. I have a cluster of 3 nodes without subscription, on Virtual Environment 5.3-12. This guest was installed without issues many months ago, but now it is impossible to boot. The host is a 40 x Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz (2 sockets). The guest has the OS type set to Win 10.

The most curious thing is that when I move the guest to another host in the cluster, with 24 x Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz (2 sockets), it boots flawlessly!

Is there any way to patch this? Is it a kernel issue? Windows patches? Spectre/Meltdown patches?

Thanks in advance

PS: I can report more info if needed.
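
(One hedged way to see what differs between the two hosts is to diff the CPU flags each node exposes; this is only a diagnostic sketch, and the node names in the diff are placeholders:)

Code:
# run on each node, then diff the two files
grep -m1 'model name' /proc/cpuinfo
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > /tmp/flags-$(hostname)
diff /tmp/flags-node1 /tmp/flags-node2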
 
