[SOLVED] Windows 11 VM Starting / Backup issues -- "failed: got timeout"

cooma

Member
Oct 6, 2021
I received an alert that a backup failed for one of my Windows 11 VMs, which was in a stopped state at the time of the backup. The backup error was "failed: got timeout". Trying to start the VM produced the same error. I can resolve the issue short-term, but it keeps coming back. Any ideas on the root cause, or a way to apply a permanent fix?

Steps to work around the error:
1) Made sure the VM was stopped
2) Issued the command: qm set 109 --lock suspended

After issuing the qm set command, the VM starts successfully. I also manually kicked off the backup, and it completed successfully. But after stopping the VM, I get the same error the next time I try to start it, which requires issuing the qm set command once again.
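
For reference, the exact sequence I run from the node shell each time is below (109 is my VMID; adjust for your own VM, and the final qm start can just as well be a start from the GUI):

# VM is already in the stopped state at this point
qm set 109 --lock suspended
# the next start then works, but only once
qm start 109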

Any ideas as to what might be causing this issue?

Error when starting the VM:
swtpm_setup: Not overwriting existing state file.
TASK ERROR: start failed: command '/usr/bin/kvm -id 109 -name WindowsMain -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/109.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/109.pid -daemonize -smbios 'type=1,uuid=8df658b2-74c2-4486-83bd-a9b56ad6f2d4' -drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,size=131072,file=/dev/zvol/Local-Proxmox-1TB/vm-109-disk-2' -smp '8,sockets=1,cores=8,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/109.vnc,password=on' -no-hpet -cpu 'host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vendor_id=proxmox,hv_vpindex,kvm=off,+kvm_pv_eoi,+kvm_pv_unhalt' -m 32000 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'vmgenid,guid=e3612ca5-1389-4e9d-ad38-8e6824752e05' -device 'vfio-pci,host=0000:00:02.0,id=hostpci0,bus=pci.0,addr=0x10' -device 'ich9-intel-hda,id=audiodev0,bus=pci.2,addr=0xc' -device 'hda-micro,id=audiodev0-codec0,bus=audiodev0.0,cad=0,audiodev=spice-backend0' -device 'hda-duplex,id=audiodev0-codec1,bus=audiodev0.0,cad=1,audiodev=spice-backend0' -audiodev 'spice,id=spice-backend0' -chardev 'socket,id=tpmchar,path=/var/run/qemu-server/109.swtpm' -tpmdev 'emulator,id=tpmdev,chardev=tpmchar' -device 'tpm-tis,tpmdev=tpmdev' -device 'qxl-vga,id=vga,vgamem_mb=56,ram_size_mb=224,vram_size_mb=112,bus=pcie.0,addr=0x1' -chardev 'socket,path=/var/run/qemu-server/109.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -spice 'tls-port=61000,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:3b4f2e79fbdd' -drive 'file=/mnt/pve/Network-Proxmox-Data/template/iso/Files.iso,if=none,id=drive-ide0,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=101' -drive 'file=/mnt/pve/Network-Proxmox-Data/template/iso/virtio-win-0.1.208.iso,if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=102' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/zvol/Local-Proxmox-1TB/vm-109-disk-0,if=none,id=drive-scsi0,cache=writeback,discard=on,format=raw,aio=io_uring,detect-zeroes=unmap' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap109i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=86:6A:DD:21:C6:25,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=103' -rtc 'driftfix=slew,base=localtime' -machine 'type=pc-q35-5.1+pve0' -global 'kvm-pit.lost_tick_policy=discard'' failed: got timeout
 
I was able to track the root cause down to a PCI device I recently added to the VM. Also, not sure if it's related, but I forgot that I updated to 7.0-14+1 last night; everything had been working just fine prior to this update.

Several days ago I added a PCI device passthrough (a GPU) to this particular VM. If I remove the PCI device, the VM starts every single time; adding it back results in the "failed: got timeout" error.

Also worth pointing out: if I add the PCI device back and issue the qm set --lock suspended command, the VM will start just fine, but only once. Every successful start requires issuing the qm set command again. I have no clue why the suspended lock causes it to start properly.
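
For completeness, the remove/re-add can also be done from the CLI; this is roughly what it looks like for my setup (hostpci0 and the PCI address 0000:00:02.0 match my config below):

# remove the GPU passthrough entry -- the VM then starts reliably
qm set 109 --delete hostpci0
# add it back -- the "failed: got timeout" error returns on the next start
qm set 109 --hostpci0 0000:00:02.0,x-vga=1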

Adding some additional info below.

qm config results for this VM:
agent: 1
audio0: device=ich9-intel-hda,driver=spice
balloon: 0
bios: ovmf
boot: order=scsi0;ide0;ide2;net0
cores: 8
cpu: host
description: args%3A -tpmdev passthrough,id=tpm-tpm0,path=/dev/tpm0,cancel-path=/dev/null -device tpm-tis,tpmdev=tpm-tpm0,id=tpm0
efidisk0: Local-Proxmox-1TB:vm-109-disk-2,size=1M
hostpci0: 0000:00:02.0,x-vga=1
ide0: Network-Proxmox-Data:iso/Files.iso,media=cdrom,size=81044K
ide2: Network-Proxmox-Data:iso/virtio-win-0.1.208.iso,media=cdrom,size=543390K
machine: pc-q35-5.1
memory: 32768
name: WindowsMain
net0: virtio=86:6A:DD:21:C6:25,bridge=vmbr1,firewall=1
numa: 0
ostype: win10
scsi0: Local-Proxmox-1TB:vm-109-disk-0,cache=writeback,discard=on,size=256G
scsihw: virtio-scsi-pci
smbios1: uuid=8df658b2-74c2-4486-83bd-a9b56ad6f2d5
sockets: 1
tpmstate0: Local-Proxmox-4TB:vm-109-disk-1,size=4M,version=v2.0
vga: qxl,memory=56
vmgenid: e3612ca5-1389-4e9d-ad38-8e6824752e06


pveversion output:
proxmox-ve: 7.0-2 (running kernel: 5.11.22-7-pve)
pve-manager: 7.0-14+1 (running version: 7.0-14+1/08975a4c)
pve-kernel-helper: 7.1-4
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.11.22-3-pve: 5.11.22-7
pve-kernel-5.11.22-2-pve: 5.11.22-4
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-12
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-13
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.13-1
proxmox-backup-file-restore: 2.0.13-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.1-1
pve-docs: 7.0-5
pve-edk2-firmware: 3.20210831-1
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.1.0-1
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-18
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
 
The issue is resolved after the latest updates -- 7.1-4.
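
Nothing special was needed beyond the standard update procedure from the node shell (assuming your package repositories are already configured), followed by a reboot to pick up the new 5.13 kernel:

apt update
apt dist-upgrade
reboot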

Latest pveversion output:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-4 (running version: 7.1-4/ca457116)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.13.19-1-pve: 5.13.19-2
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.11.22-3-pve: 5.11.22-7
pve-kernel-5.11.22-2-pve: 5.11.22-4
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.14-1
proxmox-backup-file-restore: 2.0.14-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-2
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-1
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-3
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
 
