Ver 7.1-5 after update VM xxx qmp command 'set_password' failed - unable to connect to VM xxx qmp socket - timeout after 31 retries

orlando

Hello, I need some help please. After updating to Ver 7.1-5 I can no longer get my Windows VM to boot. When I try, I get:
VM 104 qmp command 'set_password' failed - unable to connect to VM 104 qmp socket - timeout after 31 retries
TASK ERROR: Failed to run vncproxy.
I have rebooted the server and am still getting the same problem. Is there a quick fix for this? Or is there a way to downgrade Proxmox back to the earlier version so I can get my VMs working again?
 
There are workarounds, please post:

> pveversion -v

and your VM setting:

> qm config 104
 
Hi Tom, please find below the information you asked for. I am having this problem with 3 more of my VMs: 105, 106 (Mac Big Sur) and 109 (another Windows VM). My Linux VMs are unaffected and working fine.
If you would like to see the other VMs' configs, please let me know.

Output of pveversion -v below:

proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-5 (running version: 7.1-5/6fe299a0)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-5
pve-kernel-5.13.19-1-pve: 5.13.19-2
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 15.2.15-pve1
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.14-1
proxmox-backup-file-restore: 2.0.14-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-2
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-1
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-3
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3


And my qm config 104 below

root@sv:~# qm config 104
agent: 0
bootdisk: sata0
cores: 4
ide2: ISO-Storage:iso/Win10_1909_English_x64.iso,media=cdrom
machine: pc-q35-6.0
memory: 8192
name: Windows-Printer
net0: virtio=D2:01:67:C1:FA:0F,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: win10
sata0: Local-Proxmox:104/vm-104-disk-0.raw,discard=on,size=42G
scsihw: virtio-scsi-pci
smbios1: uuid=9b8053a1-b113-4413-b23f-4f5cd79db152
sockets: 1
vga: vmware
vmgenid: a240a115-f4a4-43fd-a260-35c409b8c07b
 
sata0: Local-Proxmox:104/vm-104-disk-0.raw,discard=on,size=42G
This is the problem. Please go to the Hardware tab, open the virtual disk, tick "Advanced", and change "Async IO" to "Threads" or "Native".

Generally, we never recommend SATA for Windows. Better use VirtIO SCSI (drivers needed).
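The same change can also be made from the CLI with qm set; a minimal sketch, assuming VM 104 with the exact disk string from the config posted above (adjust the storage/disk string to match your own qm config output):

```shell
# Inspect the current disk line for VM 104
qm config 104 | grep ^sata0

# Re-set the disk with an explicit Async IO mode (aio=threads or aio=native);
# keep all other options from your existing disk line unchanged.
qm set 104 --sata0 Local-Proxmox:104/vm-104-disk-0.raw,discard=on,size=42G,aio=threads

# A full shutdown + start is needed for the change to take effect
qm shutdown 104 && qm start 104
```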
 
Hi Tom,

Thank you very much, that worked, you're a life saver. I will remember that you never recommend SATA for Windows and that it's better to use VirtIO SCSI.
 
Hi,

I am also facing this error and am trying your fix of changing "Async IO" to "Threads" or "Native". I will update once it is resolved.
 
Hi, I had the same problem. How long should the conversion take? I have now been waiting about 30 min for a 60 GB disk.
Thanks
Marco
 
Hi,
> Hi, I had the same problem. How long should the conversion take? I have now been waiting about 30 min for a 60 GB disk.
> Thanks
> Marco
you need to shutdown+start (can also be done via the Reboot button in the UI) to apply a change to the Async IO setting. It's still recommended to use the SCSI controller for VMs, but I think the original issue from this thread (which was about SATA+io_uring) should not be in newer kernels anymore.
 
trying to acquire lock...
TASK ERROR: can't lock file '/var/lock/qemu-server/lock-104.conf' - got timeout
 
Hi,
> trying to acquire lock...
> TASK ERROR: can't lock file '/var/lock/qemu-server/lock-104.conf' - got timeout
this usually indicates that another operation already holds the lock. You can check with fuser -vau /var/lock/qemu-server/lock-104.conf to find the process. You need to wait for the other operation to finish or cancel it.
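A sketch of that diagnosis, assuming VM 104 (note that qm unlock force-removes the config lock, so only use it once you are sure no other task is still running against that VM):

```shell
# Show which process currently holds the per-VM lock file
fuser -vau /var/lock/qemu-server/lock-104.conf

# See what qemu-server / backup tasks are running on the node
ps aux | grep -E '[q]m |[v]zdump'

# Last resort, only after confirming no task is active: drop the lock
qm unlock 104
```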
 
I got a similar issue, but it doesn't seem to be related to the virtual disk:
pveversion -v
proxmox-ve: 8.0.1 (running kernel: 6.2.16-3-pve)
pve-manager: 8.0.3 (running version: 8.0.3/bbf3993334bfa916)
pve-kernel-6.2: 8.0.2
pve-kernel-5.15: 7.4-4
pve-kernel-6.2.16-3-pve: 6.2.16-3
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.107-1-pve: 5.15.107-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.5
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 2.99.0-1
proxmox-backup-file-restore: 2.99.0-1
proxmox-kernel-helper: 8.0.2
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.5
pve-cluster: 8.0.1
pve-container: 5.0.3
pve-docs: 8.0.3
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.2
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.4
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1



qm config 105
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;ide2
cores: 4
cpu: host
efidisk0: local-zfs:vm-105-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:45:11.3,rombar=0
hostpci1: 0000:81:00,rombar=0
ide0: local:iso/virtio-win-0.1.229.iso,media=cdrom,size=522284K
ide2: local:iso/en-us_windows_10_enterprise_ltsc_2021_x64_dvd_d289cf96.iso,media=cdrom,size=4784630K
machine: pc-q35-8.0
memory: 32764
meta: creation-qemu=8.0.2,ctime=1688648159
name: WinBlueIris
numa: 0
ostype: win11
scsi0: local-zfs:vm-105-disk-1,discard=on,size=128G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=218cd1ba-7f01-4edf-8825-341e0ab67970
sockets: 1
tpmstate0: local-zfs:vm-105-disk-2,size=4M,version=v2.0
vcpus: 4
vmgenid: 7137f3d6-6997-439a-aca3-e3b9943b45a4
 
OK, some updates: it's fixed.
Somehow my Proxmox host was using one of the passthrough PCI devices.

If this can help someone:
Remove the passthrough PCIe devices one by one, each with a shutdown + start sequence (not a reboot, which won't apply the hardware change), until you know which PCI address is being used by the host (your VM will load without issue once you remove the affected shared device).

On the host, run lspci -kn and find the related vendor:device ID (it's formatted like 10de:10f8).
Add this ID in /etc/modprobe.d/vfio.conf:
options vfio-pci ids=10de:1eb1,10de:10f8,10de:1ad8,10de:1ad9 disable_vga=1

My shared PCI device was a GPU, so I've also activated disable_vga. I'm on a headless setup.

Then:
update-initramfs -u -k all

and reboot.
Voila.
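The steps above can be sketched as one sequence on the host (the vendor:device IDs below are the ones from this poster's GPU; substitute whatever lspci reports for your own device):

```shell
# Find the vendor:device IDs of the device you want reserved for vfio-pci
lspci -kn

# Bind those IDs to vfio-pci at boot; disable_vga=1 only makes sense for a
# GPU on a headless host, as in this poster's setup
echo 'options vfio-pci ids=10de:1eb1,10de:10f8,10de:1ad8,10de:1ad9 disable_vga=1' \
    > /etc/modprobe.d/vfio.conf

# Rebuild the initramfs so the option is picked up early, then reboot the host
update-initramfs -u -k all
reboot
```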
 
