ERROR: PBS backups are not supported by the running QEMU version. Please make sure you've installed the latest version and the VM has been restarted.

I'm having the same problem ...

I have just updated to the latest Proxmox release, and some of my virtual machines give this error when I try to back them up using PBS:
PBS backups are not supported by the running QEMU version. Please make sure you've installed the latest version and the VM has been restarted.

During the upgrade I migrated the virtual machines online to other nodes to avoid stopping them.
Should I stop and restart these virtual machines to solve the problem?
 
Can you post the output of

Bash:
pveversion -v
# replace VMID below with a relevant ID of a VM which should work
qm status VMID --verbose|grep -A5 proxmox-support
 
I have been testing the backup of the virtual machines that had problems and now it works fine.
Maybe it was fixed because I had to stop and start them again?
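
For reference, a minimal sketch of the usual way to pick up the new QEMU binary (VMID is a placeholder): a full stop and start from the host replaces the running QEMU process with the newly installed one, whereas a reboot from inside the guest does not.

Bash:
# Replace VMID with the affected VM. A guest-internal reboot keeps the old QEMU
# process running; only a host-side shutdown and start loads the new binary.
qm shutdown VMID && qm start VMID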
 
Well, in my case I deactivated the QEMU agent inside the VM. Then it worked for one or two backups.
The next time, the PBS backup got interrupted again and the VM was again not accessible!
Now I have deactivated QEMU agent support in the VM configuration, and the backup seems to be working.
I will wait until tomorrow.

There must be some relation between the PBS backup and the QEMU agent (FS thawing) that leads to this behaviour.
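
For anyone trying the same workaround, a minimal sketch (VMID is a placeholder); the agent setting is not hot-pluggable, so the change typically only takes effect after the VM has been stopped and started again.

Bash:
# Disable guest-agent support in the VM config (re-enable later with enabled=1).
qm set VMID --agent enabled=0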
 
I have been testing the backup of the virtual machines that had problems and now it works fine.
Maybe it was fixed because I had to stop and start them again?
Please, can you actually post the requested output so that we can try to actually help?
 
Yes, of course:
Code:
# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph: 14.2.16-pve1
ceph-fuse: 14.2.16-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

Bash:
# qm status 112 --verbose|grep -A5 proxmox-support
proxmox-support:
    pbs-library-version: 1.0.2 (18d5b98ab1bec4004178a0db6f2debb83bfa9165)
    pbs-dirty-bitmap: 1
    pbs-dirty-bitmap-migration: 1
    query-bitmap-info: 1
qmpstatus: running
 
I'm bumping this old thread.
I have a cluster with 3 nodes.
I run an automatic backup of all VMs every night.

On one node of the cluster I can back up the first VM, but every other VM fails.
Code:
VMID NAME STATUS TIME SIZE FILENAME
xxx OK 00:11:21 50.00GB vm/107/2021-03-07T23:00:03Z
xxx FAILED 00:01:25 VM 301 qmp command 'backup' failed - got timeout
xxx FAILED 00:02:28 VM 302 qmp command 'backup' failed - got timeout
xxx FAILED 00:03:18 VM 303 qmp command 'backup' failed - got timeout
xxx FAILED 00:02:40 VM 304 qmp command 'backup' failed - got timeout
xxx FAILED 00:10:07 PBS backups are not supported by the running QEMU version. Please make sure you've installed the latest version and the VM has been restarted.
xxx FAILED 00:10:07 PBS backups are not supported by the running QEMU version. Please make sure you've installed the latest version and the VM has been restarted.
xxx FAILED 00:10:07 PBS backups are not supported by the running QEMU version. Please make sure you've installed the latest version and the VM has been restarted.
xxx FAILED 00:03:10 VM 308 qmp command 'backup' failed - got timeout
xxx FAILED 00:00:27 VM 309 qmp command 'backup' failed - backup connect failed: command error: http request timed out
xxx FAILED 00:07:06 VM 310 qmp command 'backup' failed - got timeout
xxx FAILED 00:00:18 VM 311 qmp command 'backup' failed - backup connect failed: command error: channel closed
xxx FAILED 00:00:21 VM 801 qmp command 'backup' failed - backup connect failed: command error: http request timed out

All VMs run different Windows versions and have the QEMU guest agent installed.

Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.101-1-pve)
pve-manager: 6.3-4 (running version: 6.3-4/0a38c56f)
pve-kernel-5.4: 6.3-6
pve-kernel-helper: 6.3-6
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-4.15: 5.4-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph: 14.2.16-pve1
ceph-fuse: 14.2.16-pve1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-2
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve2

Code:
qm status 107 --verbose|grep -A5 proxmox-support
proxmox-support:
        query-bitmap-info: 1
        pbs-dirty-bitmap-migration: 1
        pbs-dirty-bitmap: 1
        pbs-masterkey: 1
        pbs-library-version: 1.0.3 (8de935110ed4cab743f6c9437357057f9f9f08ea)

On the other nodes the output also shows qmpstatus: running
Code:
qm status 802 --verbose|grep -A5 proxmox-support
proxmox-support:
        pbs-dirty-bitmap: 1
        pbs-library-version: 1.0.2 (18d5b98ab1bec4004178a0db6f2debb83bfa9165)
        pbs-dirty-bitmap-migration: 1
        query-bitmap-info: 1
qmpstatus: running

If I start the backup manually from the Proxmox server, it works fine...

Tonight I will try to back up the VMs in stop mode instead of snapshot mode.
Or should I uninstall the QEMU agent?
I have only installed it so that the IP is shown in the Proxmox GUI.
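
One thing that may be worth checking before the backup window is whether the agent inside the guest still responds at all; a sketch, assuming these guest-agent subcommands are available on this qemu-server version (112 is just a placeholder VMID):

Bash:
# Ping the guest agent and query the current filesystem-freeze state.
qm guest cmd 112 ping
qm guest cmd 112 fsfreeze-status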
 
Tonight I tried to make the backups in stop mode.

Some worked, some didn't, with different problems.
A few VMs are shown as started in Proxmox after the backup, but are not reachable via console or ping.

Here is a backup log from a VM that is not reachable after the backup (in this case Windows Server 2019, but the problem occurs with different OSes):

Code:
2021-03-09 00:00:02 INFO: Starting Backup of VM 107 (qemu)
2021-03-09 00:00:02 INFO: status = running
2021-03-09 00:00:02 INFO: backup mode: stop
2021-03-09 00:00:02 INFO: ionice priority: 7
2021-03-09 00:00:02 INFO: VM Name: xxx
2021-03-09 00:00:02 INFO: include disk 'virtio1' 'sandata2:vm-107-disk-0' 50G
2021-03-09 00:00:03 INFO: stopping vm
2021-03-09 00:00:12 INFO: creating Proxmox Backup Server archive 'vm/107/2021-03-08T23:00:02Z'
2021-03-09 00:00:12 INFO: starting kvm to execute backup task
2021-03-09 00:00:41 INFO: started backup task 'a9fa044e-e1e7-4613-a2e0-2e7b441927d9'
2021-03-09 00:00:41 INFO: resuming VM again after 38 seconds
2021-03-09 00:00:41 INFO: virtio1: dirty-bitmap status: created new
2021-03-09 00:00:44 INFO:   0% (404.0 MiB of 50.0 GiB) in 3s, read: 134.7 MiB/s, write: 6.7 MiB/s
2021-03-09 00:00:51 INFO:   1% (532.0 MiB of 50.0 GiB) in 10s, read: 18.3 MiB/s, write: 585.1 KiB/s
2021-03-09 00:01:06 INFO:   2% (1.1 GiB of 50.0 GiB) in 25s, read: 38.7 MiB/s, write: 16.3 MiB/s
2021-03-09 00:01:18 INFO:   3% (1.6 GiB of 50.0 GiB) in 37s, read: 40.3 MiB/s, write: 15.0 MiB/s
....
2021-03-09 01:20:36 INFO:  98% (49.1 GiB of 50.0 GiB) in 1h 19m 55s, read: 149.3 MiB/s, write: 0 B/s
2021-03-09 01:20:40 INFO:  99% (49.6 GiB of 50.0 GiB) in 1h 19m 59s, read: 148.0 MiB/s, write: 0 B/s
2021-03-09 01:20:43 INFO: 100% (50.0 GiB of 50.0 GiB) in 1h 20m 2s, read: 125.3 MiB/s, write: 0 B/s
2021-03-09 01:20:44 INFO: backup is sparse: 60.00 MiB (0%) total zero data
2021-03-09 01:20:44 INFO: backup was done incrementally, reused 46.12 GiB (92%)
2021-03-09 01:20:44 INFO: transferred 50.00 GiB in 4803 seconds (10.7 MiB/s)
2021-03-09 01:20:44 INFO: Finished Backup of VM 107 (01:20:42)

Here is a log from a VM where the backup failed and the VM was not started after the backup attempt:

Code:
2021-03-09 00:32:04 INFO: Starting Backup of VM 202 (qemu)
2021-03-09 00:32:04 INFO: status = running
2021-03-09 00:32:04 INFO: backup mode: stop
2021-03-09 00:32:04 INFO: ionice priority: 7
2021-03-09 00:32:04 INFO: VM Name: xxx
2021-03-09 00:32:04 INFO: include disk 'virtio0' 'sandata2:vm-202-disk-0' 5G
2021-03-09 00:32:04 INFO: stopping vm
2021-03-09 00:32:08 INFO: creating Proxmox Backup Server archive 'vm/202/2021-03-08T23:32:04Z'
2021-03-09 00:32:08 INFO: starting kvm to execute backup task
2021-03-09 00:33:10 ERROR: VM 202 qmp command 'backup' failed - got timeout
2021-03-09 00:33:10 INFO: aborting backup job
2021-03-09 00:35:50 ERROR: VM 202 qmp command 'backup-cancel' failed - got wrong command id '1283872:19' (expected 1283872:20)
2021-03-09 00:35:50 INFO: guest is online again after 226 seconds
2021-03-09 00:35:50 ERROR: Backup of VM 202 failed - VM 202 qmp command 'backup' failed - got timeout

I found that the problem is not limited to one node.
The problem only occurs with PBS. Before that I made normal backups in Proxmox without a problem.

And when I start the backup manually from the Proxmox GUI it works fine...

Is there anything different between a manually started backup and a scheduled backup job?
 
New backup attempt this weekend, and it is still not working on one node, with different errors :-(

[Screenshot attachment: 01.png]

VM 302:
Log from Proxmox VE:

Code:
2021-03-14 00:45:17 INFO: Starting Backup of VM 302 (qemu)
2021-03-14 00:45:17 INFO: status = running
2021-03-14 00:45:17 INFO: VM Name: xxx
2021-03-14 00:45:17 INFO: include disk 'virtio0' 'sandata2:vm-302-disk-0' 81924M
2021-03-14 00:45:18 INFO: backup mode: snapshot
2021-03-14 00:45:18 INFO: ionice priority: 7
2021-03-14 00:45:18 INFO: creating Proxmox Backup Server archive 'vm/302/2021-03-13T23:45:17Z'
2021-03-14 00:45:18 INFO: issuing guest-agent 'fs-freeze' command
2021-03-14 00:47:07 INFO: issuing guest-agent 'fs-thaw' command
2021-03-14 00:47:07 INFO: started backup task '9acd9ffd-cd20-4734-9e5c-aab2759e34e5'
2021-03-14 00:47:07 INFO: resuming VM again
2021-03-14 00:47:10 ERROR: VM 302 qmp command 'cont' failed - got timeout
2021-03-14 00:47:10 INFO: aborting backup job
2021-03-14 00:47:19 ERROR: Backup of VM 302 failed - VM 302 qmp command 'cont' failed - got timeout

Log from PBS:
Code:
Mar 14 00:45:22 pbs proxmox-backup-proxy[928]: starting new backup on datastore 'xxx': "vm/302/2021-03-13T23:45:17Z"
Mar 14 00:45:22 pbs proxmox-backup-proxy[928]: download 'index.json.blob' from previous backup.
Mar 14 00:45:22 pbs proxmox-backup-proxy[928]: register chunks in 'drive-virtio0.img.fidx' from previous backup.
Mar 14 00:45:22 pbs proxmox-backup-proxy[928]: download 'drive-virtio0.img.fidx' from previous backup.
Mar 14 00:45:22 pbs proxmox-backup-proxy[928]: created new fixed index 1 ("vm/302/2021-03-13T23:45:17Z/drive-virtio0.img.fidx")
Mar 14 00:47:07 pbs proxmox-backup-proxy[928]: add blob "/mnt/xxx/vm/302/2021-03-13T23:45:17Z/qemu-server.conf.blob" (387 bytes, comp: 387)
Mar 14 00:47:20 pbs proxmox-backup-proxy[928]: backup ended and finish failed: backup ended but finished flag is not set.
Mar 14 00:47:20 pbs proxmox-backup-proxy[928]: removing unfinished backup
Mar 14 00:47:20 pbs proxmox-backup-proxy[928]: removing backup snapshot "/mnt/xxx/vm/302/2021-03-13T23:45:17Z"
Mar 14 00:47:20 pbs proxmox-backup-proxy[928]: TASK ERROR: backup ended but finished flag is not set.
Mar 14 00:47:20 pbs proxmox-backup-proxy[928]: Detected stopped task 'UPID:pbs:000003A0:00000594:00000087:604D4E92:backup:xxx\x3avm-302:backup@pbs:'


VM 303:
Log from Proxmox VE:

Code:
2021-03-14 00:47:19 INFO: Starting Backup of VM 303 (qemu)
2021-03-14 00:47:19 INFO: status = running
2021-03-14 00:47:19 INFO: VM Name: xxx
2021-03-14 00:47:19 INFO: include disk 'virtio0' 'sandata2:vm-303-disk-0' 40G
2021-03-14 00:47:20 INFO: backup mode: snapshot
2021-03-14 00:47:20 INFO: ionice priority: 7
2021-03-14 00:47:20 INFO: creating Proxmox Backup Server archive 'vm/303/2021-03-13T23:47:19Z'
2021-03-14 00:47:20 INFO: issuing guest-agent 'fs-freeze' command
2021-03-14 00:49:27 INFO: issuing guest-agent 'fs-thaw' command
2021-03-14 00:49:27 ERROR: VM 303 qmp command 'backup' failed - got timeout
2021-03-14 00:49:27 INFO: aborting backup job
2021-03-14 00:50:16 ERROR: VM 303 qmp command 'backup-cancel' failed - got wrong command id '2192716:4819' (expected 2192716:4820)
2021-03-14 00:50:16 ERROR: Backup of VM 303 failed - VM 303 qmp command 'backup' failed - got timeout

Log from PBS:
Code:
Mar 14 00:47:31 pbs proxmox-backup-proxy[928]: created new fixed index 1 ("vm/303/2021-03-13T23:47:19Z/drive-virtio0.img.fidx")
Mar 14 00:48:41 pbs proxmox-backup-proxy[928]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Mar 14 00:48:41 pbs proxmox-backup-proxy[928]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
...
...
Mar 14 00:49:51 pbs proxmox-backup-proxy[928]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Mar 14 00:49:51 pbs proxmox-backup-proxy[928]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Mar 14 00:49:52 pbs proxmox-backup-proxy[928]: POST /fixed_chunk: 400 Bad Request: backup already marked as finished.
Mar 14 00:50:16 pbs proxmox-backup-proxy[928]: add blob "/mnt/xxx/vm/303/2021-03-13T23:47:19Z/qemu-server.conf.blob" (400 bytes, comp: 400)

edit:
Syslog from Proxmox VE

Code:
Mar 14 00:40:17 xxx pvestatd[2754]: VM 306 qmp command failed - VM 306 qmp command 'query-proxmox-support' failed - unable to connect to VM 306 qmp socket - timeout after 31 retries
Mar 14 00:40:20 xxx pvestatd[2754]: VM 307 qmp command failed - VM 307 qmp command 'query-proxmox-support' failed - unable to connect to VM 307 qmp socket - timeout after 31 retries
Mar 14 00:40:23 xxx pvestatd[2754]: VM 311 qmp command failed - VM 311 qmp command 'query-proxmox-support' failed - unable to connect to VM 311 qmp socket - timeout after 31 retries
Mar 14 00:40:26 xxx pvestatd[2754]: VM 305 qmp command failed - VM 305 qmp command 'query-proxmox-support' failed - unable to connect to VM 305 qmp socket - timeout after 31 retries
Mar 14 00:40:27 xxx pvestatd[2754]: status update time (25.357 seconds)
Mar 14 00:40:42 xxx pvestatd[2754]: VM 311 qmp command failed - VM 311 qmp command 'query-proxmox-support' failed - unable to connect to VM 311 qmp socket - timeout after 31 retries
Mar 14 00:40:45 xxx pvestatd[2754]: VM 305 qmp command failed - VM 305 qmp command 'query-proxmox-support' failed - unable to connect to VM 305 qmp socket - timeout after 31 retries
Mar 14 00:40:48 xxx pvestatd[2754]: VM 306 qmp command failed - VM 306 qmp command 'query-proxmox-support' failed - unable to connect to VM 306 qmp socket - timeout after 31 retries
Mar 14 00:40:51 xxx pvestatd[2754]: VM 307 qmp command failed - VM 307 qmp command 'query-proxmox-support' failed - unable to connect to VM 307 qmp socket - timeout after 31 retries
Mar 14 00:40:52 xxx pvestatd[2754]: status update time (25.381 seconds)
Mar 14 00:41:00 xxx systemd[1]: Starting Proxmox VE replication runner...
Mar 14 00:41:01 xxx systemd[1]: pvesr.service: Succeeded.
Mar 14 00:41:01 xxx systemd[1]: Started Proxmox VE replication runner.
Mar 14 00:41:07 xxx pvestatd[2754]: VM 305 qmp command failed - VM 305 qmp command 'query-proxmox-support' failed - unable to connect to VM 305 qmp socket - timeout after 31 retries
Mar 14 00:41:11 xxx pvestatd[2754]: VM 311 qmp command failed - VM 311 qmp command 'query-proxmox-support' failed - unable to connect to VM 311 qmp socket - timeout after 31 retries
Mar 14 00:41:14 xxx pvestatd[2754]: VM 307 qmp command failed - VM 307 qmp command 'query-proxmox-support' failed - unable to connect to VM 307 qmp socket - timeout after 31 retries
Mar 14 00:41:17 xxx pvestatd[2754]: VM 306 qmp command failed - VM 306 qmp command 'query-proxmox-support' failed - unable to connect to VM 306 qmp socket - timeout after 31 retries
Mar 14 00:41:19 xxx pvestatd[2754]: status update time (26.362 seconds)
Mar 14 00:41:34 xxx pvestatd[2754]: VM 305 qmp command failed - VM 305 qmp command 'query-proxmox-support' failed - unable to connect to VM 305 qmp socket - timeout after 31 retries
Mar 14 00:41:37 xxx pvestatd[2754]: VM 311 qmp command failed - VM 311 qmp command 'query-proxmox-support' failed - unable to connect to VM 311 qmp socket - timeout after 31 retries
Mar 14 00:41:40 xxx pvestatd[2754]: VM 307 qmp command failed - VM 307 qmp command 'query-proxmox-support' failed - unable to connect to VM 307 qmp socket - timeout after 31 retries
Mar 14 00:41:43 xxx pvestatd[2754]: VM 306 qmp command failed - VM 306 qmp command 'query-proxmox-support' failed - unable to connect to VM 306 qmp socket - timeout after 31 retries
Mar 14 00:41:43 xxx pvestatd[2754]: status update time (24.633 seconds)
Mar 14 00:41:58 xxx pvestatd[2754]: VM 306 qmp command failed - VM 306 qmp command 'query-proxmox-support' failed - unable to connect to VM 306 qmp socket - timeout after 31 retries
Mar 14 00:42:00 xxx systemd[1]: Starting Proxmox VE replication runner...
Mar 14 00:42:01 xxx systemd[1]: pvesr.service: Succeeded.
Mar 14 00:42:01 xxx systemd[1]: Started Proxmox VE replication runner.
Mar 14 00:42:01 xxx pvestatd[2754]: VM 307 qmp command failed - VM 307 qmp command 'query-proxmox-support' failed - unable to connect to VM 307 qmp socket - timeout after 31 retries
Mar 14 00:42:05 xxx pvestatd[2754]: VM 311 qmp command failed - VM 311 qmp command 'query-proxmox-support' failed - unable to connect to VM 311 qmp socket - timeout after 31 retries
Mar 14 00:42:08 xxx pvestatd[2754]: VM 305 qmp command failed - VM 305 qmp command 'query-proxmox-support' failed - unable to connect to VM 305 qmp socket - timeout after 31 retries
Mar 14 00:42:09 xxx pvestatd[2754]: status update time (25.259 seconds)
Mar 14 00:42:24 xxx pvestatd[2754]: VM 307 qmp command failed - VM 307 qmp command 'query-proxmox-support' failed - unable to connect to VM 307 qmp socket - timeout after 31 retries
Mar 14 00:42:27 xxx pvestatd[2754]: VM 306 qmp command failed - VM 306 qmp command 'query-proxmox-support' failed - unable to connect to VM 306 qmp socket - timeout after 31 retries
Mar 14 00:42:30 xxx pvestatd[2754]: VM 305 qmp command failed - VM 305 qmp command 'query-proxmox-support' failed - unable to connect to VM 305 qmp socket - timeout after 31 retries
Mar 14 00:42:33 xxx pvestatd[2754]: VM 311 qmp command failed - VM 311 qmp command 'query-proxmox-support' failed - unable to connect to VM 311 qmp socket - timeout after 31 retries
Mar 14 00:42:33 xxx pvestatd[2754]: status update time (24.627 seconds)
Mar 14 00:42:48 xxx pvestatd[2754]: VM 305 qmp command failed - VM 305 qmp command 'query-proxmox-support' failed - unable to connect to VM 305 qmp socket - timeout after 31 retries
Mar 14 00:42:51 xxx pvestatd[2754]: VM 311 qmp command failed - VM 311 qmp command 'query-proxmox-support' failed - unable to connect to VM 311 qmp socket - timeout after 31 retries
Mar 14 00:42:54 xxx pvestatd[2754]: VM 307 qmp command failed - VM 307 qmp command 'query-proxmox-support' failed - unable to connect to VM 307 qmp socket - timeout after 31 retries
Mar 14 00:42:57 xxx pvestatd[2754]: VM 306 qmp command failed - VM 306 qmp command 'query-proxmox-support' failed - unable to connect to VM 306 qmp socket - timeout after 31 retries
Mar 14 00:42:58 xxx pvestatd[2754]: status update time (24.635 seconds)
Mar 14 00:43:00 xxx systemd[1]: Starting Proxmox VE replication runner...
Mar 14 00:43:01 xxx systemd[1]: pvesr.service: Succeeded.

Any idea what the problem is?
Manually started backups work without any problem...
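
For a like-for-like comparison, the same backup can also be triggered from the node's shell; a minimal sketch with a placeholder VMID and storage name (adjust to the actual PBS storage entry):

Bash:
# Back up a single VM to the PBS storage entry, same mode as the scheduled job.
vzdump 302 --storage my-pbs --mode snapshot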
 
Can someone tell me exactly which command is called by a scheduled backup, so that I can test that command?
 
Ah, thanks for the vzdump.cron.
I'm trying it with this command.

But one question:
the vzdump.cron exists on every node.
On which node is the command started? The first node?

edit:
In the syslog I see that the process is started on every node, and each node then only backs up the VMs that are on that host.
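
For reference, a sketch of where the schedule lives; the job line below is purely illustrative, the actual options depend on how the job is configured:

Bash:
# Cluster-wide backup schedule; each node's cron runs vzdump locally, and the
# guest list is filtered to the guests currently on that node.
cat /etc/pve/vzdump.cron
# Illustrative example of a job line (not from this cluster):
# 0 23 * * *           root vzdump --quiet 1 --mode snapshot --all 1 --storage my-pbs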
 
Yes, exactly - the guest list is filtered by which guests are currently present on the node that vzdump is executing on.
 
Also, please make sure to get the full backtrace of all threads with the command from my second link! Thanks :)
 
They just print some traces once you have triggered the issue - they don't change anything. It would speed up the investigation of this issue, since we have been unable to reproduce it locally so far.
 
It's easier in German :)
That is a VM that hangs during the scheduled backup task and that I then have to restart.

The backup with the vzdump command just worked without any problems... I'm close to reinstalling that one node.

Code:
gdb attach $VM_PID -ex='thread apply all bt' -ex='quit'
GNU gdb (Debian 8.2.1-2+b3) 8.2.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
attach: No such file or directory.
Attaching to process 2743
[New LWP 2744]
[New LWP 2766]
[New LWP 2767]
[New LWP 2768]
[New LWP 2769]
[New LWP 2771]
[New LWP 2773]
[New LWP 3327938]
[New LWP 3333112]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f803f1c3916 in __GI_ppoll (fds=0x5626c62788c0, nfds=139, timeout=<optimized out>, timeout@entry=0x7ffda82cd950, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39
39      ../sysdeps/unix/sysv/linux/ppoll.c: No such file or directory.

Thread 10 (Thread 0x7f7df27fc700 (LWP 3333112)):
#0  futex_abstimed_wait_cancelable (private=0, abstime=0x7f7df27f81e0, expected=0, futex_word=0x5626c6540bac) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  __pthread_cond_wait_common (abstime=0x7f7df27f81e0, mutex=0x5626c6540b58, cond=0x5626c6540b80) at pthread_cond_wait.c:539
#2  __pthread_cond_timedwait (cond=cond@entry=0x5626c6540b80, mutex=mutex@entry=0x5626c6540b58, abstime=abstime@entry=0x7f7df27f81e0) at pthread_cond_wait.c:667
#3  0x00005626c4266866 in qemu_sem_timedwait (sem=sem@entry=0x5626c6540b58, ms=ms@entry=10000) at ../util/qemu-thread-posix.c:282
#4  0x00005626c425fcc5 in worker_thread (opaque=opaque@entry=0x5626c6540ae0) at ../util/thread-pool.c:91
#5  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#6  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#7  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 9 (Thread 0x7f7dd1df9700 (LWP 3327938)):
#0  futex_abstimed_wait_cancelable (private=0, abstime=0x7f7dd1df51e0, expected=0, futex_word=0x5626c6540bac) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  __pthread_cond_wait_common (abstime=0x7f7dd1df51e0, mutex=0x5626c6540b58, cond=0x5626c6540b80) at pthread_cond_wait.c:539
#2  __pthread_cond_timedwait (cond=cond@entry=0x5626c6540b80, mutex=mutex@entry=0x5626c6540b58, abstime=abstime@entry=0x7f7dd1df51e0) at pthread_cond_wait.c:667
#3  0x00005626c4266866 in qemu_sem_timedwait (sem=sem@entry=0x5626c6540b58, ms=ms@entry=10000) at ../util/qemu-thread-posix.c:282
#4  0x00005626c425fcc5 in worker_thread (opaque=opaque@entry=0x5626c6540ae0) at ../util/thread-pool.c:91
#5  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#6  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#7  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 8 (Thread 0x7f7e229ff700 (LWP 2773)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x5626c6fa99fc) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x5626c6fa9a08, cond=0x5626c6fa99d0) at pthread_cond_wait.c:502
#2  __pthread_cond_wait (cond=cond@entry=0x5626c6fa99d0, mutex=mutex@entry=0x5626c6fa9a08) at pthread_cond_wait.c:655
#3  0x00005626c426639f in qemu_cond_wait_impl (cond=0x5626c6fa99d0, mutex=0x5626c6fa9a08, file=0x5626c428f486 "../ui/vnc-jobs.c", line=215) at ../util/qemu-thread-posix.c:174
#4  0x00005626c3de171d in vnc_worker_thread_loop (queue=queue@entry=0x5626c6fa99d0) at ../ui/vnc-jobs.c:215
#5  0x00005626c3de1fb8 in vnc_worker_thread (arg=arg@entry=0x5626c6fa99d0) at ../ui/vnc-jobs.c:325
#6  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#7  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 7 (Thread 0x7f7e23dff700 (LWP 2771)):
#0  0x00007f803f1c3819 in __GI___poll (fds=0x7f7e180269f0, nfds=2, timeout=2147483647) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007f8040b9a136 in ?? () from /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
#2  0x00007f8040b9a4c2 in g_main_loop_run () from /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
#3  0x00007f8040ecf147 in ?? () from /usr/lib/x86_64-linux-gnu/libspice-server.so.1
#4  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#5  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 6 (Thread 0x7f80313ff700 (LWP 2769)):
#0  0x00007f803f1c5427 in ioctl () at ../sysdeps/unix/syscall-template.S:78
#1  0x00005626c403d36c in kvm_vcpu_ioctl (cpu=cpu@entry=0x5626c62d32b0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:2654
#2  0x00005626c403d4b2 in kvm_cpu_exec (cpu=cpu@entry=0x5626c62d32b0) at ../accel/kvm/kvm-all.c:2491
#3  0x00005626c4056725 in kvm_vcpu_thread_fn (arg=arg@entry=0x5626c62d32b0) at ../accel/kvm/kvm-cpus.c:49
#4  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#5  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 5 (Thread 0x7f8031fff700 (LWP 2768)):
#0  0x00007f803f1c5427 in ioctl () at ../sysdeps/unix/syscall-template.S:78
#1  0x00005626c403d36c in kvm_vcpu_ioctl (cpu=cpu@entry=0x5626c62acd00, type=type@entry=44672) at ../accel/kvm/kvm-all.c:2654
#2  0x00005626c403d4b2 in kvm_cpu_exec (cpu=cpu@entry=0x5626c62acd00) at ../accel/kvm/kvm-all.c:2491
#3  0x00005626c4056725 in kvm_vcpu_thread_fn (arg=arg@entry=0x5626c62acd00) at ../accel/kvm/kvm-cpus.c:49
#4  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#5  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 4 (Thread 0x7f8032bff700 (LWP 2767)):
#0  0x00007f803f1c5427 in ioctl () at ../sysdeps/unix/syscall-template.S:78
#1  0x00005626c403d36c in kvm_vcpu_ioctl (cpu=cpu@entry=0x5626c62857a0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:2654
#2  0x00005626c403d4b2 in kvm_cpu_exec (cpu=cpu@entry=0x5626c62857a0) at ../accel/kvm/kvm-all.c:2491
#3  0x00005626c4056725 in kvm_vcpu_thread_fn (arg=arg@entry=0x5626c62857a0) at ../accel/kvm/kvm-cpus.c:49
#4  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#5  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 3 (Thread 0x7f80337c5700 (LWP 2766)):
#0  0x00007f803f1c5427 in ioctl () at ../sysdeps/unix/syscall-template.S:78
--Type <RET> for more, q to quit, c to continue without paging--
#1  0x00005626c403d36c in kvm_vcpu_ioctl (cpu=cpu@entry=0x5626c6234740, type=type@entry=44672) at ../accel/kvm/kvm-all.c:2654
#2  0x00005626c403d4b2 in kvm_cpu_exec (cpu=cpu@entry=0x5626c6234740) at ../accel/kvm/kvm-all.c:2491
#3  0x00005626c4056725 in kvm_vcpu_thread_fn (arg=arg@entry=0x5626c6234740) at ../accel/kvm/kvm-cpus.c:49
#4  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#5  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 2 (Thread 0x7f8034283700 (LWP 2744)):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x00005626c4266abb in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at ../util/qemu-thread-posix.c:456
#2  qemu_event_wait (ev=ev@entry=0x5626c4783fe8 <rcu_call_ready_event>) at ../util/qemu-thread-posix.c:460
#3  0x00005626c425504a in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:258
#4  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#5  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 1 (Thread 0x7f80343e1340 (LWP 2743)):
#0  0x00007f803f1c3916 in __GI_ppoll (fds=0x5626c62788c0, nfds=139, timeout=<optimized out>, timeout@entry=0x7ffda82cd950, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39
#1  0x00005626c425b2f1 in ppoll (__ss=0x0, __timeout=0x7ffda82cd950, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/x86_64-linux-gnu/bits/poll2.h:77
#2  qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=timeout@entry=1999241159) at ../util/qemu-timer.c:349
#3  0x00005626c426a795 in os_host_main_loop_wait (timeout=1999241159) at ../util/main-loop.c:239
#4  main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:520
#5  0x00005626c40470a1 in qemu_main_loop () at ../softmmu/vl.c:1678
#6  0x00005626c3db178e in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:50
A debugging session is active.

        Inferior 1 [process 2743] will be detached.
 
We need exactly this output, but from when the VM is hung - hopefully that will tell us where exactly the deadlock is.
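
If it helps, a small helper along these lines can be kept ready for the next hang; a sketch assuming the usual Proxmox pidfile location under /var/run/qemu-server (the VMID is a placeholder):

Bash:
# Look up the QEMU PID for a given VMID and dump backtraces of all threads to a file.
VMID=107
VM_PID=$(cat /var/run/qemu-server/${VMID}.pid)
gdb --batch -p "$VM_PID" -ex 'thread apply all bt' > /tmp/vm-${VMID}-bt.txt 2>&1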
 
Of course, last night all backups on the 3 nodes went through and all VMs are still running :D
I'll keep an eye on it over the next few weeks.
 
Now it has happened again after all.
The VM (Windows Server 2019) was still running after the backup, but it was hung and could no longer be accessed.

Code:
gdb attach $VM_PID -ex='thread apply all bt' -ex='quit'
GNU gdb (Debian 8.2.1-2+b3) 8.2.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
attach: No such file or directory.
Attaching to process 2743
[New LWP 2744]
[New LWP 2766]
[New LWP 2767]
[New LWP 2768]
[New LWP 2769]
[New LWP 2771]
[New LWP 2773]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
__lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
103     ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S: No such file or directory.

Thread 8 (Thread 0x7f7e229ff700 (LWP 2773)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x5626c6fa99fc) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x5626c6fa9a08, cond=0x5626c6fa99d0) at pthread_cond_wait.c:502
#2  __pthread_cond_wait (cond=cond@entry=0x5626c6fa99d0, mutex=mutex@entry=0x5626c6fa9a08) at pthread_cond_wait.c:655
#3  0x00005626c426639f in qemu_cond_wait_impl (cond=0x5626c6fa99d0, mutex=0x5626c6fa9a08, file=0x5626c428f486 "../ui/vnc-jobs.c", line=215) at ../util/qemu-thread-posix.c:174
#4  0x00005626c3de171d in vnc_worker_thread_loop (queue=queue@entry=0x5626c6fa99d0) at ../ui/vnc-jobs.c:215
#5  0x00005626c3de1fb8 in vnc_worker_thread (arg=arg@entry=0x5626c6fa99d0) at ../ui/vnc-jobs.c:325
#6  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#7  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#8  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 7 (Thread 0x7f7e23dff700 (LWP 2771)):
#0  0x00007f803f1c3819 in __GI___poll (fds=0x7f7e180269f0, nfds=2, timeout=2147483647) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007f8040b9a136 in ?? () from /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
#2  0x00007f8040b9a4c2 in g_main_loop_run () from /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
#3  0x00007f8040ecf147 in ?? () from /usr/lib/x86_64-linux-gnu/libspice-server.so.1
#4  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#5  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 6 (Thread 0x7f80313ff700 (LWP 2769)):
#0  __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f803f2a0714 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x5626c4767d80 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x00005626c4265e39 in qemu_mutex_lock_impl (mutex=0x5626c4767d80 <qemu_global_mutex>, file=0x5626c4381528 "../softmmu/physmem.c", line=2729) at ../util/qemu-thread-posix.c:79
#3  0x00005626c40a866f in qemu_mutex_lock_iothread_impl (file=file@entry=0x5626c4381528 "../softmmu/physmem.c", line=line@entry=2729) at ../softmmu/cpus.c:485
#4  0x00005626c409138e in prepare_mmio_access (mr=<optimized out>) at ../softmmu/physmem.c:2729
#5  0x00005626c4093dab in flatview_read_continue (fv=fv@entry=0x7f80281d5120, addr=addr@entry=57666, attrs=..., ptr=ptr@entry=0x7f803438a000, len=len@entry=2, addr1=<optimized out>, l=<optimized out>, mr=0x5626c6f4a1a0)
    at ../softmmu/physmem.c:2820
#6  0x00005626c4094013 in flatview_read (fv=0x7f80281d5120, addr=addr@entry=57666, attrs=attrs@entry=..., buf=buf@entry=0x7f803438a000, len=len@entry=2) at ../softmmu/physmem.c:2862
#7  0x00005626c4094150 in address_space_read_full (as=0x5626c4767aa0 <address_space_io>, addr=57666, attrs=..., buf=0x7f803438a000, len=2) at ../softmmu/physmem.c:2875
#8  0x00005626c40942b5 in address_space_rw (as=<optimized out>, addr=addr@entry=57666, attrs=..., attrs@entry=..., buf=<optimized out>, len=len@entry=2, is_write=is_write@entry=false) at ../softmmu/physmem.c:2903
#9  0x00005626c403d764 in kvm_handle_io (count=1, size=2, direction=<optimized out>, data=<optimized out>, attrs=..., port=57666) at ../accel/kvm/kvm-all.c:2285
#10 kvm_cpu_exec (cpu=cpu@entry=0x5626c62d32b0) at ../accel/kvm/kvm-all.c:2531
#11 0x00005626c4056725 in kvm_vcpu_thread_fn (arg=arg@entry=0x5626c62d32b0) at ../accel/kvm/kvm-cpus.c:49
#12 0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#13 0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#14 0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 5 (Thread 0x7f8031fff700 (LWP 2768)):
#0  0x00007f803f1c5427 in ioctl () at ../sysdeps/unix/syscall-template.S:78
#1  0x00005626c403d36c in kvm_vcpu_ioctl (cpu=cpu@entry=0x5626c62acd00, type=type@entry=44672) at ../accel/kvm/kvm-all.c:2654
#2  0x00005626c403d4b2 in kvm_cpu_exec (cpu=cpu@entry=0x5626c62acd00) at ../accel/kvm/kvm-all.c:2491
#3  0x00005626c4056725 in kvm_vcpu_thread_fn (arg=arg@entry=0x5626c62acd00) at ../accel/kvm/kvm-cpus.c:49
#4  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#5  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
--Type <RET> for more, q to quit, c to continue without paging--

Thread 4 (Thread 0x7f8032bff700 (LWP 2767)):
#0  0x00007f803f1c5427 in ioctl () at ../sysdeps/unix/syscall-template.S:78
#1  0x00005626c403d36c in kvm_vcpu_ioctl (cpu=cpu@entry=0x5626c62857a0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:2654
#2  0x00005626c403d4b2 in kvm_cpu_exec (cpu=cpu@entry=0x5626c62857a0) at ../accel/kvm/kvm-all.c:2491
#3  0x00005626c4056725 in kvm_vcpu_thread_fn (arg=arg@entry=0x5626c62857a0) at ../accel/kvm/kvm-cpus.c:49
#4  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#5  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 3 (Thread 0x7f80337c5700 (LWP 2766)):
#0  __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f803f2a0714 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x5626c4767d80 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x00005626c4265e39 in qemu_mutex_lock_impl (mutex=0x5626c4767d80 <qemu_global_mutex>, file=0x5626c4381528 "../softmmu/physmem.c", line=2729) at ../util/qemu-thread-posix.c:79
#3  0x00005626c40a866f in qemu_mutex_lock_iothread_impl (file=file@entry=0x5626c4381528 "../softmmu/physmem.c", line=line@entry=2729) at ../softmmu/cpus.c:485
#4  0x00005626c409138e in prepare_mmio_access (mr=<optimized out>) at ../softmmu/physmem.c:2729
#5  0x00005626c4091479 in flatview_write_continue (fv=fv@entry=0x7f80281d5120, addr=addr@entry=502, attrs=..., ptr=ptr@entry=0x7f8041056000, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x5626c6bea380)
    at ../softmmu/physmem.c:2754
#6  0x00005626c4091626 in flatview_write (fv=0x7f80281d5120, addr=addr@entry=502, attrs=attrs@entry=..., buf=buf@entry=0x7f8041056000, len=len@entry=1) at ../softmmu/physmem.c:2799
#7  0x00005626c4094220 in address_space_write (as=0x5626c4767aa0 <address_space_io>, addr=addr@entry=502, attrs=..., buf=0x7f8041056000, len=len@entry=1) at ../softmmu/physmem.c:2891
#8  0x00005626c40942aa in address_space_rw (as=<optimized out>, addr=addr@entry=502, attrs=..., attrs@entry=..., buf=<optimized out>, len=len@entry=1, is_write=is_write@entry=true) at ../softmmu/physmem.c:2901
#9  0x00005626c403d764 in kvm_handle_io (count=1, size=1, direction=<optimized out>, data=<optimized out>, attrs=..., port=502) at ../accel/kvm/kvm-all.c:2285
#10 kvm_cpu_exec (cpu=cpu@entry=0x5626c6234740) at ../accel/kvm/kvm-all.c:2531
#11 0x00005626c4056725 in kvm_vcpu_thread_fn (arg=arg@entry=0x5626c6234740) at ../accel/kvm/kvm-cpus.c:49
#12 0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#13 0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#14 0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 2 (Thread 0x7f8034283700 (LWP 2744)):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x00005626c4266abb in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at ../util/qemu-thread-posix.c:456
#2  qemu_event_wait (ev=ev@entry=0x5626c4783fe8 <rcu_call_ready_event>) at ../util/qemu-thread-posix.c:460
#3  0x00005626c425504a in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:258
#4  0x00005626c4265c2a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#5  0x00007f803f29dfa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
#6  0x00007f803f1ce4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 1 (Thread 0x7f80343e1340 (LWP 2743)):
#0  __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f803f2a0714 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x5626c614be98) at ../nptl/pthread_mutex_lock.c:80
#2  0x00005626c4265e39 in qemu_mutex_lock_impl (mutex=0x5626c614be98, file=0x5626c43c7149 "../monitor/qmp.c", line=80) at ../util/qemu-thread-posix.c:79
#3  0x00005626c41eb686 in monitor_qmp_cleanup_queue_and_resume (mon=0x5626c614bd80) at ../monitor/qmp.c:80
#4  monitor_qmp_event (opaque=0x5626c614bd80, event=<optimized out>) at ../monitor/qmp.c:421
#5  0x00005626c41e9505 in tcp_chr_disconnect_locked (chr=0x5626c5e92fa0) at ../chardev/char-socket.c:507
#6  0x00005626c41e9550 in tcp_chr_disconnect (chr=0x5626c5e92fa0) at ../chardev/char-socket.c:517
#7  0x00005626c41e959e in tcp_chr_hup (channel=<optimized out>, cond=<optimized out>, opaque=<optimized out>) at ../chardev/char-socket.c:557
#8  0x00007f8040b99dd8 in g_main_context_dispatch () from /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
#9  0x00005626c426a848 in glib_pollfds_poll () at ../util/main-loop.c:221
#10 os_host_main_loop_wait (timeout=<optimized out>) at ../util/main-loop.c:244
#11 main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:520
#12 0x00005626c40470a1 in qemu_main_loop () at ../softmmu/vl.c:1678
#13 0x00005626c3db178e in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:50
A debugging session is active.

        Inferior 1 [process 2743] will be detached.

Quit anyway? (y or n)
 
