Error with backups (or any other action on Proxmox) - qmp command failed

mug3nx

New Member
Jan 16, 2024
Hi everyone,

I've been facing an issue with Proxmox for the past few days – every backup job seems to be failing. Upon closer investigation, I discovered that I'm unable to perform any actions, such as migrating VMs, moving disks, or taking snapshots, even via the command line.

When I check the logs for the backup job:

Code:
Jan 16 18:32:34 pve01 pvedaemon[1752172]: INFO: Starting Backup of VM 100 (qemu)
Jan 16 18:32:43 pve01 pvedaemon[989908]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - unable to connect to VM 100 qmp socket - timeout after 51 retries
Jan 16 18:32:45 pve01 pvestatd[1537]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - unable to connect to VM 100 qmp socket - timeout after 51 retries
Jan 16 18:32:48 pve01 pvestatd[1537]: got timeout
Jan 16 18:32:48 pve01 pvestatd[1537]: status update time (10.966 seconds)
Jan 16 18:32:56 pve01 pvestatd[1537]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - unable to connect to VM 100 qmp socket - timeout after 51 retries

If I attempt a "manual backup":

Code:
root@pve01:/mnt/pve/nas01-bck/dump# vzdump 100 --dumpdir /mnt/pve/nas01-bck/dump --mode snapshot
INFO: starting new backup job: vzdump 100 --dumpdir /mnt/pve/nas01-bck/dump --mode snapshot
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2024-01-16 18:50:04
INFO: status = running
INFO: VM Name: WM-100
INFO: include disk 'scsi0' 'nas01-bck:100/vm-100-disk-0.qcow2' 500G
INFO: include disk 'efidisk0' 'nas01-bck:100/vm-100-disk-1.qcow2' 528K
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: snapshots found (not included into backup)
INFO: creating vzdump archive '/mnt/pve/nas01-bck/dump/vzdump-qemu-100-2024_01_16-18_50_04.vma'
INFO: started backup task '52fd8055-4bdd-47fb-aac2-80cc2948e1f6'
INFO: resuming VM again

ERROR: VM 100 qmp command 'cont' failed - got timeout
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 100 failed - VM 100 qmp command 'cont' failed - got timeout
INFO: Failed at 2024-01-16 18:54:50
INFO: Backup job finished with errors

The vm.conf:

Code:
agent: 0
args: -smbios type=11,value=x -smbios type=1,uuid=6a943e8f-d6bc-4746-8193-ca6f5d432bc0,manufacturer=FUJITSU.
bios: ovmf
boot: order=scsi0
cores: 16
efidisk0: nas01-bck:100/vm-100-disk-1.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
machine: pc-q35-7.1
memory: 12288
meta: creation-qemu=7.1.0,ctime=1670938195
name: vm-100
net0: e1000=XX:XX:XX:XX:XX:XX,bridge=vmbr1,firewall=1
numa: 0
onboot: 1
ostype: win11
scsi0: nas01-bck:100/vm-100-disk-0.qcow2,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=6a943e8f-d6bc-4746-8193-ca6f5d432bc0
sockets: 1
vmgenid: f7b35265-ed98-4bbe-9ee4-1cb35f29bc58

The PVE version I'm running:

Code:
root@pve01:~# pveversion  -v
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.0.0-1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1

Actually, the VM itself is running fine, although I can't perform any action on it. I think it might be an issue with the QMP commands.
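(For reference, the QMP socket can be probed directly to see whether QEMU answers at all. A minimal check, assuming the standard /var/run/qemu-server/<vmid>.qmp socket path and that socat is installed; a responsive socket immediately prints a {"QMP": ...} greeting, while a hang here matches the timeouts above:)

Code:
# probe the QMP socket of VM 100 directly (path is the qemu-server default)
echo '{"execute": "qmp_capabilities"}' | socat - UNIX-CONNECT:/var/run/qemu-server/100.qmp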

Any help is welcome

Greetings
 
Hi,
I've been facing an issue with Proxmox for the past few days – every backup job seems to be failing. Upon closer investigation, I discovered that I'm unable to perform any actions, such as migrating VMs, moving disks, or taking snapshots, even via the command line.
did you already shutdown+start the VM after this started happening?
Code:
root@pve01:~# pveversion  -v
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
It's rather old; I'd suggest you upgrade to the current 7.4 or 8.1 and see if the issue still happens:
https://pve.proxmox.com/wiki/Package_Repositories
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#system_software_updates
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
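For the minor upgrade within 7.x this boils down to the usual apt flow (sketch; the 7 to 8 major upgrade needs the full checklist from the wiki article above):

Code:
apt update
apt dist-upgrade   # then reboot into the new kernel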
 
Hi Fiona,

Thanks for your response.
did you already shutdown+start the VM after this started happening?

Yes, I did try that, but the only way to achieve it is by executing "qm stop <vm-id>" and then starting the VM again. It doesn't respond to a shutdown signal. However, even after trying this, the issue persists. I also attempted to reboot the node and restart the services related to pvedaemon, pveproxy, and pvestatd.
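Concretely, the only thing that works is (using VM 100 from the logs above):

Code:
qm stop 100    # force stop, since a clean shutdown never completes
qm start 100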

It's rather old, I'd suggest you upgrade to current 7.4 or 8.1 and see if the issue still happens:
I am considering running an upgrade, rebooting the node, and then starting the VM.

I'll keep you updated with any developments.

Best regards,
 
Yes, I did try that, but the only way to achieve it is by executing "qm stop <vm-id>" and then starting the VM again. It doesn't respond to a shutdown signal. However, even after trying this, the issue persists. I also attempted to reboot the node and restart the services related to pvedaemon, pveproxy, and pvestatd.
What does the CPU/RAM/IO load on the host look like when you attempt the actions? Maybe setting a bandwidth limit will help. This can be done for most actions in Datacenter > Options > Bandwidth Limits. For a backup job it can currently only be done via CLI (use cat /etc/pve/jobs.cfg to see the job's ID), e.g. pvesh set /cluster/backup/backup-e9ee601b-41ad --bwlimit <value in KiB/s>
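For example (job ID and limit below are placeholders):

Code:
cat /etc/pve/jobs.cfg                                            # look up the backup job's ID
pvesh set /cluster/backup/backup-e9ee601b-41ad --bwlimit 51200   # limit that job to ~50 MiB/s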
 
Hi,

During the test, I sometimes encounter a situation where the VM's CPU usage almost reaches 100%, causing the VM to stop working. In such cases, I need to perform a reboot, and afterward, the CPU returns to normal operation.
I'm planning to attempt a backup with bandwidth limits, although I suspect the issue may not be related to bandwidth since other tasks, such as migration, also fail to complete. I am currently awaiting a maintenance window to address this issue.

Best regards,
 
Hi Fiona,

I finally managed to upgrade Proxmox. Right now, I'm running:

Code:
pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.136-1-pve)
pve-manager: 7.4-17 (running version: 7.4-17/513c62be)
pve-kernel-5.15: 7.4-10
pve-kernel-helper: 7.2-14
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.136-1-pve: 5.15.136-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.6-1
proxmox-backup-file-restore: 2.4.6-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.2
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+2
pve-firewall: 4.3-5
pve-firmware: 3.6-6
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.14-pve1

However, the QEMU commands still aren't working properly; I'm still encountering issues.
Code:
"qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries"

Also, upon looking into the syslog, I noticed the following:
Code:
kernel: [ 3524.145521] x86/split lock detection: #AC: CPU 0/KVM/1770 took a split_lock trap at address: 0xfffff8014476215f
I don't know if this could be related to the actual issue.


Additionally, when I execute 'journalctl -b', I see:
Code:
x86/split lock detection: #AC: crashing the kernel on kernel split_locks and warning on user-space split_locks

Greetings
 
Update: I disabled split lock detection in GRUB but I'm still having the issue, so I guess it's not a kernel problem.
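(For reference, disabling it in GRUB typically means adding the split_lock_detect=off kernel parameter; a sketch:)

Code:
# /etc/default/grub (keep existing options, append the parameter)
GRUB_CMDLINE_LINUX_DEFAULT="quiet split_lock_detect=off"
# apply and reboot
update-grub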
 
However, the QEMU commands still aren't working properly; I'm still encountering issues.
Code:
"qmp command 'query-proxmox-support' failed - unable to connect to VM 101 qmp socket - timeout after 51 retries"
Next time it happens, please install the debugger and debug symbols with apt install pve-qemu-kvm-dbg gdb, then share the output of gdb --batch --ex 't a a bt' -p $(cat /var/run/qemu-server/101.pid) and of qm status 101 --verbose, in both cases replacing 101 with the ID of the VM in question.
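That is, for VM 101:

Code:
apt install pve-qemu-kvm-dbg gdb                                     # debugger + debug symbols
gdb --batch --ex 't a a bt' -p $(cat /var/run/qemu-server/101.pid)   # backtrace of all threads
qm status 101 --verbose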

What does the CPU/RAM/IO load on the host look like when you attempt the actions?

You can still attempt to upgrade to Proxmox VE 8. Version 7 is going to be end of life in a few months anyway.
https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
https://pve.proxmox.com/wiki/FAQ

EDIT: mention that debugger and symbols need to be installed first
 
I have some updates: the issue seems to be related to the Windows Server OS running in Proxmox. With a new Debian VM, backups and other queries work well. Additionally, I installed a new Windows Server VM and the queries function properly there, too. I was thinking it might be a guest agent issue, but I installed the agent on the production server with no luck; the issue still persists.

When I attempt these actions, the CPU/RAM/IO load looks normal, nothing weird.

Here is the debug output from when I attempt a backup:

qm status 101 --verbose

Code:
balloon: 12884901888
ballooninfo:
    actual: 12884901888
    max_mem: 12884901888
blockstat:
    efidisk0:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        flush_operations: 0
        flush_total_time_ns: 0
        invalid_flush_operations: 0
        invalid_rd_operations: 0
        invalid_unmap_operations: 0
        invalid_wr_operations: 0
        rd_bytes: 0
        rd_merged: 0
        rd_operations: 0
        rd_total_time_ns: 0
        timed_stats:
        unmap_bytes: 0
        unmap_merged: 0
        unmap_operations: 0
        unmap_total_time_ns: 0
        wr_bytes: 0
        wr_highest_offset: 29184
        wr_merged: 0
        wr_operations: 0
        wr_total_time_ns: 0
    pflash0:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        flush_operations: 0
        flush_total_time_ns: 0
        invalid_flush_operations: 0
        invalid_rd_operations: 0
        invalid_unmap_operations: 0
        invalid_wr_operations: 0
        rd_bytes: 0
        rd_merged: 0
        rd_operations: 0
        rd_total_time_ns: 0
        timed_stats:
        unmap_bytes: 0
        unmap_merged: 0
        unmap_operations: 0
        unmap_total_time_ns: 0
        wr_bytes: 0
        wr_highest_offset: 0
        wr_merged: 0
        wr_operations: 0
        wr_total_time_ns: 0
    scsi0:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        flush_operations: 19
        flush_total_time_ns: 16568344
        idle_time_ns: 4862896611778
        invalid_flush_operations: 0
        invalid_rd_operations: 0
        invalid_unmap_operations: 0
        invalid_wr_operations: 0
        rd_bytes: 471280640
        rd_merged: 0
        rd_operations: 973
        rd_total_time_ns: 4431894870
        timed_stats:
        unmap_bytes: 0
        unmap_merged: 0
        unmap_operations: 0
        unmap_total_time_ns: 0
        wr_bytes: 9728
        wr_highest_offset: 21452800
        wr_merged: 0
        wr_operations: 19
        wr_total_time_ns: 678986
cpus: 16
disk: 0
diskread: 471280640
diskwrite: 9728
maxdisk: 536870912000
maxmem: 12884901888
mem: 11118618424
name: WIN-SQL
netin: 28268
netout: 164
nics:
    tap101i0:
        netin: 28268
        netout: 164
pid: 223953
proxmox-support:
    backup-max-workers: 1
    pbs-dirty-bitmap: 1
    pbs-dirty-bitmap-migration: 1
    pbs-dirty-bitmap-savevm: 1
    pbs-library-version: 1.3.1 (4d450bb294cac5316d2f23bf087c4b02c0543d79)
    pbs-masterkey: 1
    query-bitmap-info: 1
qmpstatus: running
running-machine: pc-q35-7.1+pve0
running-qemu: 7.2.0
status: running
uptime: 4874
vmid: 101
root@pve02:~# qm status 101 --verbose
balloon: 12884901888
ballooninfo:
    actual: 12884901888
    max_mem: 12884901888
blockstat:
    efidisk0:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        flush_operations: 0
        flush_total_time_ns: 0
        invalid_flush_operations: 0
        invalid_rd_operations: 0
        invalid_unmap_operations: 0
        invalid_wr_operations: 0
        rd_bytes: 0
        rd_merged: 0
        rd_operations: 0
        rd_total_time_ns: 0
        timed_stats:
        unmap_bytes: 0
        unmap_merged: 0
        unmap_operations: 0
        unmap_total_time_ns: 0
        wr_bytes: 0
        wr_highest_offset: 29184
        wr_merged: 0
        wr_operations: 0
        wr_total_time_ns: 0
    pflash0:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        flush_operations: 0
        flush_total_time_ns: 0
        invalid_flush_operations: 0
        invalid_rd_operations: 0
        invalid_unmap_operations: 0
        invalid_wr_operations: 0
        rd_bytes: 0
        rd_merged: 0
        rd_operations: 0
        rd_total_time_ns: 0
        timed_stats:
        unmap_bytes: 0
        unmap_merged: 0
        unmap_operations: 0
        unmap_total_time_ns: 0
        wr_bytes: 0
        wr_highest_offset: 0
        wr_merged: 0
        wr_operations: 0
        wr_total_time_ns: 0
    scsi0:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        flush_operations: 19
        flush_total_time_ns: 16568344
        idle_time_ns: 4905131854807
        invalid_flush_operations: 0
        invalid_rd_operations: 0
        invalid_unmap_operations: 0
        invalid_wr_operations: 0
        rd_bytes: 471280640
        rd_merged: 0
        rd_operations: 973
        rd_total_time_ns: 4431894870
        timed_stats:
        unmap_bytes: 0
        unmap_merged: 0
        unmap_operations: 0
        unmap_total_time_ns: 0
        wr_bytes: 9728
        wr_highest_offset: 21452800
        wr_merged: 0
        wr_operations: 19
        wr_total_time_ns: 678986
cpus: 16
disk: 0
diskread: 471280640
diskwrite: 9728
maxdisk: 536870912000
maxmem: 12884901888
mem: 11118618424
name: WIN-SQL
netin: 28268
netout: 164
nics:
    tap101i0:
        netin: 28268
        netout: 164
pid: 223953
proxmox-support:
    backup-max-workers: 1
    pbs-dirty-bitmap: 1
    pbs-dirty-bitmap-migration: 1
    pbs-dirty-bitmap-savevm: 1
    pbs-library-version: 1.3.1 (4d450bb294cac5316d2f23bf087c4b02c0543d79)
    pbs-masterkey: 1
    query-bitmap-info: 1
qmpstatus: running
running-machine: pc-q35-7.1+pve0
running-qemu: 7.2.0
status: running
uptime: 4917
vmid: 101
root@pve02:~# qm status 101 --verbose
cpus: 16
disk: 0
diskread: 0
diskwrite: 0
lock: backup
maxdisk: 536870912000
maxmem: 12884901888
mem: 11115632576
name: WIN-SQL
netin: 28268
netout: 164
nics:
    tap101i0:
        netin: 28268
        netout: 164
pid: 223953
proxmox-support:
qmpstatus: running
status: running
uptime: 4952
vmid: 101
 
gdb --batch --ex 't a a bt' -p $(cat /var/run/qemu-server/101.pid)
Code:
[New LWP 223954]
[New LWP 224085]
[New LWP 224086]
[New LWP 224087]
[New LWP 224088]
[New LWP 224089]
[New LWP 224090]
[New LWP 224091]
[New LWP 224092]
[New LWP 224093]
[New LWP 224094]
[New LWP 224095]
[New LWP 224096]
[New LWP 224097]
[New LWP 224098]
[New LWP 224099]
[New LWP 224100]
[New LWP 224104]
[New LWP 224415]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
__lseek64 (fd=16, offset=offset@entry=77922304, whence=whence@entry=3) at ../sysdeps/unix/sysv/linux/lseek64.c:36
36    ../sysdeps/unix/sysv/linux/lseek64.c: No such file or directory.

Thread 20 (Thread 0x7f17ceb58700 (LWP 224415) "worker"):
#0  futex_abstimed_wait_cancelable (private=0, abstime=0x7f17ceb534c0, clockid=-826985504, expected=0, futex_word=0x55ed555d7fb0) at ../sysdeps/nptl/futex-internal.h:323
#1  __pthread_cond_wait_common (abstime=0x7f17ceb534c0, clockid=-826985504, mutex=0x55ed555d7f20, cond=0x55ed555d7f88) at pthread_cond_wait.c:520
#2  __pthread_cond_timedwait (cond=cond@entry=0x55ed555d7f88, mutex=mutex@entry=0x55ed555d7f20, abstime=abstime@entry=0x7f17ceb534c0) at pthread_cond_wait.c:656
#3  0x000055ed53617011 in qemu_cond_timedwait_ts (cond=cond@entry=0x55ed555d7f88, mutex=mutex@entry=0x55ed555d7f20, ts=ts@entry=0x7f17ceb534c0, file=file@entry=0x55ed53801c35 "../util/thread-pool.c", line=line@entry=90) at ../util/qemu-thread-posix.c:234
#4  0x000055ed53617be0 in qemu_cond_timedwait_impl (cond=0x55ed555d7f88, mutex=0x55ed555d7f20, ms=<optimized out>, file=0x55ed53801c35 "../util/thread-pool.c", line=90) at ../util/qemu-thread-posix.c:248
#5  0x000055ed5362b0b4 in worker_thread (opaque=opaque@entry=0x55ed555d7f10) at ../util/thread-pool.c:90
#6  0x000055ed53616e89 in qemu_thread_start (args=0x7f17ceb53570) at ../util/qemu-thread-posix.c:505
#7  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#8  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 19 (Thread 0x7f146b9bf700 (LWP 224104) "vnc_worker"):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0x55ed569ab1f8) at ../sysdeps/nptl/futex-internal.h:186
#1  __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x55ed569ab208, cond=0x55ed569ab1d0) at pthread_cond_wait.c:508
#2  __pthread_cond_wait (cond=cond@entry=0x55ed569ab1d0, mutex=mutex@entry=0x55ed569ab208) at pthread_cond_wait.c:638
#3  0x000055ed536179cb in qemu_cond_wait_impl (cond=0x55ed569ab1d0, mutex=0x55ed569ab208, file=0x55ed5368e434 "../ui/vnc-jobs.c", line=248) at ../util/qemu-thread-posix.c:220
#4  0x000055ed530a65c3 in vnc_worker_thread_loop (queue=0x55ed569ab1d0) at ../ui/vnc-jobs.c:248
#5  0x000055ed530a7288 in vnc_worker_thread (arg=arg@entry=0x55ed569ab1d0) at ../ui/vnc-jobs.c:361
#6  0x000055ed53616e89 in qemu_thread_start (args=0x7f146b9ba570) at ../util/qemu-thread-posix.c:505
#7  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#8  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 18 (Thread 0x7f147bfff700 (LWP 224100) "CPU 15/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55b451b0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55b451b0) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55b451b0) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f147bffa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 17 (Thread 0x7f1494fff700 (LWP 224099) "CPU 14/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55b3d1e0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55b3d1e0) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55b3d1e0) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f1494ffa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 16 (Thread 0x7f1495bff700 (LWP 224098) "CPU 13/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55b35250, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55b35250) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55b35250) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f1495bfa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 15 (Thread 0x7f14967ff700 (LWP 224097) "CPU 12/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55b2d2c0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55b2d2c0) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55b2d2c0) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f14967fa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 14 (Thread 0x7f14973ff700 (LWP 224096) "CPU 11/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55b245f0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55b245f0) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55b245f0) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f14973fa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 13 (Thread 0x7f1497fff700 (LWP 224095) "CPU 10/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55b1c620, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55b1c620) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55b1c620) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f1497ffa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 12 (Thread 0x7f14bcdff700 (LWP 224094) "CPU 9/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55b14690, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55b14690) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55b14690) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f14bcdfa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 11 (Thread 0x7f14bd9ff700 (LWP 224093) "CPU 8/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55b0c700, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55b0c700) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55b0c700) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f14bd9fa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 10 (Thread 0x7f14be5ff700 (LWP 224092) "CPU 7/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55b047c0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55b047c0) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55b047c0) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f14be5fa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 9 (Thread 0x7f14bf1ff700 (LWP 224091) "CPU 6/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55afcab0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55afcab0) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55afcab0) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f14bf1fa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
 
Code:
Thread 8 (Thread 0x7f14bfdff700 (LWP 224090) "CPU 5/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55af45d0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55af45d0) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55af45d0) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f14bfdfa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 7 (Thread 0x7f17c4dff700 (LWP 224089) "CPU 4/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55aec940, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55aec940) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55aec940) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f17c4dfa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 6 (Thread 0x7f17c59ff700 (LWP 224088) "CPU 3/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55ae4bc0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55ae4bc0) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55ae4bc0) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f17c59fa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 5 (Thread 0x7f17c65ff700 (LWP 224087) "CPU 2/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55adceb0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55adceb0) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55adceb0) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f17c65fa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 4 (Thread 0x7f17c71ff700 (LWP 224086) "CPU 1/KVM"):
#0  0x00007f17da1cf277 in ioctl () at ../sysdeps/unix/syscall-template.S:120
#1  0x000055ed5348f997 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55ed55ad5350, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3035
#2  0x000055ed5348fb01 in kvm_cpu_exec (cpu=cpu@entry=0x55ed55ad5350) at ../accel/kvm/kvm-all.c:2850
#3  0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55ad5350) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055ed53616e89 in qemu_thread_start (args=0x7f17c71fa570) at ../util/qemu-thread-posix.c:505
#5  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 3 (Thread 0x7f17c7dbf700 (LWP 224085) "CPU 0/KVM"):
#0  __lll_lock_wait (futex=futex@entry=0x55ed53e57200 <qemu_global_mutex>, private=0) at lowlevellock.c:52
#1  0x00007f17da2bc843 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55ed53e57200 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x000055ed53617264 in qemu_mutex_lock_impl (mutex=0x55ed53e57200 <qemu_global_mutex>, file=0x55ed537824c0 "../softmmu/physmem.c", line=2765) at ../util/qemu-thread-posix.c:89
#3  0x000055ed532401c6 in qemu_mutex_lock_iothread_impl (file=file@entry=0x55ed537824c0 "../softmmu/physmem.c", line=line@entry=2765) at ../softmmu/cpus.c:503
#4  0x000055ed5340c3e6 in prepare_mmio_access (mr=<optimized out>) at ../softmmu/physmem.c:2765
#5  flatview_read_continue (fv=fv@entry=0x7f14b41f93e0, addr=addr@entry=53954, attrs=attrs@entry=..., ptr=ptr@entry=0x7f17dbeb3000, len=len@entry=2, addr1=<optimized out>, l=<optimized out>, mr=<optimized out>) at ../softmmu/physmem.c:2890
#6  0x000055ed5340c530 in flatview_read (fv=0x7f14b41f93e0, addr=addr@entry=53954, attrs=attrs@entry=..., buf=buf@entry=0x7f17dbeb3000, len=len@entry=2) at ../softmmu/physmem.c:2934
#7  0x000055ed5340c87d in address_space_read_full (len=2, buf=0x7f17dbeb3000, attrs=..., addr=53954, as=0x55ed53e5c020 <address_space_io>) at ../softmmu/physmem.c:2947
#8  address_space_rw (as=as@entry=0x55ed53e5c020 <address_space_io>, addr=addr@entry=53954, attrs=attrs@entry=..., buf=0x7f17dbeb3000, len=len@entry=2, is_write=is_write@entry=false) at ../softmmu/physmem.c:2975
#9  0x000055ed5348fd17 in kvm_handle_io (count=1, size=2, direction=<optimized out>, data=<optimized out>, attrs=..., port=53954) at ../accel/kvm/kvm-all.c:2639
#10 kvm_cpu_exec (cpu=cpu@entry=0x55ed55aa86c0) at ../accel/kvm/kvm-all.c:2890
#11 0x000055ed5349117d in kvm_vcpu_thread_fn (arg=arg@entry=0x55ed55aa86c0) at ../accel/kvm/kvm-accel-ops.c:51
#12 0x000055ed53616e89 in qemu_thread_start (args=0x7f17c7dba570) at ../util/qemu-thread-posix.c:505
#13 0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#14 0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 2 (Thread 0x7f17cf45a700 (LWP 223954) "call_rcu"):
#0  __lll_lock_wait (futex=futex@entry=0x55ed53e57200 <qemu_global_mutex>, private=0) at lowlevellock.c:52
#1  0x00007f17da2bc843 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x55ed53e57200 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x000055ed53617264 in qemu_mutex_lock_impl (mutex=0x55ed53e57200 <qemu_global_mutex>, file=0x55ed537ff9e7 "../util/rcu.c", line=269) at ../util/qemu-thread-posix.c:89
#3  0x000055ed532401c6 in qemu_mutex_lock_iothread_impl (file=file@entry=0x55ed537ff9e7 "../util/rcu.c", line=line@entry=269) at ../softmmu/cpus.c:503
#4  0x000055ed5362096e in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:269
#5  0x000055ed53616e89 in qemu_thread_start (args=0x7f17cf455570) at ../util/qemu-thread-posix.c:505
#6  0x00007f17da2b9ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#7  0x00007f17da1d9a6f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 1 (Thread 0x7f17cf5bc040 (LWP 223953) "kvm"):
#0  __lseek64 (fd=16, offset=offset@entry=77922304, whence=whence@entry=3) at ../sysdeps/unix/sysv/linux/lseek64.c:36
#1  0x000055ed5355f0bc in find_allocation (bs=0x55ed5588d380, bs=0x55ed5588d380, hole=<synthetic pointer>, data=<synthetic pointer>, start=77922304) at ../block/file-posix.c:2736
#2  raw_co_block_status (bs=0x55ed5588d380, want_zero=<optimized out>, offset=77922304, bytes=1048576, pnum=0x7f14975ffdd0, map=0x7f14975ffd10, file=0x7f14975ffd18) at ../block/file-posix.c:2839
#3  0x000055ed5350271c in bdrv_co_block_status (bs=0x55ed5588d380, want_zero=want_zero@entry=true, offset=77922304, bytes=1048576, pnum=pnum@entry=0x7f14975ffdd0, map=map@entry=0x0, file=0x0) at ../block/io.c:2482
#4  0x000055ed53502965 in bdrv_co_block_status (bs=bs@entry=0x55ed55887060, want_zero=want_zero@entry=true, offset=offset@entry=77594624, bytes=<optimized out>, bytes@entry=1048576, pnum=pnum@entry=0x7f14975fff70, map=map@entry=0x0, file=0x0) at ../block/io.c:2582
#5  0x000055ed5350584c in bdrv_co_common_block_status_above (bs=0x55ed55887060, bs@entry=0x100000, base=0x0, base@entry=0x7f14975fff70, include_base=false, want_zero=<optimized out>, offset=offset@entry=77594624, bytes=1048576, bytes@entry=0, pnum=0x7f14975fff70, map=0x0, file=0x0, depth=0x7f14975ffe74) at ../block/io.c:2649
#6  0x000055ed534d0f4b in bdrv_common_block_status_above (bs=0x100000, base=0x7f14975fff70, base@entry=0x0, include_base=include_base@entry=false, want_zero=want_zero@entry=true, offset=offset@entry=77594624, bytes=0, bytes@entry=1048576, pnum=<optimized out>, map=<optimized out>, file=<optimized out>, depth=<optimized out>) at block/block-gen.c:1074
#7  0x000055ed53505af0 in bdrv_block_status_above (bs=<optimized out>, base=base@entry=0x0, offset=offset@entry=77594624, bytes=bytes@entry=1048576, pnum=pnum@entry=0x7f14975fff70, map=map@entry=0x0, file=0x0) at ../block/io.c:2726
#8  0x000055ed534fb577 in block_copy_block_status (pnum=<synthetic pointer>, bytes=1048576, offset=<optimized out>, s=0x55ed55de2120) at ../block/block-copy.c:593
#9  block_copy_dirty_clusters (call_state=0x55ed56843f80) at ../block/block-copy.c:739
#10 block_copy_common (call_state=<optimized out>) at ../block/block-copy.c:833
#11 block_copy_async_co_entry (opaque=<optimized out>) at ../block/block-copy.c:888
#12 0x000055ed5362ae8b in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at ../util/coroutine-ucontext.c:177
#13 0x00007f17da12bd80 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#14 0x00007f17c7db98e0 in ?? ()
#15 0x0000000000000000 in ?? ()

[Inferior 1 (process 223953) detached]
 
Code:
Thread 1 (Thread 0x7f17cf5bc040 (LWP 223953) "kvm"):
#0  __lseek64 (fd=16, offset=offset@entry=77922304, whence=whence@entry=3) at ../sysdeps/unix/sysv/linux/lseek64.c:36
#1  0x000055ed5355f0bc in find_allocation (bs=0x55ed5588d380, bs=0x55ed5588d380, hole=<synthetic pointer>, data=<synthetic pointer>, start=77922304) at ../block/file-posix.c:2736
#2  raw_co_block_status (bs=0x55ed5588d380, want_zero=<optimized out>, offset=77922304, bytes=1048576, pnum=0x7f14975ffdd0, map=0x7f14975ffd10, file=0x7f14975ffd18) at ../block/file-posix.c:2839

Does this change in any way when you run the command a minute or so later? Otherwise, it's stuck right here, interacting with the underlying image file. Please check your storage and physical disk, e.g. with smartctl.
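A quick check might look like this (device names are examples, adjust to your disks):

Code:
smartctl -a /dev/sda          # SMART health, error log and attributes
smartctl -t short /dev/sda    # start a short self-test; results show up in the -a output later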

Code:
scsi0: nas01-bck:100/vm-100-disk-0.qcow2,size=500G
Assuming this is network storage, is the backup running over the same network? It might also be hanging because of the network. Did you already try setting bandwidth limits?
 
Hi Fiona,

I've managed to solve the problem. For some reason, the Proxmox storage was using CIFS, and once the VM disk exceeded 100 GB, CIFS didn't function properly. I'm not sure whether the issue lies within Proxmox or the destination storage, but with smaller disk sizes CIFS works well; once the disk size exceeds 150 GB, it doesn't.
I've migrated the storage to NFS, and everything is working smoothly now.
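In case it helps someone else, the change in /etc/pve/storage.cfg essentially amounts to replacing the cifs entry with an nfs one (values below are placeholders, not my real config):

Code:
nfs: nas01-bck
        export /export/backups
        path /mnt/pve/nas01-bck
        server 192.168.1.10
        content images,backup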
Thank you for your assistance.

Best regards,
 
Glad to hear :) But still strange. Haven't heard other reports about that (yet).