Snapshot causes VM to become unresponsive.

FYI, the fix is included in pve-qemu-kvm >= 9.0.2-4, currently available in the testing repository: https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_test_repo

If you'd like to install the package, you can temporarily enable the repository (e.g. via the Repositories section in the UI), run apt update, install the package with apt install pve-qemu-kvm, disable the repository again, and then run apt update once more.
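For reference, the same steps on the CLI look roughly like this (a sketch assuming PVE 8 on Debian bookworm; the .list file name is arbitrary):

Bash:
# temporarily enable the pvetest repository
echo "deb http://download.proxmox.com/debian/pve bookworm pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update
apt install pve-qemu-kvm                   # pulls in the fixed build
rm /etc/apt/sources.list.d/pvetest.list    # disable the repository again
apt update                                 # refresh the index without pvetest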

EDIT: a VM has to be shut down and started again (the Reboot button in the UI also works; a reboot inside the guest is not enough) or migrated to an upgraded node to start using the new version.
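You can check from the host whether a VM already uses the new binary via the running-qemu field (VM ID 100 is just an example):

Bash:
qm status 100 --verbose | grep running-qemu   # version the VM is actually running
qm shutdown 100 && qm start 100               # full stop/start to pick up the new binary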
 
Hi Fiona, not to be a pain, but I can't run test repos in a production environment. I'm presuming the fix will make its way to the enterprise repository at some point. I've only recently moved over to Proxmox from VMware, so I am unfamiliar with the timescales for releases.

Thanks.
 
You don't need to run the whole testing repository: just install the single package and then disable the repository again. Alternatively, you can downgrade the QEMU package for the time being. I can't give you an ETA for when the fix will be in the enterprise repository, because I won't be the one moving it there, but I'd expect it to take about 1-2 weeks.
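To see which versions are available from which repository before deciding, standard apt tooling works, e.g.:

Bash:
apt policy pve-qemu-kvm   # installed version plus candidate versions per repository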
 
Hi @fiona.
We have this problem as well.
Host CPU AMD EPYC 7313 running PVE 8.2.7 from Enterprise repo with pve-qemu-kvm 9.0.2-3.
The VM we are experiencing this with is a q35 machine with x86-64-v2-AES CPU running Rocky Linux 8.
Virtual disks (virtio-scsi-single, discard, iothread, io_uring) are backed by Ceph pools (NVMe and SATA).
At some point during the snapshot (with memory) we observe (with atop in the VM) incredibly high I/O pressure (utilization & wait), rendering the VM unusable.
Machine state returns to normal after reboot (which naturally takes ages).
No problem when snapshotting without memory.
Can provide further details if needed.
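As an aside: the I/O pressure we see in atop can also be read directly from the kernel's PSI interface inside the guest (assuming the guest kernel exposes /proc/pressure):

Bash:
# "some" = share of time at least one task was stalled on I/O,
# "full" = share of time all non-idle tasks were stalled on I/O
cat /proc/pressure/io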
 
Hi,
if you can't hold off on snapshots with memory until the fix is available in the enterprise repository, I'd suggest downgrading QEMU with apt install pve-qemu-kvm=8.1.5-6. You can also try the proposed fix by upgrading just the QEMU package to the version from the testing repository: https://forum.proxmox.com/threads/snapshot-causes-vm-to-become-unresponsive.153483/post-719882
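If you downgrade, you may also want to hold the package so a routine apt upgrade doesn't immediately pull the affected version back in (standard apt tooling, shown as a sketch):

Bash:
apt-mark hold pve-qemu-kvm     # keep apt from upgrading the package again
apt-mark unhold pve-qemu-kvm   # release the hold once the fixed build is available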
 
Thanks, Fiona.
The update to pve-qemu-kvm 9.0.2-4 fixed the problem for us.
Best regards
Stefan
 
Hi,
I tested pve-qemu-kvm 9.0.2-4 on my second cluster machine, and there is no more saturation/stalling with my test cases.
It seems OK now.

I have a question, @fiona, concerning this fixed issue: the bug seems to have been propagated via updated packages to the Enterprise repository, as mentioned by @dave10x and @Stefan Radman. Could you explain the timeline between the updated packages in test/pve-no-subscription/enterprise, the first detection of the problem (this thread), and the resolution via a fixed package?
The goal is to know how long the bug was present and active in the different repositories; beyond https://pve.proxmox.com/wiki/Package_Repositories, this would give us some more insight into how your update process works.
Regards.
 
Hi,
unfortunately, the cause of the issue was only identified (October 29) after the affected pve-qemu-kvm package was already in the enterprise repository (October 21). There was not much noise about the issue, just a couple of reports in the community forum, and given the sheer number of users, we get such reports all the time (in relation to qcow2 snapshots on slow NFS storages, for example). So it was not clear that it was a general issue, and it was not even clear that the issue was in the QEMU package (it could have been the kernel too, for example). That only became clear after reports that downgrading the QEMU package helped (also October 29). If you have further questions, please refer to enterprise support.

EDIT: forgot to mention that the package with the fix should be available in the enterprise repository in the following days. I can't give you an exact time, because I'm not the one deciding it.
 
Hello, that didn't help in my case. Snapshots both with and without RAM result in a freeze of the VM.

Apt is up to date.

CPU(s) 12 x AMD Ryzen 5 PRO 3600 6-Core Processor (1 Socket)
Kernel version Linux 6.8.12-8-pve (2025-01-24T12:32Z)
Boot mode EFI
Manager version pve-manager/8.3.4/65224a0f9cd294a3

Code:
root@4D4033C:~# dpkg -l | grep pve-qemu
ii  pve-qemu-kvm                         9.0.2-5                             amd64        Full virtualization on x86 hardware
 
Hi,
please share the output of pveversion -v, as well as the VM configuration (qm config <ID>) and the output of qm status <ID> --verbose while the VM is frozen. The snapshot task log would also be interesting. What exactly does "freeze" mean? I.e., a black/frozen screen, no network response, or something else?
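If it helps, all of that can be collected in one go (VM ID 100 and the output path are just examples):

Bash:
{
  pveversion -v
  qm config 100
  qm status 100 --verbose    # capture this part while the VM is frozen
} > /tmp/vm100-debug.txt 2>&1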
 
By "freeze" I meant: the VM is locked (status: snapshot) and there is no connection to the console. The snapshot takes more than 5 minutes. No connection via SSH/SMB either.

Code:
root@4D4033C:~# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.8.12-8-pve)
pve-manager: 8.3.4 (running version: 8.3.4/65224a0f9cd294a3)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-8
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
dnsmasq: 2.90-4~deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.2.0
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.3-1
proxmox-backup-file-restore: 3.3.3-1
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.4
pve-cluster: 8.0.10
pve-container: 5.2.4
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-3
pve-ha-manager: 4.0.6
pve-i18n: 3.3.3
pve-qemu-kvm: 9.0.2-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.8
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0

Code:
root@4D4033C:~# qm config 100
agent: 1
boot: order=ide2;scsi0
cores: 3
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 8000
meta: creation-qemu=8.1.5,ctime=1740153721
name: FreelancerStorage
net0: virtio=BC:24:11:DB:E2:AB,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local:100/vm-100-disk-0.qcow2,format=qcow2
scsihw: virtio-scsi-single
smbios1: uuid=df14e223-1184-44b6-96ed-1274a137280f
sockets: 3
vmgenid: 8b9d7e9f-3e37-4680-9f28-fd7a26c8f94d

Code:
root@4D4033C:~# qm status 100 --verbose
balloon: 8388608000
ballooninfo:
        actual: 8388608000
        free_mem: 2244050944
        last_update: 1740484756
        major_page_faults: 1019
        max_mem: 8388608000
        mem_swapped_in: 0
        mem_swapped_out: 0
        minor_page_faults: 1916303
        total_mem: 8125947904
blockstat:
        ide2:
                account_failed: 1
                account_invalid: 1
                failed_flush_operations: 0
                failed_rd_operations: 0
                failed_unmap_operations: 0
                failed_wr_operations: 0
                failed_zone_append_operations: 0
                flush_operations: 0
                flush_total_time_ns: 0
                idle_time_ns: 15672575976068
                invalid_flush_operations: 0
                invalid_rd_operations: 0
                invalid_unmap_operations: 0
                invalid_wr_operations: 0
                invalid_zone_append_operations: 0
                rd_bytes: 92
                rd_merged: 0
                rd_operations: 4
                rd_total_time_ns: 62560
                timed_stats:
                unmap_bytes: 0
                unmap_merged: 0
                unmap_operations: 0
                unmap_total_time_ns: 0
                wr_bytes: 0
                wr_highest_offset: 0
                wr_merged: 0
                wr_operations: 0
                wr_total_time_ns: 0
                zone_append_bytes: 0
                zone_append_merged: 0
                zone_append_operations: 0
                zone_append_total_time_ns: 0
        scsi0:
                account_failed: 1
                account_invalid: 1
                failed_flush_operations: 0
                failed_rd_operations: 0
                failed_unmap_operations: 0
                failed_wr_operations: 0
                failed_zone_append_operations: 0
                flush_operations: 19547
                flush_total_time_ns: 767690339916
                idle_time_ns: 2472191543
                invalid_flush_operations: 0
                invalid_rd_operations: 0
                invalid_unmap_operations: 0
                invalid_wr_operations: 0
                invalid_zone_append_operations: 0
                rd_bytes: 3417439232
                rd_merged: 0
                rd_operations: 162519
                rd_total_time_ns: 1444162873528
                timed_stats:
                unmap_bytes: 0
                unmap_merged: 0
                unmap_operations: 0
                unmap_total_time_ns: 0
                wr_bytes: 33588420608
                wr_highest_offset: 6222232289280
                wr_merged: 0
                wr_operations: 65243
                wr_total_time_ns: 8938840290915
                zone_append_bytes: 0
                zone_append_merged: 0
                zone_append_operations: 0
                zone_append_total_time_ns: 0
cpus: 9
disk: 0
diskread: 3417439324
diskwrite: 33588420608
freemem: 2244050944
maxdisk: 0
maxmem: 8388608000
mem: 5881896960
name: FreelancerStorage
netin: 24711251445
netout: 12526896290
nics:
        tap100i0:
                netin: 24711251445
                netout: 12526896290
pid: 2685
proxmox-support:
        backup-fleecing: 1
        backup-max-workers: 1
        pbs-dirty-bitmap: 1
        pbs-dirty-bitmap-migration: 1
        pbs-dirty-bitmap-savevm: 1
        pbs-library-version: 1.5.1 (UNKNOWN)
        pbs-masterkey: 1
        query-bitmap-info: 1
qmpstatus: running
running-machine: pc-i440fx-9.0+pve0
running-qemu: 9.0.2
status: running
uptime: 57418
vmid: 100
 
Code:
                wr_highest_offset: 6222232289280
Seems like the disk is very large, ~6 TiB at least? While the disk snapshot is being taken, the VM is paused to guarantee a consistent state. Maybe it simply does take longer than 5 minutes? What kind of storage is local, and how fast is the underlying physical disk?
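E.g. a quick sequential-write check with dd (a sketch only; /var/lib/vz is the default directory behind the local storage, and oflag=direct bypasses the page cache):

Bash:
dd if=/dev/zero of=/var/lib/vz/ddtest bs=1M count=1024 oflag=direct status=progress
rm /var/lib/vz/ddtest   # clean up the test file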

You could run apt install pve-qemu-kvm-dbgsym gdb. To see what the QEMU process is doing while the freeze is happening, run gdb --batch --ex 't a a bt' -p $(cat /var/run/qemu-server/100.pid)
 
Used space is 5.1 TiB. Local storage is a Linux RAID on two ST16000NM000J-2T HDDs.
A 1 GiB test file can be written at 190 MiB/s.

I'll split the output into two posts
Bash:
root@4D4033C:/var/lib/vz/images# gdb --batch --ex 't a a bt' -p $(cat /var/run/qemu-server/100.pid)
[New LWP 2686]
[New LWP 2763]
[New LWP 2764]
[New LWP 2765]
[New LWP 2766]
[New LWP 2767]
[New LWP 2768]
[New LWP 2769]
[New LWP 2770]
[New LWP 2771]
[New LWP 2772]
[New LWP 2774]
[New LWP 191766]
[New LWP 191770]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x0000751d9bf56b95 in ?? () from /lib/x86_64-linux-gnu/liburing.so.2

Thread 15 (Thread 0x751d984e0480 (LWP 191770) "iou-wrk-2685"):
#0  0x0000000000000000 in ?? ()
Backtrace stopped: Cannot access memory at address 0x0

Thread 14 (Thread 0x751d984e0480 (LWP 191766) "iou-wrk-2685"):
#0  0x0000000000000000 in ?? ()
Backtrace stopped: Cannot access memory at address 0x0

Thread 13 (Thread 0x751b8ae006c0 (LWP 2774) "vnc_worker"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x5d9563712748) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x5d9563712748, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x0000751d9aedef7b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5d9563712748, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x0000751d9aee15d8 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5d9563712758, cond=0x5d9563712720) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x5d9563712720, mutex=mutex@entry=0x5d9563712758) at ./nptl/pthread_cond_wait.c:618
#5  0x00005d955e53a1db in qemu_cond_wait_impl (cond=0x5d9563712720, mutex=0x5d9563712758, file=0x5d955e5ee034 "../ui/vnc-jobs.c", line=248) at ../util/qemu-thread-posix.c:225
#6  0x00005d955df540fb in vnc_worker_thread_loop (queue=queue@entry=0x5d9563712720) at ../ui/vnc-jobs.c:248
#7  0x00005d955df54dd8 in vnc_worker_thread (arg=arg@entry=0x5d9563712720) at ../ui/vnc-jobs.c:362
#8  0x00005d955e5395e8 in qemu_thread_start (args=0x5d95637127b0) at ../util/qemu-thread-posix.c:541
#9  0x0000751d9aee21c4 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x0000751d9af6285c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 12 (Thread 0x751b928006c0 (LWP 2772) "CPU 8/KVM"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x5d956289985c) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x5d956289985c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x0000751d9aedef7b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5d956289985c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x0000751d9aee15d8 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5d955f409c00 <bql>, cond=0x5d9562899830) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x5d9562899830, mutex=mutex@entry=0x5d955f409c00 <bql>) at ./nptl/pthread_cond_wait.c:618
#5  0x00005d955e53a1db in qemu_cond_wait_impl (cond=0x5d9562899830, mutex=0x5d955f409c00 <bql>, file=0x5d955e6aa9f3 "../system/cpus.c", line=451) at ../util/qemu-thread-posix.c:225
#6  0x00005d955e15e56e in qemu_wait_io_event (cpu=cpu@entry=0x5d95628906d0) at ../system/cpus.c:451
#7  0x00005d955e3828a8 in kvm_vcpu_thread_fn (arg=arg@entry=0x5d95628906d0) at ../accel/kvm/kvm-accel-ops.c:55
#8  0x00005d955e5395e8 in qemu_thread_start (args=0x5d9562899870) at ../util/qemu-thread-posix.c:541
#9  0x0000751d9aee21c4 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x0000751d9af6285c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 11 (Thread 0x751b932006c0 (LWP 2771) "CPU 7/KVM"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x5d956288fcec) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x5d956288fcec, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x0000751d9aedef7b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5d956288fcec, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x0000751d9aee15d8 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5d955f409c00 <bql>, cond=0x5d956288fcc0) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x5d956288fcc0, mutex=mutex@entry=0x5d955f409c00 <bql>) at ./nptl/pthread_cond_wait.c:618
#5  0x00005d955e53a1db in qemu_cond_wait_impl (cond=0x5d956288fcc0, mutex=0x5d955f409c00 <bql>, file=0x5d955e6aa9f3 "../system/cpus.c", line=451) at ../util/qemu-thread-posix.c:225
#6  0x00005d955e15e56e in qemu_wait_io_event (cpu=cpu@entry=0x5d9562886ad0) at ../system/cpus.c:451
#7  0x00005d955e3828a8 in kvm_vcpu_thread_fn (arg=arg@entry=0x5d9562886ad0) at ../accel/kvm/kvm-accel-ops.c:55
#8  0x00005d955e5395e8 in qemu_thread_start (args=0x5d956288fd00) at ../util/qemu-thread-posix.c:541
#9  0x0000751d9aee21c4 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x0000751d9af6285c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 10 (Thread 0x751b93c006c0 (LWP 2770) "CPU 6/KVM"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x5d956288609c) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x5d956288609c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x0000751d9aedef7b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5d956288609c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x0000751d9aee15d8 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5d955f409c00 <bql>, cond=0x5d9562886070) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x5d9562886070, mutex=mutex@entry=0x5d955f409c00 <bql>) at ./nptl/pthread_cond_wait.c:618
#5  0x00005d955e53a1db in qemu_cond_wait_impl (cond=0x5d9562886070, mutex=0x5d955f409c00 <bql>, file=0x5d955e6aa9f3 "../system/cpus.c", line=451) at ../util/qemu-thread-posix.c:225
#6  0x00005d955e15e56e in qemu_wait_io_event (cpu=cpu@entry=0x5d956287cf10) at ../system/cpus.c:451
#7  0x00005d955e3828a8 in kvm_vcpu_thread_fn (arg=arg@entry=0x5d956287cf10) at ../accel/kvm/kvm-accel-ops.c:55
#8  0x00005d955e5395e8 in qemu_thread_start (args=0x5d95628860b0) at ../util/qemu-thread-posix.c:541
#9  0x0000751d9aee21c4 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x0000751d9af6285c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 9 (Thread 0x751d8cc006c0 (LWP 2769) "CPU 5/KVM"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x5d956287c4d8) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x5d956287c4d8, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x0000751d9aedef7b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5d956287c4d8, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x0000751d9aee15d8 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5d955f409c00 <bql>, cond=0x5d956287c4b0) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x5d956287c4b0, mutex=mutex@entry=0x5d955f409c00 <bql>) at ./nptl/pthread_cond_wait.c:618
#5  0x00005d955e53a1db in qemu_cond_wait_impl (cond=0x5d956287c4b0, mutex=0x5d955f409c00 <bql>, file=0x5d955e6aa9f3 "../system/cpus.c", line=451) at ../util/qemu-thread-posix.c:225
#6  0x00005d955e15e56e in qemu_wait_io_event (cpu=cpu@entry=0x5d9562873350) at ../system/cpus.c:451
#7  0x00005d955e3828a8 in kvm_vcpu_thread_fn (arg=arg@entry=0x5d9562873350) at ../accel/kvm/kvm-accel-ops.c:55
#8  0x00005d955e5395e8 in qemu_thread_start (args=0x5d956287c4f0) at ../util/qemu-thread-posix.c:541
#9  0x0000751d9aee21c4 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x0000751d9af6285c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 8 (Thread 0x751d8d6006c0 (LWP 2768) "CPU 4/KVM"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x5d956287225c) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x5d956287225c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x0000751d9aedef7b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x5d956287225c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x0000751d9aee15d8 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x5d955f409c00 <bql>, cond=0x5d9562872230) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x5d9562872230, mutex=mutex@entry=0x5d955f409c00 <bql>) at ./nptl/pthread_cond_wait.c:618
#5  0x00005d955e53a1db in qemu_cond_wait_impl (cond=0x5d9562872230, mutex=0x5d955f409c00 <bql>, file=0x5d955e6aa9f3 "../system/cpus.c", line=451) at ../util/qemu-thread-posix.c:225
#6  0x00005d955e15e56e in qemu_wait_io_event (cpu=cpu@entry=0x5d9562869390) at ../system/cpus.c:451
#7  0x00005d955e3828a8 in kvm_vcpu_thread_fn (arg=arg@entry=0x5d9562869390) at ../accel/kvm/kvm-accel-ops.c:55
#8  0x00005d955e5395e8 in qemu_thread_start (args=0x5d9562872270) at ../util/qemu-thread-posix.c:541
#9  0x0000751d9aee21c4 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x0000751d9af6285c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81