Random crashes with a SPICE VM: QEMU free(): corrupted unsorted chunks

garbled

I'm starting to see random crashes with one of my VMs after upgrading to 8.x a few weeks ago. I thought it was maybe a one-off, but this one VM keeps crashing over and over. Prior to this, this particular VM had months of uptime under constant use.

The syslog on the Proxmox server shows the following. I'm particularly concerned about the first message here, though; that looks like a bug in QEMU?

Code:
Feb 06 09:20:07 ukdah QEMU[2292880]: free(): corrupted unsorted chunks
Feb 06 09:20:07 ukdah spiceproxy[3276380]: worker exit
Feb 06 09:20:07 ukdah pvestatd[1881]: VM 146 qmp command failed - VM 146 not running
Feb 06 09:20:07 ukdah pvestatd[1881]: VM 146 qmp command failed - VM 146 not running
Feb 06 09:20:07 ukdah pvestatd[1881]: VM 146 not running
Feb 06 09:20:08 ukdah kernel: fwbr146i0: port 2(tap146i0) entered disabled state
Feb 06 09:20:08 ukdah kernel: tap146i0 (unregistering): left allmulticast mode
Feb 06 09:20:08 ukdah kernel: fwbr146i0: port 2(tap146i0) entered disabled state
Feb 06 09:20:09 ukdah systemd[1]: 146.scope: Deactivated successfully.
Feb 06 09:20:09 ukdah systemd[1]: 146.scope: Consumed 2w 4d 57min 5.493s CPU time.
Feb 06 09:20:09 ukdah qmeventd[2136974]: Starting cleanup for 146
Feb 06 09:20:09 ukdah kernel: fwbr146i0: port 1(fwln146i0) entered disabled state
Feb 06 09:20:09 ukdah kernel: vmbr0: port 8(fwpr146p0) entered disabled state
Feb 06 09:20:09 ukdah kernel: fwln146i0 (unregistering): left allmulticast mode
Feb 06 09:20:09 ukdah kernel: fwln146i0 (unregistering): left promiscuous mode
Feb 06 09:20:09 ukdah kernel: fwbr146i0: port 1(fwln146i0) entered disabled state
Feb 06 09:20:09 ukdah kernel: fwpr146p0 (unregistering): left allmulticast mode
Feb 06 09:20:09 ukdah kernel: fwpr146p0 (unregistering): left promiscuous mode
Feb 06 09:20:09 ukdah kernel: vmbr0: port 8(fwpr146p0) entered disabled state
Feb 06 09:20:10 ukdah qmeventd[2136974]: Finished cleanup for 146
Feb 06 09:20:14 ukdah corosync[1563]:   [TOTEM ] Retransmit List: 11157ca
Feb 06 09:20:17 ukdah pve-ha-lrm[2137247]: starting service vm:146
 
Again today with:

Code:
Feb 09 06:12:32 ukdah kernel: SPICE Worker[2137550]: segfault at 7f5f240007b0 ip 00007f666590aedd sp 00007f603ddf5dd0 error 4 in libc.so.6[7f666589c000+155000] likely on CPU 19 (core 1, socket 1)
Feb 09 06:12:32 ukdah kernel: Code: 08 48 8b 4f 08 48 89 c8 48 83 e0 f8 48 3b 04 07 0f 85 a9 00 00 00 f3 0f 6f 47 10 48 8b 57 18 66 48 0f 7e c0 48 3b 78 18 75 7b <48> 3b 7a 10 75 75 48 8b 77 10 48 89 50 18 66 0f d6 42 10 48 81 f9

It doesn't seem limited to this one host; if I move the VM to a different one, I still have the problem.
 
Hi,
please post the output of pveversion -v and qm config 146. Unfortunately, that sounds like a memory corruption issue, which is often very difficult to debug. A core dump might still help to identify the problematic area. You can run apt install systemd-coredump; a core dump should then be generated the next time the crash happens.
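If you want to double-check that the handler is active before the next crash, something like this should work (assuming a standard Debian 12 / PVE 8 host):
Code:
apt install systemd-coredump
# systemd-coredump registers itself as the kernel's core handler:
cat /proc/sys/kernel/core_pattern
# the output should start with: |/lib/systemd/systemd-coredump
# after the next crash, list any captured dumps with:
coredumpctl list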
 
Code:
root@ukdah:~# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.11-7-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-9
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
pve-kernel-5.0: 6.0-11
pve-kernel-5.15.131-2-pve: 5.15.131-3
pve-kernel-5.4.166-1-pve: 5.4.166-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph: 18.2.1-pve2
ceph-fuse: 18.2.1-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.4
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-3
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.2-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1


Code:
root@ukdah:~# qm config 146
agent: 1
audio0: device=ich9-intel-hda,driver=spice
balloon: 8096
boot: order=scsi0;net0
cores: 6
cpu: x86-64-v2-AES
description: * Bullseye template for debian workstation%0A* needs%3A spice-vdagent rxvt-unicode%0A* Workstation (polaris)
memory: 24448
meta: creation-qemu=6.1.1,ctime=1651942626
name: bullseye-04
net0: virtio=7E:8C:66:91:6D:66,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: vm_rbd:vm-146-disk-0,discard=on,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=8df17a61-829f-401c-8cb9-ce991922d8eb
sockets: 1
tags: ukdah
usb0: spice
usb1: spice
usb2: spice
usb3: spice
vga: qxl2,memory=128
vmgenid: 388b854f-8226-4b9c-9109-4526090ab0ca

The last few times it crashed (2-3 times since my last post), the message was the same segfault. I've installed the coredump tool, so I'll see if that produces anything. Thanks!

Also possibly worth noting: it usually happens when I click something in Firefox on the VM (not some random link on the interwebs, but things I do all the time, like clicking around the Proxmox GUI or logging into TrueNAS), but that could be a red herring since half my time is spent clicking, I suppose.
 
same issue
Code:
# pveversion
pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-8-pve)
 
I'm now trying to use the default display memory.

Code:
# qm config 109
agent: 1
audio0: device=ich9-intel-hda,driver=spice
boot: order=virtio0
cores: 4
cpu: host
description: %D0%AE%D1%80%D0%B8%D0%B4%D0%B8%D1%87%D0%B5%D1%81%D0%BA%D0%B8%D0%B9 %D0%BE%D1%82%D0%B4%D0%B5%D0%BB
machine: pc-q35-8.1
memory: 4096
meta: creation-qemu=8.0.2,ctime=1701696669
name: VM
net0: virtio=BC:24:11:A4:92:4B,bridge=vmbr381,rate=12
numa: 0
onboot: 1
ostype: win10
protection: 1
scsihw: virtio-scsi-single
smbios1: uuid=4244bba2-a835-49cd-aff0-2848b1035fde
sockets: 1
usb0: spice
usb1: spice
vga: qxl2
virtio0: Infortrend:vm-109-disk-0,size=100G
vmgenid: 7d55ff75-a32d-40d2-950d-4f05f7c59c09
 
Hi,
same issue
Code:
# pveversion
pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-8-pve)
please share the exact error message you got. Could you also install systemd-coredump for next time?

@garbled @DeeMaas
Were there any USB devices used via SPICE before or while the crash happened? I tried to reproduce the issue by mimicking your configs and playing around, but have had no success thus far.
 
No USB on either side for this VM. The crash is super random. One day it happened like 3 times; then 2 weeks with no crash, and I thought all was well again. Lately it's been every 2-3 days.

Basically, this VM is my daily driver workstation, so I use it *a lot*. I haven't had the crash since I installed the coredump tool, but based on recent history I'd presume it'll happen in the next day or so, probably while I'm in the middle of something. :)

So far only the one crash had the free() error; all the others have been the segfault.
 
Hi,

please share the exact error message you got. Could you also install systemd-coredump for next time?

@garbled @DeeMaas
Were there any USB devices used via SPICE before or while the crash happened? I tried to reproduce the issue by mimicking your configs and playing around, but have had no success thus far.
I also installed systemd-coredump. I will send you the dump as soon as possible.
 
Code:
# coredumpctl list
TIME                         PID UID GID SIG     COREFILE EXE                         SIZE
Tue 2024-02-20 16:31:43 MSK 7899   0   0 SIGSEGV present  /usr/bin/qemu-system-x86_64 1.4G
 
Code:
# coredumpctl info
           PID: 7899 (kvm)
           UID: 0 (root)
           GID: 0 (root)
        Signal: 11 (SEGV)
     Timestamp: Tue 2024-02-20 16:31:07 MSK (15min ago)
  Command Line: /usr/bin/kvm -id 110 -name VM,debug-threads=on -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/110.qmp,server=on,wait=off -mon chardev=qmp,mode=control -chardev>
    Executable: /usr/bin/qemu-system-x86_64
 Control Group: /qemu.slice/110.scope
          Unit: 110.scope
         Slice: qemu.slice
       Boot ID: 5cfcd2d515a6425fa3880a61d8cd6bfc
    Machine ID: 6e4c2fe391324304a856baa8e6c88002
      Hostname: vdi1
       Storage: /var/lib/systemd/coredump/core.kvm.0.5cfcd2d515a6425fa3880a61d8cd6bfc.7899.1708435867000000.zst (present)
  Size on Disk: 1.4G
       Message: Process 7899 (kvm) of user 0 dumped core.

                Module libsystemd.so.0 from deb systemd-252.22-1~deb12u1.amd64
                Module libudev.so.1 from deb systemd-252.22-1~deb12u1.amd64
                Stack trace of thread 7935:
                #0  0x00007fb8883a8579 n/a (libc.so.6 + 0x97579)
                #1  0x00007fb8883aa6e2 __libc_calloc (libc.so.6 + 0x996e2)
                #2  0x00007fb889bed6d1 g_malloc0 (libglib-2.0.so.0 + 0x5a6d1)
                #3  0x00007fb88a2d50fc n/a (libspice-server.so.1 + 0x400fc)
                #4  0x00007fb88a2e7a2c n/a (libspice-server.so.1 + 0x52a2c)
                #5  0x00007fb88a2e7cb7 n/a (libspice-server.so.1 + 0x52cb7)
                #6  0x00007fb889be77a9 g_main_context_dispatch (libglib-2.0.so.0 + 0x547a9)
                #7  0x00007fb889be7a38 n/a (libglib-2.0.so.0 + 0x54a38)
                #8  0x00007fb889be7cef g_main_loop_run (libglib-2.0.so.0 + 0x54cef)
                #9  0x00007fb88a2e6fa9 n/a (libspice-server.so.1 + 0x51fa9)
                #10 0x00007fb88839a134 n/a (libc.so.6 + 0x89134)
                #11 0x00007fb88841a7dc n/a (libc.so.6 + 0x1097dc)

                Stack trace of thread 7899:
                #0  0x00007fb88840927f __write (libc.so.6 + 0xf827f)
                #1  0x000055c10a3f0fbc n/a (qemu-system-x86_64 + 0x8c2fbc)
                #2  0x000055c10a19c12e n/a (qemu-system-x86_64 + 0x66e12e)
                #3  0x000055c10a2cca9d n/a (qemu-system-x86_64 + 0x79ea9d)
                #4  0x000055c10a407cdb n/a (qemu-system-x86_64 + 0x8d9cdb)
                #5  0x00007fb8883629c0 n/a (libc.so.6 + 0x519c0)
                ELF object binary architecture: AMD x86-64
 
You can inspect the file with GDB. Install the debugger and the QEMU debug symbols with apt install gdb pve-qemu-kvm-dbg. But the crash seems to happen in libspice-server.so.1, so it's best if you install those debug symbols as well; for that you first need to add the debug repositories: https://wiki.debian.org/HowToGetABacktrace#Installing_the_debugging_symbols
The package for which you want the debug symbols is libspice-server1.

After you have those, you can run coredumpctl gdb -1 and then, in GDB, please run thread apply all backtrace
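Roughly, the steps should look like this; the -dbgsym package names follow the usual Debian convention, so please double-check them against what's actually available (this assumes PVE 8 on top of Debian 12 bookworm):
Code:
# add the Debian debug-symbol repository
echo "deb http://deb.debian.org/debian-debug/ bookworm-debug main" > /etc/apt/sources.list.d/debug.list
apt update
# debug symbols for the SPICE server library (glib shows up in the trace too)
apt install libspice-server1-dbgsym libglib2.0-0-dbgsym
# then open the most recent core dump in GDB
coredumpctl gdb -1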
 
Just more data: mine started screaming this into the logs last night. That PID is the VM that keeps crashing. No crash yet though, so??

Code:
Feb 20 08:04:42 ukdah QEMU[2350804]: Resetting rate control (65559 frames)
Feb 20 08:04:44 ukdah QEMU[2350804]: Resetting rate control (65923 frames)
Feb 20 08:04:45 ukdah QEMU[2350804]: Resetting rate control (65954 frames)
Feb 20 08:04:47 ukdah QEMU[2350804]: Resetting rate control (65606 frames)
Feb 20 08:04:48 ukdah QEMU[2350804]: Resetting rate control (65961 frames)
Feb 20 08:04:49 ukdah QEMU[2350804]: Resetting rate control (65931 frames)
Feb 20 08:04:51 ukdah QEMU[2350804]: Resetting rate control (65940 frames)
Feb 20 08:04:52 ukdah QEMU[2350804]: Resetting rate control (65948 frames)
Feb 20 08:04:53 ukdah QEMU[2350804]: Resetting rate control (65937 frames)
Feb 20 08:04:55 ukdah QEMU[2350804]: Resetting rate control (65944 frames)
Feb 20 08:04:56 ukdah QEMU[2350804]: Resetting rate control (65939 frames)
Feb 20 08:04:57 ukdah QEMU[2350804]: Resetting rate control (65835 frames)
 
Code:
# apt install gdb pve-qemu-kvm-dbg
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package pve-qemu-kvm-dbg is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  pve-qemu-kvm-dbgsym

E: Package 'pve-qemu-kvm-dbg' has no installation candidate

I installed gdb and pve-qemu-kvm-dbgsym

Code:
# apt install libspice-server1
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
libspice-server1 is already the newest version (0.15.1-1).
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
 
Code:
# coredumpctl gdb -1
           PID: 7899 (kvm)
           UID: 0 (root)
           GID: 0 (root)
        Signal: 11 (SEGV)
     Timestamp: Tue 2024-02-20 16:31:07 MSK (16h ago)
  Command Line: /usr/bin/kvm -id 110 -name VM,debug-threads=on -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/110.qmp,server=on,wait=off -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/110.pid -daemonize -smbios type=1,uuid=40b84057-6494-41d0-9215-d886261da783 -smp 4,sockets=1,cores=4,maxcpus=4 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vnc unix:/var/run/qemu-server/110.vnc,password=on -cpu host,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt -m 4096 -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device vmgenid,guid=c07e242e-0beb-4f3c-a49a-f9b083cb0a45 -device qemu-xhci,p2=15,p3=15,id=xhci,bus=pci.1,addr=0x1b -chardev spicevmc,id=usbredirchardev0,name=usbredir -device usb-redir,chardev=usbredirchardev0,id=usbredirdev0,bus=xhci.0,port=1 -chardev spicevmc,id=usbredirchardev1,name=usbredir -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1,bus=xhci.0,port=2 -device ich9-intel-hda,id=audiodev0,bus=pci.2,addr=0xc -device hda-micro,id=audiodev0-codec0,bus=audiodev0.0,cad=0,audiodev=spice-backend0 -device hda-duplex,id=audiodev0-codec1,bus=audiodev0.0,cad=1,audiodev=spice-backend0 -audiodev spice,id=spice-backend0 -device qxl-vga,id=vga,bus=pcie.0,addr=0x1 -chardev socket,path=/var/run/qemu-server/110.qga,server=on,wait=off,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -device qxl,id=vga1,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x18 -device virtio-serial,id=spice,bus=pci.0,addr=0x9 -chardev spicevmc,id=vdagent,name=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -spice tls-port=61003,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on -iscsi initiator-name=iqn.1993-08.org.debian:01:467eaab734a2 -drive file=/dev/vdi/vm-110-disk-0,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap110i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=BC:24:11:2E:C9:BA,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=256 -rtc driftfix=slew,base=localtime -machine hpet=off,type=pc-q35-8.1+pve0 -global kvm-pit.lost_tick_policy=discard
    Executable: /usr/bin/qemu-system-x86_64
 Control Group: /qemu.slice/110.scope
          Unit: 110.scope
         Slice: qemu.slice
       Boot ID: 5cfcd2d515a6425fa3880a61d8cd6bfc
    Machine ID: 6e4c2fe391324304a856baa8e6c88002
      Hostname: vdi1
       Storage: /var/lib/systemd/coredump/core.kvm.0.5cfcd2d515a6425fa3880a61d8cd6bfc.7899.1708435867000000.zst (present)
  Size on Disk: 1.4G
       Message: Process 7899 (kvm) of user 0 dumped core.

                Module libsystemd.so.0 from deb systemd-252.22-1~deb12u1.amd64
                Module libudev.so.1 from deb systemd-252.22-1~deb12u1.amd64
                Stack trace of thread 7935:
                #0  0x00007fb8883a8579 n/a (libc.so.6 + 0x97579)
                #1  0x00007fb8883aa6e2 __libc_calloc (libc.so.6 + 0x996e2)
                #2  0x00007fb889bed6d1 g_malloc0 (libglib-2.0.so.0 + 0x5a6d1)
                #3  0x00007fb88a2d50fc n/a (libspice-server.so.1 + 0x400fc)
                #4  0x00007fb88a2e7a2c n/a (libspice-server.so.1 + 0x52a2c)
                #5  0x00007fb88a2e7cb7 n/a (libspice-server.so.1 + 0x52cb7)
                #6  0x00007fb889be77a9 g_main_context_dispatch (libglib-2.0.so.0 + 0x547a9)
                #7  0x00007fb889be7a38 n/a (libglib-2.0.so.0 + 0x54a38)
                #8  0x00007fb889be7cef g_main_loop_run (libglib-2.0.so.0 + 0x54cef)
                #9  0x00007fb88a2e6fa9 n/a (libspice-server.so.1 + 0x51fa9)
                #10 0x00007fb88839a134 n/a (libc.so.6 + 0x89134)
                #11 0x00007fb88841a7dc n/a (libc.so.6 + 0x1097dc)

                Stack trace of thread 7899:
                #0  0x00007fb88840927f __write (libc.so.6 + 0xf827f)
                #1  0x000055c10a3f0fbc n/a (qemu-system-x86_64 + 0x8c2fbc)
                #2  0x000055c10a19c12e n/a (qemu-system-x86_64 + 0x66e12e)
                #3  0x000055c10a2cca9d n/a (qemu-system-x86_64 + 0x79ea9d)
                #4  0x000055c10a407cdb n/a (qemu-system-x86_64 + 0x8d9cdb)
                #5  0x00007fb8883629c0 n/a (libc.so.6 + 0x519c0)
                ELF object binary architecture: AMD x86-64

GNU gdb (Debian 13.1-3) 13.1
Copyright (C) 2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/bin/qemu-system-x86_64...
Reading symbols from /usr/lib/debug/.build-id/58/2fce0a1c73812938c636d834013df7a070044e.debug...

warning: Can't open file anon_inode:kvm-vcpu:3 which was expanded to anon_inode:kvm-vcpu:3 during file-backed mapping note processing

warning: Can't open file anon_inode:kvm-vcpu:2 which was expanded to anon_inode:kvm-vcpu:2 during file-backed mapping note processing

warning: Can't open file /[aio] (deleted) during file-backed mapping note processing

warning: Can't open file anon_inode:kvm-vcpu:1 which was expanded to anon_inode:kvm-vcpu:1 during file-backed mapping note processing

warning: Can't open file anon_inode:kvm-vcpu:0 which was expanded to anon_inode:kvm-vcpu:0 during file-backed mapping note processing

warning: Can't open file /dev/zero (deleted) during file-backed mapping note processing
[New LWP 7935]
[New LWP 7899]
[New LWP 7933]
[New LWP 7930]
[New LWP 7931]
[New LWP 7932]
[New LWP 7938]
[New LWP 7900]
[New LWP 7936]
[New LWP 1561872]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/bin/kvm -id 110 -name VM,debug-threads=on -no-shutdown -chardev'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  _int_malloc (av=av@entry=0x7fb758000030, bytes=bytes@entry=240) at ./malloc/malloc.c:4004
4004    ./malloc/malloc.c: No such file or directory.
[Current thread is 1 (Thread 0x7fb777dff6c0 (LWP 7935))]
 
Code:
(gdb) thread apply all backtrace

Thread 10 (Thread 0x7fb73bfff6c0 (LWP 1561872)):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7fb73bff9fe0, op=393, expected=0, futex_word=0x55c10bb64880) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x55c10bb64880, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x7fb73bff9fe0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007fb888396efb in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x55c10bb64880, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x7fb73bff9fe0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007fb88839983c in __pthread_cond_wait_common (abstime=0x7fb73bff9fe0, clockid=0, mutex=0x55c10bb647f0, cond=0x55c10bb64858) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_timedwait64 (cond=cond@entry=0x55c10bb64858, mutex=mutex@entry=0x55c10bb647f0, abstime=abstime@entry=0x7fb73bff9fe0) at ./nptl/pthread_cond_wait.c:643
#5  0x000055c10a3f2b31 in qemu_cond_timedwait_ts (cond=cond@entry=0x55c10bb64858, mutex=mutex@entry=0x55c10bb647f0, ts=ts@entry=0x7fb73bff9fe0, file=file@entry=0x55c10a64ef78 "../util/thread-pool.c", line=line@entry=90) at ../util/qemu-thread-posix.c:239
#6  0x000055c10a3f36d0 in qemu_cond_timedwait_impl (cond=0x55c10bb64858, mutex=0x55c10bb647f0, ms=<optimized out>, file=0x55c10a64ef78 "../util/thread-pool.c", line=90) at ../util/qemu-thread-posix.c:253
#7  0x000055c10a407f04 in worker_thread (opaque=opaque@entry=0x55c10bb647e0) at ../util/thread-pool.c:90
#8  0x000055c10a3f29a8 in qemu_thread_start (args=0x55c10c16f640) at ../util/qemu-thread-posix.c:541
#9  0x00007fb88839a134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007fb88841a7dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 9 (Thread 0x7fb7753ff6c0 (LWP 7936)):
#0  0x00007fb88840d15f in __GI___poll (fds=0x7fb74c027420, nfds=4, timeout=939) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007fb889be79ae in ?? () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#2  0x00007fb889be7cef in g_main_loop_run () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#3  0x00007fb88a2e6fa9 in ?? () from /lib/x86_64-linux-gnu/libspice-server.so.1
#4  0x00007fb88839a134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#5  0x00007fb88841a7dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 8 (Thread 0x7fb8852026c0 (LWP 7900)):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x000055c10a3f3b2a in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at ./include/qemu/futex.h:29
#2  qemu_event_wait (ev=ev@entry=0x55c10ad469c8 <rcu_call_ready_event>) at ../util/qemu-thread-posix.c:464
#3  0x000055c10a3fd432 in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:278
#4  0x000055c10a3f29a8 in qemu_thread_start (args=0x55c10b89a720) at ../util/qemu-thread-posix.c:541
#5  0x00007fb88839a134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007fb88841a7dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 7 (Thread 0x7fb75f3bf6c0 (LWP 7938)):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x55c10d92e67c) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x55c10d92e67c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007fb888396efb in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x55c10d92e67c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007fb888399558 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x55c10d92e688, cond=0x55c10d92e650) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x55c10d92e650, mutex=mutex@entry=0x55c10d92e688) at ./nptl/pthread_cond_wait.c:618
#5  0x000055c10a3f34bb in qemu_cond_wait_impl (cond=0x55c10d92e650, mutex=0x55c10d92e688, file=0x55c10a4b7cf4 "../ui/vnc-jobs.c", line=248) at ../util/qemu-thread-posix.c:225
#6  0x000055c109e7ff0b in vnc_worker_thread_loop (queue=queue@entry=0x55c10d92e650) at ../ui/vnc-jobs.c:248
#7  0x000055c109e80ba8 in vnc_worker_thread (arg=arg@entry=0x55c10d92e650) at ../ui/vnc-jobs.c:362
#8  0x000055c10a3f29a8 in qemu_thread_start (args=0x55c10c134b90) at ../util/qemu-thread-posix.c:541
#9  0x00007fb88839a134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007fb88841a7dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 6 (Thread 0x7fb87e7ff6c0 (LWP 7932)):
#0  __GI___ioctl (fd=33, request=request@entry=44672) at ../sysdeps/unix/sysv/linux/ioctl.c:36
#1  0x000055c10a2596bf in kvm_vcpu_ioctl (cpu=cpu@entry=0x55c10bc86740, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3179
#2  0x000055c10a259b95 in kvm_cpu_exec (cpu=cpu@entry=0x55c10bc86740) at ../accel/kvm/kvm-all.c:2991
#3  0x000055c10a25b07d in kvm_vcpu_thread_fn (arg=arg@entry=0x55c10bc86740) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055c10a3f29a8 in qemu_thread_start (args=0x55c10bc8f5c0) at ../util/qemu-thread-posix.c:541
#5  0x00007fb88839a134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007fb88841a7dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

--Type <RET> for more, q to quit, c to continue without paging--
 
Code:
# apt install libspice-server1
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
libspice-server1 is already the newest version (0.15.1-1).
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
You need the debug package corresponding to this one. See: https://wiki.debian.org/HowToGetABacktrace#Installing_the_debugging_symbols

Alternatively, you can use: https://wiki.debian.org/HowToGetABacktrace#Automatically_loading_debugging_symbols_from_the_Internet
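Something along these lines should let GDB fetch the symbols on demand (debuginfod needs a reasonably recent GDB; the URL below is Debian's public debuginfod service, please verify it against the wiki page):
Code:
# let GDB download matching debug symbols on the fly
export DEBUGINFOD_URLS="https://debuginfod.debian.net"
coredumpctl gdb -1
# answer yes when GDB asks whether to use debuginfod, then:
(gdb) thread apply all backtrace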

Code:
(gdb) thread apply all backtrace

Thread 10 (Thread 0x7fb73bfff6c0 (LWP 1561872)):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7fb73bff9fe0, op=393, expected=0, futex_word=0x55c10bb64880) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x55c10bb64880, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x7fb73bff9fe0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007fb888396efb in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x55c10bb64880, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x7fb73bff9fe0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007fb88839983c in __pthread_cond_wait_common (abstime=0x7fb73bff9fe0, clockid=0, mutex=0x55c10bb647f0, cond=0x55c10bb64858) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_timedwait64 (cond=cond@entry=0x55c10bb64858, mutex=mutex@entry=0x55c10bb647f0, abstime=abstime@entry=0x7fb73bff9fe0) at ./nptl/pthread_cond_wait.c:643
#5  0x000055c10a3f2b31 in qemu_cond_timedwait_ts (cond=cond@entry=0x55c10bb64858, mutex=mutex@entry=0x55c10bb647f0, ts=ts@entry=0x7fb73bff9fe0, file=file@entry=0x55c10a64ef78 "../util/thread-pool.c", line=line@entry=90) at ../util/qemu-thread-posix.c:239
#6  0x000055c10a3f36d0 in qemu_cond_timedwait_impl (cond=0x55c10bb64858, mutex=0x55c10bb647f0, ms=<optimized out>, file=0x55c10a64ef78 "../util/thread-pool.c", line=90) at ../util/qemu-thread-posix.c:253
#7  0x000055c10a407f04 in worker_thread (opaque=opaque@entry=0x55c10bb647e0) at ../util/thread-pool.c:90
#8  0x000055c10a3f29a8 in qemu_thread_start (args=0x55c10c16f640) at ../util/qemu-thread-posix.c:541
#9  0x00007fb88839a134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007fb88841a7dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 9 (Thread 0x7fb7753ff6c0 (LWP 7936)):
#0  0x00007fb88840d15f in __GI___poll (fds=0x7fb74c027420, nfds=4, timeout=939) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007fb889be79ae in ?? () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#2  0x00007fb889be7cef in g_main_loop_run () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#3  0x00007fb88a2e6fa9 in ?? () from /lib/x86_64-linux-gnu/libspice-server.so.1
#4  0x00007fb88839a134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#5  0x00007fb88841a7dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 8 (Thread 0x7fb8852026c0 (LWP 7900)):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x000055c10a3f3b2a in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at ./include/qemu/futex.h:29
#2  qemu_event_wait (ev=ev@entry=0x55c10ad469c8 <rcu_call_ready_event>) at ../util/qemu-thread-posix.c:464
#3  0x000055c10a3fd432 in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:278
#4  0x000055c10a3f29a8 in qemu_thread_start (args=0x55c10b89a720) at ../util/qemu-thread-posix.c:541
#5  0x00007fb88839a134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007fb88841a7dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 7 (Thread 0x7fb75f3bf6c0 (LWP 7938)):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x55c10d92e67c) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x55c10d92e67c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007fb888396efb in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x55c10d92e67c, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007fb888399558 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x55c10d92e688, cond=0x55c10d92e650) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x55c10d92e650, mutex=mutex@entry=0x55c10d92e688) at ./nptl/pthread_cond_wait.c:618
#5  0x000055c10a3f34bb in qemu_cond_wait_impl (cond=0x55c10d92e650, mutex=0x55c10d92e688, file=0x55c10a4b7cf4 "../ui/vnc-jobs.c", line=248) at ../util/qemu-thread-posix.c:225
#6  0x000055c109e7ff0b in vnc_worker_thread_loop (queue=queue@entry=0x55c10d92e650) at ../ui/vnc-jobs.c:248
#7  0x000055c109e80ba8 in vnc_worker_thread (arg=arg@entry=0x55c10d92e650) at ../ui/vnc-jobs.c:362
#8  0x000055c10a3f29a8 in qemu_thread_start (args=0x55c10c134b90) at ../util/qemu-thread-posix.c:541
#9  0x00007fb88839a134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007fb88841a7dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 6 (Thread 0x7fb87e7ff6c0 (LWP 7932)):
#0  __GI___ioctl (fd=33, request=request@entry=44672) at ../sysdeps/unix/sysv/linux/ioctl.c:36
#1  0x000055c10a2596bf in kvm_vcpu_ioctl (cpu=cpu@entry=0x55c10bc86740, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3179
#2  0x000055c10a259b95 in kvm_cpu_exec (cpu=cpu@entry=0x55c10bc86740) at ../accel/kvm/kvm-all.c:2991
#3  0x000055c10a25b07d in kvm_vcpu_thread_fn (arg=arg@entry=0x55c10bc86740) at ../accel/kvm/kvm-accel-ops.c:51
#4  0x000055c10a3f29a8 in qemu_thread_start (args=0x55c10bc8f5c0) at ../util/qemu-thread-posix.c:541
#5  0x00007fb88839a134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007fb88841a7dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

--Type <RET> for more, q to quit, c to continue without paging--
Unfortunately, these are not the threads that ran into the segfault. You need to keep pressing Return (or type c) to get the rest of the output.
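To avoid paging through everything by hand, something like the following should work inside GDB (the log file name is just an example):
Code:
(gdb) set pagination off
(gdb) set logging file backtrace.txt
(gdb) set logging enabled on
(gdb) thread apply all backtrace
(gdb) set logging enabled off
The crashing thread appears to be the one GDB selected when it loaded the dump (thread 1 in your output), so thread 1 followed by backtrace full would also show just that one in detail.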
 
Just more data: mine started screaming this into the logs last night. That PID is the VM that keeps crashing. No crash yet though, so??

Code:
Feb 20 08:04:42 ukdah QEMU[2350804]: Resetting rate control (65559 frames)
Feb 20 08:04:44 ukdah QEMU[2350804]: Resetting rate control (65923 frames)
Feb 20 08:04:45 ukdah QEMU[2350804]: Resetting rate control (65954 frames)
Feb 20 08:04:47 ukdah QEMU[2350804]: Resetting rate control (65606 frames)
Feb 20 08:04:48 ukdah QEMU[2350804]: Resetting rate control (65961 frames)
Feb 20 08:04:49 ukdah QEMU[2350804]: Resetting rate control (65931 frames)
Feb 20 08:04:51 ukdah QEMU[2350804]: Resetting rate control (65940 frames)
Feb 20 08:04:52 ukdah QEMU[2350804]: Resetting rate control (65948 frames)
Feb 20 08:04:53 ukdah QEMU[2350804]: Resetting rate control (65937 frames)
Feb 20 08:04:55 ukdah QEMU[2350804]: Resetting rate control (65944 frames)
Feb 20 08:04:56 ukdah QEMU[2350804]: Resetting rate control (65939 frames)
Feb 20 08:04:57 ukdah QEMU[2350804]: Resetting rate control (65835 frames)
This message comes from the audio subsystem in QEMU: https://git.proxmox.com/?p=mirror_q...e5a8bb22368b3555644cb2debd3df24592f3a21#l2295

Might be a hint, but it's not certain that it's related to the crashes.
 
