Proxmox VE 8.2.4 random guest freezes

WoodBench

After upgrading to Proxmox VE 8 with a clean install and creating a guest VM, I experience random freezes of the guest VM roughly every 24-48 hours: no ping/SSH, and VNC shows an unresponsive login screen.

I can't find any clues in the logs. Last freeze today at 12:28:

Guest (docker1) journal:
Code:
Jul 16 12:10:03 docker1 systemd[1]: sysstat-collect.service: Deactivated successfully.
Jul 16 12:10:03 docker1 systemd[1]: Finished sysstat-collect.service - system activity accounting tool.
Jul 16 12:10:33 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-36350f71e6b5c785c94d18d3d232c148d9a8aa6d18f7137d70e44d044a73822d-runc.Buk47a.mount: Deactivated successfully.
Jul 16 12:10:38 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-03424995bdff11a5d3a09e322df752f0b97a131ab95454cce4075f2517cdafe7-runc.rY7tAt.mount: Deactivated successfully.
Jul 16 12:10:53 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-af01baf74766168d567454c6f63b5f4e0093ed5a27a38c5f57050fc379783d9d-runc.1399Sv.mount: Deactivated successfully.
Jul 16 12:10:54 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-1f856eb722c23c6b5b8b70ef7f34c4be7ba19e1f8de103054ea6ce2fca856745-runc.pN1gYN.mount: Deactivated successfully.
Jul 16 12:10:59 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-331b34a9b9725ed1ec555e8e47ddedbddafae8c8f0755f8a2e447fd77150c6ef-runc.FrOsAA.mount: Deactivated successfully.
Jul 16 12:11:38 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-03424995bdff11a5d3a09e322df752f0b97a131ab95454cce4075f2517cdafe7-runc.e44mSq.mount: Deactivated successfully.
Jul 16 12:11:55 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-1f856eb722c23c6b5b8b70ef7f34c4be7ba19e1f8de103054ea6ce2fca856745-runc.ya6wTh.mount: Deactivated successfully.
Jul 16 12:12:25 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-1f856eb722c23c6b5b8b70ef7f34c4be7ba19e1f8de103054ea6ce2fca856745-runc.TeUTgg.mount: Deactivated successfully.
Jul 16 12:12:33 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-36350f71e6b5c785c94d18d3d232c148d9a8aa6d18f7137d70e44d044a73822d-runc.G4nHmB.mount: Deactivated successfully.
Jul 16 12:12:54 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-af01baf74766168d567454c6f63b5f4e0093ed5a27a38c5f57050fc379783d9d-runc.LZ9x9M.mount: Deactivated successfully.
Jul 16 12:15:01 docker1 CRON[1421295]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
Jul 16 12:15:01 docker1 CRON[1421296]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Jul 16 12:15:01 docker1 CRON[1421295]: pam_unix(cron:session): session closed for user root
Jul 16 12:17:01 docker1 CRON[1422494]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
Jul 16 12:17:01 docker1 CRON[1422495]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jul 16 12:17:01 docker1 CRON[1422494]: pam_unix(cron:session): session closed for user root
Jul 16 12:20:00 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-331b34a9b9725ed1ec555e8e47ddedbddafae8c8f0755f8a2e447fd77150c6ef-runc.uquV5A.mount: Deactivated successfully.
Jul 16 12:20:00 docker1 systemd[1]: Starting sysstat-collect.service - system activity accounting tool...
Jul 16 12:20:01 docker1 systemd[1]: sysstat-collect.service: Deactivated successfully.
Jul 16 12:20:01 docker1 systemd[1]: Finished sysstat-collect.service - system activity accounting tool.
Jul 16 12:20:04 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-36350f71e6b5c785c94d18d3d232c148d9a8aa6d18f7137d70e44d044a73822d-runc.EKKmsy.mount: Deactivated successfully.
Jul 16 12:20:22 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-3ba4a821a7ad09de62a2017ad64174d52202ca82bd5f66566e7f15085c29d2e4-runc.NVEBxx.mount: Deactivated successfully.
Jul 16 12:20:25 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-af01baf74766168d567454c6f63b5f4e0093ed5a27a38c5f57050fc379783d9d-runc.MyibSb.mount: Deactivated successfully.
Jul 16 12:20:34 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-36350f71e6b5c785c94d18d3d232c148d9a8aa6d18f7137d70e44d044a73822d-runc.zqy13Q.mount: Deactivated successfully.
Jul 16 12:20:52 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-3ba4a821a7ad09de62a2017ad64174d52202ca82bd5f66566e7f15085c29d2e4-runc.fkTveP.mount: Deactivated successfully.
Jul 16 12:20:55 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-b6fe25efb13af17dfd87cc94cb010b645cbd83b30668fdb295a809ab7cd79f2e-runc.Ojz6Al.mount: Deactivated successfully.
Jul 16 12:20:55 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-af01baf74766168d567454c6f63b5f4e0093ed5a27a38c5f57050fc379783d9d-runc.dbNuBv.mount: Deactivated successfully.
Jul 16 12:20:56 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-c92bc8bba14ed88368afb275cf9fe1f7b385d03ba15d8b24c2e07f7801779a97-runc.XyNLyL.mount: Deactivated successfully.
Jul 16 12:21:22 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-3ba4a821a7ad09de62a2017ad64174d52202ca82bd5f66566e7f15085c29d2e4-runc.3C4VZ5.mount: Deactivated successfully.
Jul 16 12:21:26 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-1f856eb722c23c6b5b8b70ef7f34c4be7ba19e1f8de103054ea6ce2fca856745-runc.idebEI.mount: Deactivated successfully.
Jul 16 12:21:34 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-36350f71e6b5c785c94d18d3d232c148d9a8aa6d18f7137d70e44d044a73822d-runc.eaij6n.mount: Deactivated successfully.
Jul 16 12:21:55 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-af01baf74766168d567454c6f63b5f4e0093ed5a27a38c5f57050fc379783d9d-runc.geocdW.mount: Deactivated successfully.
Jul 16 12:21:55 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-c92bc8bba14ed88368afb275cf9fe1f7b385d03ba15d8b24c2e07f7801779a97-runc.Ub8UoO.mount: Deactivated successfully.
-- Boot e8b5ce55ec854244a122138fa8dfbb97 --
Jul 16 15:28:22 docker1 kernel: Linux version 6.8.0-38-generic (buildd@lcy02-amd64-049) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.2.0-23ubuntu4) 13.2.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #38-Ubuntu SMP PREEMPT_DYNAMIC Fri Jun  7 15:25:01 UTC 2024 (Ubuntu 6.8.0-38.38-generic 6.8.8)
Jul 16 15:28:22 docker1 kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.8.0-38-generic root=UUID=f90df74e-31c6-4338-b11e-6a796b24ce10 ro
Host (pve1) journal for the same period:
Code:
Jul 16 12:17:01 pve1 CRON[351640]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jul 16 12:17:01 pve1 CRON[351641]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jul 16 12:17:01 pve1 CRON[351640]: pam_unix(cron:session): session closed for user root
Jul 16 12:51:53 pve1 smartd[535]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 65 to 66
Jul 16 12:51:53 pve1 smartd[535]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 35 to 34
Jul 16 12:51:58 pve1 smartd[535]: Device: /dev/sdb [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 67 to 68
Jul 16 12:51:58 pve1 smartd[535]: Device: /dev/sdb [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 33 to 32
Jul 16 13:17:01 pve1 CRON[360394]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jul 16 13:17:01 pve1 CRON[360395]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jul 16 13:17:01 pve1 CRON[360394]: pam_unix(cron:session): session closed for user root
Jul 16 13:21:53 pve1 smartd[535]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 66 to 67
VM configuration:
Code:
agent: 1
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 49152
meta: creation-qemu=9.0.0,ctime=1720434262
name: docker1
net0: virtio=BC:24:11:2D:A5:70,bridge=vmbr0,firewall=1
net1: virtio=BC:24:11:06:0B:89,bridge=vmbr1,firewall=1
numa: 0
ostype: l26
scsi0: cryptssd:100/vm-100-disk-0.raw,iothread=1,size=256G
scsihw: virtio-scsi-single
smbios1: uuid=31dd0bf4-86fc-4c95-8161-69248bc968e5
sockets: 1
vmgenid: 0a0b9edb-48df-4422-9d39-e1c3f3d00401
pveversion -v:
Code:
proxmox-ve: 8.2.0 (running kernel: 6.8.8-2-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.8-2
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve: 6.5.13-5
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.12-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.0-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1
Host system information (inxi):
Code:
System:
Kernel: 6.8.8-2-pve arch: x86_64 bits: 64 compiler: gcc v: 12.2.0 Console: pty pts/1
Distro: Debian GNU/Linux 12 (bookworm)
Machine:
Type: Desktop Mobo: HARDKERNEL model: ODROID-H3 v: 1.0 serial: N/A UEFI: American Megatrends
v: 5.19 date: 07/19/2023
CPU:
Info: quad core model: Intel Pentium Silver N6005 bits: 64 type: MCP arch: Alder Lake rev: 0
cache: L1: 256 KiB L2: 1.5 MiB L3: 4 MiB
Speed (MHz): avg: 2675 high: 3300 min/max: 800/3300 cores: 1: 3300 2: 3300 3: 3300 4: 800
bogomips: 15974
Flags: ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
Graphics:
Device-1: Intel JasperLake [UHD Graphics] driver: i915 v: kernel arch: Gen-11 ports:
active: HDMI-A-2 empty: DP-1,HDMI-A-1 bus-ID: 00:02.0 chip-ID: 8086:4e71
Display: server: No display server data found. Headless machine? tty: 145x37
Monitor-1: HDMI-A-2 model: HP L1940T res: 1280x1024 dpi: 86 diag: 484mm (19.1")
API: OpenGL Message: GL data unavailable in console for root.
Audio:
Device-1: Intel Jasper Lake HD Audio driver: snd_hda_intel v: kernel bus-ID: 00:1f.3
chip-ID: 8086:4dc8
API: ALSA v: k6.8.8-2-pve status: kernel-api
Network:
Device-1: Realtek RTL8125 2.5GbE driver: r8169 v: kernel pcie: speed: 5 GT/s lanes: 1 port: 4000
bus-ID: 01:00.0 chip-ID: 10ec:8125
IF: enp1s0 state: up speed: 1000 Mbps duplex: full mac: <filter>
Device-2: Realtek RTL8125 2.5GbE driver: r8169 v: kernel pcie: speed: 5 GT/s lanes: 1
port: 3000 bus-ID: 02:00.0 chip-ID: 10ec:8125
IF: enp2s0 state: down mac: <filter>
IF-ID-1: bonding_masters state: N/A speed: N/A duplex: N/A mac: N/A
IF-ID-2: fwbr100i0 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
IF-ID-3: fwbr100i1 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
IF-ID-4: fwln100i0 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-5: fwln100i1 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-6: fwpr100p0 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-7: fwpr100p1 state: up speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-8: tap100i0 state: unknown speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-9: tap100i1 state: unknown speed: 10000 Mbps duplex: full mac: <filter>
IF-ID-10: vmbr0 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
IF-ID-11: vmbr1 state: up speed: 10000 Mbps duplex: unknown mac: <filter>
Drives:
Local Storage: total: 30.02 TiB used: 3.54 TiB (11.8%)
ID-1: /dev/nvme0n1 vendor: Kingston model: SNV2S1000G size: 931.51 GiB speed: 63.2 Gb/s
lanes: 4 serial: <filter> temp: 41.9 C
ID-2: /dev/sda vendor: Seagate model: ST16000NM001G-2KK103 size: 14.55 TiB speed: 6.0 Gb/s
serial: <filter>
ID-3: /dev/sdb vendor: Seagate model: ST16000NM001G-2KK103 size: 14.55 TiB speed: 6.0 Gb/s
serial: <filter>
Partition:
ID-1: / size: 63 GiB used: 3.83 GiB (6.1%) fs: btrfs dev: /dev/nvme0n1p3
ID-2: /boot/efi size: 1022 MiB used: 11.6 MiB (1.1%) fs: vfat dev: /dev/nvme0n1p2
Swap:
Alert: No swap data was found.
Sensors:
System Temperatures: cpu: 61.0 C mobo: N/A
Fan Speeds (RPM): N/A
Info:
Processes: 275 Uptime: 4h 40m Memory: 62.65 GiB used: 10.95 GiB (17.5%) Init: systemd v: 252
target: graphical (5) default: graphical Compilers: N/A Packages: pm: dpkg pkgs: 813 Shell: Sudo
v: 1.9.13p3 running-in: pty pts/1 inxi: 3.3.26

More information:
- The host is not affected
- I can sudo qm stop 100 and then start the VM again (recovery commands shown right after this list)
- The guest VM runs Ubuntu 24.04 and is used for running Docker containers
- No previous problems running a similar VM (Ubuntu 22.04) with many Docker containers on Proxmox VE 7
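For completeness, the recovery mentioned in the list above is just a forced stop and start of the VM from the host:

Code:
sudo qm stop 100
sudo qm start 100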

Stuff I've tried:
- Pinning kernel 6.5.13-5-pve
- Searching for the specific error messages kvm: Desc next is 3 and Reset to device, \Device\RaidPort4, was issued mentioned in this post and this github issue (iothread and VirtIO SCSI). Neither message shows up in any of my logs.
- Specifying nfsver=3 in the guest NFS mounts (example fstab entry after this list)
- The run-docker-runtime\x2drunc-moby messages are apparently normal.
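For reference, a guest-side /etc/fstab entry forcing NFSv3 might look like the sketch below (server address and export path are placeholders, not taken from this thread; note that the standard spelling of the option is nfsvers):

Code:
# placeholder server/export -- forces NFS protocol version 3 for this mount
192.168.1.10:/export/share  /mnt/share  nfs  nfsvers=3  0  0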

Any ideas?
 
And it happened again @09:35

Host (pve1) journal:
Code:
Jul 18 09:17:01 pve1 CRON[378424]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jul 18 09:17:01 pve1 CRON[378425]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jul 18 09:17:01 pve1 CRON[378424]: pam_unix(cron:session): session closed for user root
Jul 18 10:17:01 pve1 CRON[387289]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jul 18 10:17:01 pve1 CRON[387290]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jul 18 10:17:01 pve1 CRON[387289]: pam_unix(cron:session): session closed for user root
Jul 18 10:18:26 pve1 pvestatd[911]: auth key pair too old, rotating..
Jul 18 10:21:15 pve1 smartd[545]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 66 to 67
Jul 18 10:21:15 pve1 smartd[545]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 34 to 33

Guest (docker1) journal:
Code:
Jul 18 09:00:04 docker1 systemd[1]: Starting sysstat-collect.service - system activity accounting tool...
Jul 18 09:32:43 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-3ba4a821a7ad09de62a2017ad64174d52202ca82bd5f66566e7f15085c29d2e4-runc.wGicXA.mount: Deactivated successfully.
Jul 18 09:33:00 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-b6fe25efb13af17dfd87cc94cb010b645cbd83b30668fdb295a809ab7cd79f2e-runc.88LEWl.mount: Deactivated successfully.
Jul 18 09:33:15 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-af01baf74766168d567454c6f63b5f4e0093ed5a27a38c5f57050fc379783d9d-runc.8j0rbu.mount: Deactivated successfully.
Jul 18 09:34:41 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-1f856eb722c23c6b5b8b70ef7f34c4be7ba19e1f8de103054ea6ce2fca856745-runc.1d2lBi.mount: Deactivated successfully.
Jul 18 09:35:01 docker1 CRON[1390700]: pam_unix(cron:session): session opened for user root(uid=0) by root(uid=0)
Jul 18 09:35:01 docker1 CRON[1390701]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Jul 18 09:35:01 docker1 CRON[1390700]: pam_unix(cron:session): session closed for user root
Jul 18 09:35:30 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-b6fe25efb13af17dfd87cc94cb010b645cbd83b30668fdb295a809ab7cd79f2e-runc.ID3ITF.mount: Deactivated successfully.
Jul 18 09:35:55 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-7e39ff150b3609c9ec4d742b32e68bf8f33d9be8ae198135393a98304e66ceac-runc.O9D5XP.mount: Deactivated successfully.
-- Boot 69e56c9e039d4b9c8ea38590dad149c3 --
Jul 18 10:27:47 docker1 kernel: Linux version 6.8.0-38-generic (buildd@lcy02-amd64-049) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.2.0-23ubuntu4) 13.2.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #38-Ubuntu SMP PREEMPT_DYNAMIC Fri Jun  7 15:25:01 UTC 2024 (Ubuntu 6.8.0-38.38-generic 6.8.8)
Jul 18 10:27:47 docker1 kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.8.0-38-generic root=UUID=f90df74e-31c6-4338-b11e-6a796b24ce10 ro
---- The rest of the boot-up messages removed because the post is too long ----
 
Hi,
what is the output of qm status 100 --verbose after the freeze happens? Can you correlate the freezes with any other operation, e.g. backup (see the Task History of the VM if you are not sure).

You can also run apt install pve-qemu-kvm-dbgsym gdb to install the relevant debug symbols and the debugger. After the freeze, obtain a stack trace by running gdb --batch --ex 't a a bt' -p $(cat /var/run/qemu-server/100.pid). If the issue is related to QEMU, that might contain a hint.
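Put together (assuming VM ID 100, as in this thread), the suggested commands are:

Code:
apt install pve-qemu-kvm-dbgsym gdb    # debug symbols + debugger on the host
qm status 100 --verbose                # run this after the next freeze
# dump a backtrace of all threads of the VM's QEMU process
gdb --batch --ex 't a a bt' -p $(cat /var/run/qemu-server/100.pid)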
 
@fiona: Thank you for replying.
I'll try both commands and post the results when the next freeze happens.
I don't have any scheduled operations with this VM (like backups):

[Attached screenshot: Screenshot_20240718_130714.png]
 
@fiona Another freeze just happened:

Output of qm status 100 --verbose:
Code:
balloon: 51539607552
ballooninfo:
    actual: 51539607552
    free_mem: 2048950272
    last_update: 1722156610
    major_page_faults: 2576
    max_mem: 51539607552
    mem_swapped_in: 0
    mem_swapped_out: 8192
    minor_page_faults: 1732025724
    total_mem: 50517409792
blockstat:
    ide2:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        failed_zone_append_operations: 0
        flush_operations: 0
        flush_total_time_ns: 0
        idle_time_ns: 103915944015755
        invalid_flush_operations: 0
        invalid_rd_operations: 0
        invalid_unmap_operations: 0
        invalid_wr_operations: 0
        invalid_zone_append_operations: 0
        rd_bytes: 46
        rd_merged: 0
        rd_operations: 2
        rd_total_time_ns: 33612
        timed_stats:
        unmap_bytes: 0
        unmap_merged: 0
        unmap_operations: 0
        unmap_total_time_ns: 0
        wr_bytes: 0
        wr_highest_offset: 0
        wr_merged: 0
        wr_operations: 0
        wr_total_time_ns: 0
        zone_append_bytes: 0
        zone_append_merged: 0
        zone_append_operations: 0
        zone_append_total_time_ns: 0
    scsi0:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        failed_zone_append_operations: 0
        flush_operations: 2658104
        flush_total_time_ns: 4513061685828
        idle_time_ns: 6335007045990
        invalid_flush_operations: 0
        invalid_rd_operations: 0
        invalid_unmap_operations: 0
        invalid_wr_operations: 0
        invalid_zone_append_operations: 0
        rd_bytes: 8336631296
        rd_merged: 0
        rd_operations: 217679
        rd_total_time_ns: 495701611341
        timed_stats:
        unmap_bytes: 207416578048
        unmap_merged: 0
        unmap_operations: 390552
        unmap_total_time_ns: 3069077572
        wr_bytes: 386046562304
        wr_highest_offset: 39520780288
        wr_merged: 0
        wr_operations: 16283707
        wr_total_time_ns: 3083594411552
        zone_append_bytes: 0
        zone_append_merged: 0
        zone_append_operations: 0
        zone_append_total_time_ns: 0
cpus: 4
disk: 0
diskread: 8336631342
diskwrite: 386046562304
freemem: 2048950272
maxdisk: 274877906944
maxmem: 51539607552
mem: 48468459520
name: docker1
netin: 535870501170
netout: 26860408317
nics:
    tap100i0:
        netin: 492596630674
        netout: 5637770285
    tap100i1:
        netin: 43273870496
        netout: 21222638032
pid: 1399518
proxmox-support:
    backup-fleecing: 1
    backup-max-workers: 1
    pbs-dirty-bitmap: 1
    pbs-dirty-bitmap-migration: 1
    pbs-dirty-bitmap-savevm: 1
    pbs-library-version: 1.4.1 (UNKNOWN)
    pbs-masterkey: 1
    query-bitmap-info: 1
qmpstatus: running
running-machine: pc-i440fx-9.0+pve0
running-qemu: 9.0.0
status: running
uptime: 103925
vmid: 100
Stack trace from gdb:
Code:
[New LWP 1399519]
[New LWP 1399520]
[New LWP 1399563]
[New LWP 1399595]
[New LWP 1399596]
[New LWP 1399597]
[New LWP 1399598]
[New LWP 1399599]
[New LWP 1399601]
[New LWP 1667962]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007708fc755256 in __ppoll (fds=0x57e19f5788c0, nfds=77, timeout=<optimized out>, timeout@entry=0x7ffeb4c19c90, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:42
42    ../sysdeps/unix/sysv/linux/ppoll.c: No such file or directory.

Thread 11 (Thread 0x7708f8c006c0 (LWP 1667962) "iou-wrk-1399520"):
#0  0x0000000000000000 in ?? ()
Backtrace stopped: Cannot access memory at address 0x0

Thread 10 (Thread 0x76fce56006c0 (LWP 1399601) "vnc_worker"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x57e1a0079008) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x57e1a0079008, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007708fc6deefb in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x57e1a0079008, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007708fc6e1558 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x57e1a0079018, cond=0x57e1a0078fe0) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=cond@entry=0x57e1a0078fe0, mutex=mutex@entry=0x57e1a0079018) at ./nptl/pthread_cond_wait.c:618
#5  0x000057e19c58ebbb in qemu_cond_wait_impl (cond=0x57e1a0078fe0, mutex=0x57e1a0079018, file=0x57e19c6431d4 "../ui/vnc-jobs.c", line=248) at ../util/qemu-thread-posix.c:225
#6  0x000057e19bfa916b in vnc_worker_thread_loop (queue=queue@entry=0x57e1a0078fe0) at ../ui/vnc-jobs.c:248
#7  0x000057e19bfa9e48 in vnc_worker_thread (arg=arg@entry=0x57e1a0078fe0) at ../ui/vnc-jobs.c:362
#8  0x000057e19c58dfc8 in qemu_thread_start (args=0x57e19f84c720) at ../util/qemu-thread-posix.c:541
#9  0x00007708fc6e2134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#10 0x00007708fc7627dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 9 (Thread 0x7708f1c006c0 (LWP 1399599) "CPU 3/KVM"):
#0  __GI___ioctl (fd=36, request=request@entry=44672) at ../sysdeps/unix/sysv/linux/ioctl.c:36
#1  0x000057e19c3d5a09 in kvm_vcpu_ioctl (cpu=cpu@entry=0x57e19ed7c360, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3052
#2  0x000057e19c3d5f51 in kvm_cpu_exec (cpu=cpu@entry=0x57e19ed7c360) at ../accel/kvm/kvm-all.c:2869
#3  0x000057e19c3d7795 in kvm_vcpu_thread_fn (arg=arg@entry=0x57e19ed7c360) at ../accel/kvm/kvm-accel-ops.c:50
#4  0x000057e19c58dfc8 in qemu_thread_start (args=0x57e19ed85400) at ../util/qemu-thread-posix.c:541
#5  0x00007708fc6e2134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007708fc7627dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 8 (Thread 0x7708f26006c0 (LWP 1399598) "CPU 2/KVM"):
#0  __GI___ioctl (fd=34, request=request@entry=44672) at ../sysdeps/unix/sysv/linux/ioctl.c:36
#1  0x000057e19c3d5a09 in kvm_vcpu_ioctl (cpu=cpu@entry=0x57e19ed72ae0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3052
#2  0x000057e19c3d5f51 in kvm_cpu_exec (cpu=cpu@entry=0x57e19ed72ae0) at ../accel/kvm/kvm-all.c:2869
#3  0x000057e19c3d7795 in kvm_vcpu_thread_fn (arg=arg@entry=0x57e19ed72ae0) at ../accel/kvm/kvm-accel-ops.c:50
#4  0x000057e19c58dfc8 in qemu_thread_start (args=0x57e19ed7b940) at ../util/qemu-thread-posix.c:541
#5  0x00007708fc6e2134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007708fc7627dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 7 (Thread 0x7708f30006c0 (LWP 1399597) "CPU 1/KVM"):
#0  __GI___ioctl (fd=32, request=request@entry=44672) at ../sysdeps/unix/sysv/linux/ioctl.c:36
#1  0x000057e19c3d5a09 in kvm_vcpu_ioctl (cpu=cpu@entry=0x57e19ed690a0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3052
#2  0x000057e19c3d5f51 in kvm_cpu_exec (cpu=cpu@entry=0x57e19ed690a0) at ../accel/kvm/kvm-all.c:2869
#3  0x000057e19c3d7795 in kvm_vcpu_thread_fn (arg=arg@entry=0x57e19ed690a0) at ../accel/kvm/kvm-accel-ops.c:50
#4  0x000057e19c58dfc8 in qemu_thread_start (args=0x57e19ed720c0) at ../util/qemu-thread-posix.c:541
#5  0x00007708fc6e2134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007708fc7627dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 6 (Thread 0x7708f3e006c0 (LWP 1399596) "CPU 0/KVM"):
#0  __GI___ioctl (fd=30, request=request@entry=44672) at ../sysdeps/unix/sysv/linux/ioctl.c:36
#1  0x000057e19c3d5a09 in kvm_vcpu_ioctl (cpu=cpu@entry=0x57e19ed38840, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3052
#2  0x000057e19c3d5f51 in kvm_cpu_exec (cpu=cpu@entry=0x57e19ed38840) at ../accel/kvm/kvm-all.c:2869
#3  0x000057e19c3d7795 in kvm_vcpu_thread_fn (arg=arg@entry=0x57e19ed38840) at ../accel/kvm/kvm-accel-ops.c:50
#4  0x000057e19c58dfc8 in qemu_thread_start (args=0x57e19e5fe3f0) at ../util/qemu-thread-posix.c:541
#5  0x00007708fc6e2134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007708fc7627dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 5 (Thread 0x7708f9c64480 (LWP 1399595) "vhost-1399518"):
#0  0x0000000000000000 in ?? ()
Backtrace stopped: Cannot access memory at address 0x0

Thread 4 (Thread 0x7708f9c64480 (LWP 1399563) "vhost-1399518"):
#0  0x0000000000000000 in ?? ()
Backtrace stopped: Cannot access memory at address 0x0

Thread 3 (Thread 0x7708f8c006c0 (LWP 1399520) "kvm"):
#0  0x00007708fc755256 in __ppoll (fds=0x7708ec0030c0, nfds=8, timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:42
#1  0x000057e19c5a69dd in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/x86_64-linux-gnu/bits/poll2.h:64
#2  0x000057e19c58bae9 in fdmon_poll_wait (ctx=0x57e19e880890, ready_list=0x7708f8bfb088, timeout=-1) at ../util/fdmon-poll.c:79
#3  0x000057e19c58af8d in aio_poll (ctx=0x57e19e880890, blocking=blocking@entry=true) at ../util/aio-posix.c:670
#4  0x000057e19c41f356 in iothread_run (opaque=opaque@entry=0x57e19e657300) at ../iothread.c:63
#5  0x000057e19c58dfc8 in qemu_thread_start (args=0x57e19e880ee0) at ../util/qemu-thread-posix.c:541
#6  0x00007708fc6e2134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#7  0x00007708fc7627dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 2 (Thread 0x7708f96006c0 (LWP 1399519) "call_rcu"):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x000057e19c58f2ca in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at ./include/qemu/futex.h:29
#2  qemu_event_wait (ev=ev@entry=0x57e19d476548 <rcu_call_ready_event>) at ../util/qemu-thread-posix.c:464
#3  0x000057e19c59a222 in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:278
#4  0x000057e19c58dfc8 in qemu_thread_start (args=0x57e19e6028c0) at ../util/qemu-thread-posix.c:541
#5  0x00007708fc6e2134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007708fc7627dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 1 (Thread 0x7708f9c64480 (LWP 1399518) "kvm"):
#0  0x00007708fc755256 in __ppoll (fds=0x57e19f5788c0, nfds=77, timeout=<optimized out>, timeout@entry=0x7ffeb4c19c90, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:42
#1  0x000057e19c5a697e in ppoll (__ss=0x0, __timeout=0x7ffeb4c19c90, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/x86_64-linux-gnu/bits/poll2.h:64
#2  qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=timeout@entry=2999542332) at ../util/qemu-timer.c:351
#3  0x000057e19c5a3dae in os_host_main_loop_wait (timeout=2999542332) at ../util/main-loop.c:305
#4  main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:589
#5  0x000057e19c1be1a9 in qemu_main_loop () at ../system/runstate.c:783
#6  0x000057e19c3e1226 in qemu_default_main () at ../system/main.c:37
#7  0x00007708fc68024a in __libc_start_call_main (main=main@entry=0x57e19bf5c8e0 <main>, argc=argc@entry=76, argv=argv@entry=0x7ffeb4c19ea8) at ../sysdeps/nptl/libc_start_call_main.h:58
#8  0x00007708fc680305 in __libc_start_main_impl (main=0x57e19bf5c8e0 <main>, argc=76, argv=0x7ffeb4c19ea8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffeb4c19e98) at ../csu/libc-start.c:360
#9  0x000057e19bf5e621 in _start ()
[Inferior 1 (process 1399518) detached]
 
Freeze @ 10:49

Guest (docker1) journal:
Code:
Jul 28 10:49:02 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-94d86058121177b1bd757d87ed5d0351a7c4436ebbb5ec23b3868cb877daf010-runc.Xgxda9.mount: Deactivated successfully.
Jul 28 10:49:04 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-8901c613534db6199f57118c84bc66ed0d28382bcafa407409f859d609528022-runc.uuti9I.mount: Deactivated successfully.
Jul 28 10:49:09 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-a6eeb5a6e931b7315df89dc6263ae8685a873b3f808b9087c6d858382153706f-runc.6xMtT2.mount: Deactivated successfully.
Jul 28 10:49:10 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-94d86058121177b1bd757d87ed5d0351a7c4436ebbb5ec23b3868cb877daf010-runc.6WtksS.mount: Deactivated successfully.
Jul 28 10:49:10 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-510c4ca1c166eceba5fda5cde74049f304a6845c5051c37ac5d6a165497908e1-runc.QLVI2j.mount: Deactivated successfully.
Jul 28 10:49:18 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-8d57d770ccab3ddb8b5ab160cb99dc463c814d23331cb99ba5c7025ba0b06a70-runc.hk8LOZ.mount: Deactivated successfully.
Jul 28 10:49:19 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-94d86058121177b1bd757d87ed5d0351a7c4436ebbb5ec23b3868cb877daf010-runc.8Yp4Bj.mount: Deactivated successfully.
Jul 28 10:49:21 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-94d86058121177b1bd757d87ed5d0351a7c4436ebbb5ec23b3868cb877daf010-runc.WUgfoz.mount: Deactivated successfully.
Jul 28 10:49:23 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-557e93a16fc14867ceaf1e5333b0b8add40dd2d6eae91cf94f13b063198462b5-runc.EhLm3L.mount: Deactivated successfully.
Jul 28 10:49:25 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-94d86058121177b1bd757d87ed5d0351a7c4436ebbb5ec23b3868cb877daf010-runc.3qEILk.mount: Deactivated successfully.
Jul 28 10:49:27 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-94d86058121177b1bd757d87ed5d0351a7c4436ebbb5ec23b3868cb877daf010-runc.W3LzHZ.mount: Deactivated successfully.
Jul 28 10:49:31 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-94d86058121177b1bd757d87ed5d0351a7c4436ebbb5ec23b3868cb877daf010-runc.sNWnO2.mount: Deactivated successfully.
Jul 28 10:49:34 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-8901c613534db6199f57118c84bc66ed0d28382bcafa407409f859d609528022-runc.EijFlj.mount: Deactivated successfully.
Jul 28 10:49:36 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-94d86058121177b1bd757d87ed5d0351a7c4436ebbb5ec23b3868cb877daf010-runc.bF9DzH.mount: Deactivated successfully.
Jul 28 10:49:38 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-94d86058121177b1bd757d87ed5d0351a7c4436ebbb5ec23b3868cb877daf010-runc.YMEOCh.mount: Deactivated successfully.
Jul 28 10:49:40 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-510c4ca1c166eceba5fda5cde74049f304a6845c5051c37ac5d6a165497908e1-runc.L230cU.mount: Deactivated successfully.
Jul 28 10:49:42 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-94d86058121177b1bd757d87ed5d0351a7c4436ebbb5ec23b3868cb877daf010-runc.fGcX8w.mount: Deactivated successfully.
Jul 28 10:49:48 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-8d57d770ccab3ddb8b5ab160cb99dc463c814d23331cb99ba5c7025ba0b06a70-runc.630nPv.mount: Deactivated successfully.
Jul 28 10:49:53 docker1 systemd[1]: run-docker-runtime\x2drunc-moby-1dd2ec452efe82c6f9acd158c403cda8b09b3f1ccec1dfaa50ddb1e75503ca3e-runc.Hgac9n.mount: Deactivated successfully.
-- Boot cbad462c1ed04ae4b17ba19d5b424f19 --
Jul 28 12:48:19 docker1 kernel: Linux version 6.8.0-39-generic (buildd@lcy02-amd64-112) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.2.0-23ubuntu4) 13.2.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #39-Ubuntu SMP PREEMPT_DYNAMIC Fri Jul  5 21:49:14 UTC 2024 (Ubuntu 6.8.0-39.39-generic 6.8.8)
Jul 28 12:48:19 docker1 kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.8.0-39-generic root=UUID=f90df74e-31c6-4338-b11e-6a796b24ce10 ro

Host (pve1) journal:
Code:
Jul 28 10:17:01 pve1 CRON[1666221]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jul 28 10:17:01 pve1 CRON[1666222]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jul 28 10:17:01 pve1 CRON[1666221]: pam_unix(cron:session): session closed for user root
Jul 28 10:18:40 pve1 pvestatd[906]: auth key pair too old, rotating..
Jul 28 10:29:04 pve1 smartd[533]: Device: /dev/sda [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 69 to 72
Jul 28 10:29:04 pve1 smartd[533]: Device: /dev/sdb [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 82 to 83
Jul 28 10:59:04 pve1 smartd[533]: Device: /dev/sda [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 72 to 73
Jul 28 11:17:01 pve1 CRON[1675683]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jul 28 11:17:01 pve1 CRON[1675684]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jul 28 11:17:01 pve1 CRON[1675683]: pam_unix(cron:session): session closed for user root
Jul 28 11:29:10 pve1 smartd[533]: Device: /dev/sdb [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 65 to 66
 
Unfortunately, I can't see any obvious reason for the issue from the logs/outputs. Is there anything special inside the VM that happens around the time of the freezes?
 
The VM just runs docker compose with 3 services (Nextcloud AIO, Nginx Proxy Manager and Syncthing), so there are no obvious scheduled tasks that I know of except the standard Nextcloud cron jobs. When looking at the node summary graphs I notice that server load, CPU and RAM usage increase before the freeze.
I have no idea how to interpret the output of qm status 100 --verbose, but are the values for scsi0 normal? flush_total_time_ns "feels" like a lot.
Code:
    scsi0:
        account_failed: 1
        account_invalid: 1
        failed_flush_operations: 0
        failed_rd_operations: 0
        failed_unmap_operations: 0
        failed_wr_operations: 0
        failed_zone_append_operations: 0
        flush_operations: 2658104
        flush_total_time_ns: 4513061685828
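A rough sanity check of those two counters (assuming they are cumulative since the VM started) gives the average flush latency:

Code:
# total flush time divided by number of flushes, converted from ns to ms
echo "scale=2; 4513061685828 / 2658104 / 1000000" | bc
# prints ~1.69, i.e. roughly 1.7 ms per flush on average

which on its own does not look alarming.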
Edit: Never mind, the numbers are high even while the VM is running normally before a freeze.
 
Freeze happened again - still no clues except I've noticed these messages in the guest journalctl: kernel: RPC: Could not send backchannel reply error: -110
I have NFS mounts on my guest (exported from the Proxmox host), so maybe it could be related to https://forum.proxmox.com/threads/memory-leak-on-6-8-4-2-3-pve-8-2.146649/post-685686 except apparently that should already be fixed (https://lists.proxmox.com/pipermail/pve-devel/2024-July/064614.html).
I've changed my NFS mounts to soft to see if that will prevent a freeze; I'm worried about what an interrupted NFS operation would do to my data though :-S
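For reference, a sketch of what such a soft mount could look like in the guest's /etc/fstab (server address and export path are placeholders, not from this thread):

Code:
# soft mount: requests eventually fail with an I/O error instead of hanging
# forever (timeo is in tenths of a second, retrans is the number of retries)
192.168.1.10:/export/share  /mnt/share  nfs  soft,timeo=100,retrans=3  0  0

That bounded-failure behaviour is exactly the data-integrity trade-off mentioned above: applications have to be able to cope with the resulting I/O errors.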
 
I'm on 6.8.8-4-pve though. I didn't want to risk NFS soft mounts, so I've tried adding the mount option intr instead, but unfortunately it's apparently deprecated: docker1 kernel: nfs: Deprecated parameter 'intr'
 
Freeze happened again - still no clues except I've noticed these messages in the guest journalctl: kernel: RPC: Could not send backchannel reply error: -110
Oh sorry, I read too quickly and didn't notice that it was in the guest. But it might very well be that the guest kernel is affected by the mentioned bug. After all, Proxmox VE's kernel is based on the Ubuntu one, and maybe Ubuntu hasn't yet backported the fix.
 
I think you are running into a still-unfixed problem with I/O threading. Try disabling iothread.

-> scsi0: cryptssd:100/vm-100-disk-0.raw,iothread=0,size=256G

And see if that fixes your problem.
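One way to apply that change (a sketch, not verified on this particular setup) is to re-set the scsi0 drive string via qm and then do a full stop/start so the new I/O thread setting takes effect:

Code:
qm set 100 --scsi0 cryptssd:100/vm-100-disk-0.raw,iothread=0,size=256G
qm stop 100 && qm start 100

Editing iothread=1 to iothread=0 directly in /etc/pve/qemu-server/100.conf achieves the same thing.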
 
Hi,
I think you are running into a still-unfixed problem with I/O threading. Try disabling iothread.
what problem are you referring to exactly?
 

Yes,

But I realized some time later that these error messages didn't always show up. We had nodes that showed no errors at all, neither on the Proxmox hosts nor in the VMs. It was as if all systems thought they were doing very well and were completely OK ... except that the VMs didn't do anything anymore.
It's as if the VMs were waiting for something.
 
Thanks for the tip. I'll give it a try after the next freeze.
 
