Trying to take a snapshot backup does not work; the backup hangs at the fs-freeze step and the guest freezes.
It looks like this thread:
https://forum.proxmox.com/threads/vm-hang-during-backup-fs-freeze.80152/
So far I've only seen this on a guest running Debian 11 (with MariaDB from mariadb.org).
Code:
INFO: starting new backup job: vzdump 144 --node kvm02 --remove 0 --mode snapshot --storage local --compress zstd
INFO: Starting Backup of VM 144 (qemu)
INFO: Backup started at 2021-11-17 16:29:37
INFO: status = running
INFO: VM Name: NFY-isengard
INFO: include disk 'scsi0' 'local-zfs:vm-144-disk-0' 60G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-qemu-144-2021_11_17-16_29_37.vma.zst'
INFO: issuing guest-agent 'fs-freeze' command
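To narrow it down, the same freeze that vzdump issues at this point can be sent by hand, decoupled from the backup job. A sketch, assuming the guest agent is enabled for the VM (VM id 144 as in the log; `timeout` just keeps a hung agent from wedging the shell):

```shell
# On the Proxmox host: issue the same guest-agent freeze that vzdump
# sends, but under a timeout, then check the state and undo it.
timeout 30 qm guest cmd 144 fsfreeze-freeze
qm guest cmd 144 fsfreeze-status
qm guest cmd 144 fsfreeze-thaw
```

If this hangs the same way, the problem is in the guest agent / kernel freeze path, not in vzdump itself.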
in guest syslog:
Code:
Nov 17 16:29:36 isengard qemu-ga: info: guest-ping called
Nov 17 16:29:37 isengard qemu-ga: info: guest-fsfreeze called
Nov 17 16:32:06 isengard kernel: [ 363.556779] INFO: task qemu-ga:370 blocked for more than 120 seconds.
Nov 17 16:32:06 isengard kernel: [ 363.556814] Not tainted 5.10.0-9-amd64 #1 Debian 5.10.70-1
Nov 17 16:32:06 isengard kernel: [ 363.556829] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 17 16:32:06 isengard kernel: [ 363.556852] task:qemu-ga state:D stack: 0 pid: 370 ppid: 1 flags:0x00004000
Nov 17 16:32:06 isengard kernel: [ 363.556861] Call Trace:
Nov 17 16:32:06 isengard kernel: [ 363.556881] __schedule+0x282/0x870
Nov 17 16:32:06 isengard kernel: [ 363.556886] schedule+0x46/0xb0
Nov 17 16:32:06 isengard kernel: [ 363.556888] percpu_down_write+0xd2/0xe0
Nov 17 16:32:06 isengard kernel: [ 363.556891] freeze_super+0x7f/0x130
Nov 17 16:32:06 isengard kernel: [ 363.556893] __x64_sys_ioctl+0x62/0xb0
Nov 17 16:32:06 isengard kernel: [ 363.556895] do_syscall_64+0x33/0x80
Nov 17 16:32:06 isengard kernel: [ 363.556897] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Nov 17 16:32:06 isengard kernel: [ 363.556903] RIP: 0033:0x7f26d5183cc7
Nov 17 16:32:06 isengard kernel: [ 363.556905] RSP: 002b:00007ffc4a00b2f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Nov 17 16:32:06 isengard kernel: [ 363.556907] RAX: ffffffffffffffda RBX: 0000559616e42790 RCX: 00007f26d5183cc7
Nov 17 16:32:06 isengard kernel: [ 363.556916] RDX: 0000000000080000 RSI: 00000000c0045877 RDI: 0000000000000006
Nov 17 16:32:06 isengard kernel: [ 363.556917] RBP: 0000000000000000 R08: 0000000000000000 R09: 000000000000002a
Nov 17 16:32:06 isengard kernel: [ 363.556918] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
Nov 17 16:32:06 isengard kernel: [ 363.556927] R13: 00007ffc4a00b3f8 R14: 00000000c0045877 R15: 0000000000000006
Nov 17 16:36:07 isengard kernel: [ 605.220798] INFO: task qemu-ga:370 blocked for more than 120 seconds.
Nov 17 16:36:07 isengard kernel: [ 605.220837] Not tainted 5.10.0-9-amd64 #1 Debian 5.10.70-1
Nov 17 16:36:07 isengard kernel: [ 605.220860] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 17 16:36:07 isengard kernel: [ 605.220893] task:qemu-ga state:D stack: 0 pid: 370 ppid: 1 flags:0x00004000
Nov 17 16:36:07 isengard kernel: [ 605.220896] Call Trace:
Nov 17 16:36:07 isengard kernel: [ 605.220904] __schedule+0x282/0x870
Nov 17 16:36:07 isengard kernel: [ 605.220906] schedule+0x46/0xb0
Nov 17 16:36:07 isengard kernel: [ 605.220909] percpu_down_write+0xd2/0xe0
Nov 17 16:36:07 isengard kernel: [ 605.220911] freeze_super+0x7f/0x130
Nov 17 16:36:07 isengard kernel: [ 605.220914] __x64_sys_ioctl+0x62/0xb0
Nov 17 16:36:07 isengard kernel: [ 605.220925] do_syscall_64+0x33/0x80
Nov 17 16:36:07 isengard kernel: [ 605.220926] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Nov 17 16:36:07 isengard kernel: [ 605.220929] RIP: 0033:0x7f26d5183cc7
Nov 17 16:36:07 isengard kernel: [ 605.220932] RSP: 002b:00007ffc4a00b2f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Nov 17 16:36:07 isengard kernel: [ 605.220934] RAX: ffffffffffffffda RBX: 0000559616e42790 RCX: 00007f26d5183cc7
Nov 17 16:36:07 isengard kernel: [ 605.220935] RDX: 0000000000080000 RSI: 00000000c0045877 RDI: 0000000000000006
Nov 17 16:36:07 isengard kernel: [ 605.220935] RBP: 0000000000000000 R08: 0000000000000000 R09: 000000000000002a
Nov 17 16:36:07 isengard kernel: [ 605.220936] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
Nov 17 16:36:07 isengard kernel: [ 605.220936] R13: 00007ffc4a00b3f8 R14: 00000000c0045877 R15: 0000000000000006
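The trace above shows qemu-ga stuck in freeze_super()/percpu_down_write(), i.e. the freeze ioctl waiting for in-flight filesystem writers to drain, which fits a busy MariaDB. One guest-side mitigation is to quiesce MariaDB before the freeze through the agent's fsfreeze hook. A minimal sketch, assuming the Debian package's /etc/qemu/fsfreeze-hook.d/ layout and passwordless root access to MariaDB over the unix socket (script name and FIFO path are hypothetical):

```shell
#!/bin/sh
# Hypothetical /etc/qemu/fsfreeze-hook.d/mariadb-flush: the guest agent
# calls hook scripts with "freeze" before fs-freeze and "thaw" after.
FIFO="${FIFO:-/tmp/mariadb-freeze.fifo}"

mariadb_freeze_hook() {
    case "$1" in
        freeze)
            mkfifo "$FIFO" || return 1
            # MariaDB holds FLUSH TABLES WITH READ LOCK only while this
            # client connection lives; the pipeline below keeps the client
            # alive until something is written to the FIFO on thaw.
            { echo 'FLUSH TABLES WITH READ LOCK;'; read _ < "$FIFO"; } | mysql -u root &
            ;;
        thaw)
            echo thaw > "$FIFO"   # unblocks the reader; client exits, lock drops
            rm -f "$FIFO"
            ;;
    esac
}

mariadb_freeze_hook "$@"
```

The lock is released on thaw because the mysql client exits as soon as its input pipe closes. The QEMU source tree ships a similar mysql-flush sample for its fsfreeze hook, which is worth comparing against.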
Host:
Code:
proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-helper: 6.4-8
pve-kernel-5.4: 6.4-7
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve1~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.6-pve1~bpo10+1
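For completeness, the guest-side versions can be gathered like this (hypothetical session inside the Debian 11 guest, output omitted):

```shell
# Inside the guest: agent version, service state, and running kernel.
dpkg -s qemu-guest-agent | grep Version
systemctl status qemu-guest-agent --no-pager
uname -r
```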
Any suggestions?