qmp command 'guest-fsfreeze-freeze' failed - got timeout | vm freeze not coming back

Hi

I have an issue with a specific VM that freezes while a snapshot is running and does not come back online; the file system just stays frozen.

This is the log from the task itself:

guest-fsfreeze-freeze problems - VM xxx qmp command 'guest-fsfreeze-freeze' failed - got timeout
guest-fsfreeze-thaw problems - VM xxx qmp command 'guest-fsfreeze-thaw' failed - got timeout
TASK OK

By the way, it also took around 1 hour to complete; other VMs were fine while snapshotting.

The only difference I found between the other VMs and the problematic one is that it has an extra disk which isn't even mounted yet. I was wondering if the freeze command couldn't execute for that disk because it isn't mounted, or if this is a bug.
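To narrow it down, I was thinking of triggering the freeze by hand from the host, outside of any snapshot, to see if it hangs the same way (just a sketch; <vmid> stands in for the VM's ID):

# on the PVE host - freeze, check status, then thaw immediately
qm agent <vmid> fsfreeze-freeze
qm agent <vmid> fsfreeze-status
qm agent <vmid> fsfreeze-thaw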

This is the log from within the VM (/var/log/messages):

Jun 24 02:04:09 qemu-ga: info: guest-ping called
Jun 24 02:04:09 qemu-ga: info: guest-fsfreeze called
Jun 24 02:04:09 qemu-ga: info: executing fsfreeze hook with arg 'freeze'
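The last line shows the agent entering the fsfreeze hook without ever logging a result, so maybe the hang is in the hook itself. Assuming the hook sits at the default path shipped by the qemu-guest-agent package (/etc/qemu/fsfreeze-hook on my distro; adjust if yours differs), it can be run by hand inside the guest with tracing to see where it stalls:

# inside the guest - run the agent's fsfreeze hook manually with tracing
bash -x /etc/qemu/fsfreeze-hook freeze
bash -x /etc/qemu/fsfreeze-hook thaw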

Please advise.

Thanks.
 
The agent is running well and is responsive.

This is the output of pveversion -v:


proxmox-ve: 6.1-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-2
pve-cluster: 6.1-2
pve-container: 3.0-16
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
pve-zsync: 2.0-3
qemu-server: 6.1-4
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2

Also, this is the VM config:

agent: 1
bios: seabios
boot: cdn
bootdisk: scsi0
cipassword: XXXXXXXXXX
ciuser: root
cores: 10
cpu: host
hotplug: disk,network,usb,memory,cpu
ide2: XXXXXX:XXX/vm-XXX-cloudinit.qcow2,media=cdrom
ipconfig0: ip=X.X.X.X/24,gw=X.X.X.X
localtime: 1
memory: 4096
name: XX.XXXX.XXX
net0: virtio=XX:XX:XX:XX,bridge=vmbr0,firewall=0,rate=25
numa: 1
onboot: 1
ostype: l26
parent: autodaily200624
scsi0: LOCALZFS:vm-XXXX-disk-0,cache=none,size=200G
scsi1: LOCALZFS:vm-XXXX-disk-1,cache=none,size=150G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=a3374995-dbdb-40bc-915f-cd62fd164b43
sockets: 1
vcpus: 10
vga: std
vmgenid: 75898c96-1abc-4624-9d41-939a19c20187
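As far as I understand, guest-fsfreeze-freeze only walks the mounted filesystems, so the extra 150G disk (scsi1) shouldn't even be touched while it's unmounted. To double-check that from inside the guest:

# inside the guest - an empty FSTYPE/MOUNTPOINT for the 150G disk
# means the agent has no filesystem there to freeze
lsblk -f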



Thanks
 
Yes, the VM has been shut down many times before, and it got stuck on more than one occasion. As I mentioned, other VMs were fine; the only difference is the unmounted disk this one has.
 
Please try to update qemu-server and pve-qemu-kvm to the current version: apt update && apt full-upgrade

Then try again, hopefully it works :)
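For reference, something like this on the host (note that the VM keeps running the old QEMU binary until it is fully stopped and started again, or migrated):

apt update
apt full-upgrade
# confirm the updated packages afterwards
pveversion -v | grep -E 'qemu-server|pve-qemu-kvm'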