Snapshot causes VM to become unresponsive.

Don't you need ZFS for snapshots?
No, qcow2 supports snapshots too. Your deleted post shows that the main thread in QEMU is still busy doing IO, so it might very well be that the snapshot was just not finished yet. ZFS can be much faster than huge qcow2 files for snapshots though.
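To check whether the snapshot is in fact still running, something along these lines should help (a rough sketch; replace the VMID with yours, and the pid file location under /var/run/qemu-server is an assumption based on where PVE normally keeps it):

Code:
# rough sketch: check whether a snapshot task/lock is still active for the VM
VMID=113
qm listsnapshot $VMID
grep -i snapshot /var/log/pve/tasks/active

# watch the IO counters of the QEMU process; if they keep growing,
# the snapshot is still being written out
PID=$(cat /var/run/qemu-server/$VMID.pid)
grep -E 'read_bytes|write_bytes' /proc/$PID/io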
 
Hi... I have the same problem. It occurred with PVE version 9.0.5; I updated to 9.0.6 and the problem persists.

I took a snapshot including RAM, and the host became inaccessible (sometimes it only responds to ping). When I took a snapshot without RAM, everything worked fine.
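For reference, the CLI equivalent of what I did would be roughly this (the snapshot names are just examples):

Code:
# snapshot including RAM state - this is the case where the VM becomes unresponsive
qm snapshot 113 snap-with-ram --vmstate 1

# snapshot without RAM state - this works fine
qm snapshot 113 snap-without-ram --vmstate 0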

Code:
qm config 113
boot: order=scsi0
cores: 8
cpu: host
memory: 8192
meta: creation-qemu=9.2.0,ctime=1751910974
name: srvpaffrw001
net0: virtio=BC:24:11:65:D3:71,bridge=vmbr4,firewall=1,link_down=1
net1: virtio=BC:24:11:6F:08:3F,bridge=vmbr6,firewall=1,link_down=1
net10: virtio=BC:24:11:7B:91:9F,bridge=vmbr131,firewall=1,link_down=1
net2: virtio=BC:24:11:8A:5F:61,bridge=vmbr0,firewall=1
net3: virtio=BC:24:11:ED:0F:00,bridge=vmbr100,firewall=1,link_down=1
net4: virtio=BC:24:11:A9:3B:40,bridge=vmbr110,firewall=1,link_down=1
net5: virtio=BC:24:11:A9:9A:0F,bridge=vmbr120,firewall=1,link_down=1
net6: virtio=BC:24:11:F1:26:23,bridge=vmbr130,firewall=1,link_down=1
net7: virtio=BC:24:11:8F:7E:E9,bridge=vmbr101,firewall=1,link_down=1
net8: virtio=BC:24:11:5E:36:18,bridge=vmbr111,firewall=1,link_down=1
net9: virtio=BC:24:11:0C:65:49,bridge=vmbr121,firewall=1,link_down=1
numa: 0
ostype: l26
parent: BACKUP01
scsi0: VG01_LV01_PVE005:vm-113-disk-0,iothread=1,size=80G
scsihw: virtio-scsi-single
smbios1: uuid=15f83ce4-cb33-45d5-b671-7a145bb74991
sockets: 1
startup: order=1
vmgenid: 2a49f77a-f31b-4bef-98d1-7a113ebcc526


Code:
qm status 113 --verbose
cpus: 8
disk: 0
diskread: 0
diskwrite: 0
lock: snapshot
maxdisk: 85899345920
maxmem: 8589934592
mem: 8592143360
memhost: 8592143360
name: srvpaffrw001
netin: 30754541
netout: 2848030
nics:
        tap113i0:
                netin: 21382609
                netout: 0
        tap113i1:
                netin: 731553
                netout: 0
        tap113i10:
                netin: 210
                netout: 0
        tap113i2:
                netin: 2413048
                netout: 2848030
        tap113i3:
                netin: 1291711
                netout: 0
        tap113i4:
                netin: 73217
                netout: 0
        tap113i5:
                netin: 3713153
                netout: 0
        tap113i6:
                netin: 1148480
                netout: 0
        tap113i7:
                netin: 210
                netout: 0
        tap113i8:
                netin: 70
                netout: 0
        tap113i9:
                netin: 280
                netout: 0
pid: 2236560
pressurecpufull: 0
pressurecpusome: 0
pressureiofull: 0
pressureiosome: 0
pressurememoryfull: 0
pressurememorysome: 0
proxmox-support:
qmpstatus: running
status: running
uptime: 2898
vmid: 113

The VM stayed in the locked state for a long time. After that, it came back online, but it was still inaccessible.
One thing I'm not sure is normal: when the VM freezes and I get this verbose status, the mem value (8592143360) is greater than maxmem (8589934592). In the PVE GUI I can see 100.03% memory usage. Is this some kind of memory overflow?
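For what it's worth, the difference is tiny; this is just the arithmetic on the values above:

Code:
# mem minus maxmem, values taken from the qm status output above
echo $((8592143360 - 8589934592))   # 2208768 bytes, about 2.1 MiB, i.e. the ~0.03% the GUI shows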


Code:
pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.6 (running version: 9.0.6/49c767b70aeb6648)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.14: 6.14.11-1
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8: 6.8.12-13
proxmox-kernel-6.8.12-11-pve-signed: 6.8.12-11
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx10
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-1
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.2
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.1
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.10
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-4
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.19
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.4-pve1

Is there a patch for this bug?