QCOW2 snapshot missing after delete

proudcanadianeh

New Member
Dec 13, 2025
Hoping for some help with this. I'm very new to Proxmox and trying to learn the platform quickly so I can evaluate deploying it across a few clusters.

Right now I have a test cluster set up at my desk, connected via iSCSI to a Synology LUN. I attempted to migrate a VM from one host to another for the first time, and it failed because the VM had snapshots on it. No problem, I was only testing snapshots on this VM anyway. I had one snapshot called Test8 and one called Test9.

I selected Test8 and told it to delete, and the task output ended with the error below. The result is that the VM still thinks Test8 exists, but no further snapshot-related operations work because the Test8 qcow2 disk no longer exists. Would love some advice!

(If it matters, Test9 had the running memory state saved while Test8 did not.)

delete qemu external snapshot
delete first snapshot Test8
block-commit Test9 to base:Test8
commit-drive-scsi0: transferred 4.2 MiB of 100.0 GiB (0.00%) in 0s
commit-drive-scsi0: transferred 3.3 GiB of 100.0 GiB (3.34%) in 1s
commit-drive-scsi0: transferred 22.3 GiB of 100.0 GiB (22.32%) in 2s
commit-drive-scsi0: transferred 49.3 GiB of 100.0 GiB (49.34%) in 3s
commit-drive-scsi0: commit-job finished
delete old /dev/Synology-VG/snap_vm-100-disk-0_Test9.qcow2
Logical volume "snap_vm-100-disk-0_Test9.qcow2" successfully removed.
Renamed "snap_vm-100-disk-0_Test8.qcow2" to "snap_vm-100-disk-0_Test9.qcow2" in volume group "Synology-VG"
blockdev replace Test8 by Test9
delete qemu external snapshot
delete first snapshot Test8
block-commit Test9 to base:Test8
commit-drive-efidisk0: transferred 0.0 B of 528.0 KiB (0.00%) in 0s
commit-drive-efidisk0: commit-job finished
delete old /dev/Synology-VG/snap_vm-100-disk-1_Test9.qcow2
Logical volume "snap_vm-100-disk-1_Test9.qcow2" successfully removed.
Renamed "snap_vm-100-disk-1_Test8.qcow2" to "snap_vm-100-disk-1_Test9.qcow2" in volume group "Synology-VG"
blockdev replace Test8 by Test9
TASK ERROR: VM 100 qmp command 'blockdev-reopen' failed - Cannot change the option 'aio'
 
You have a working cluster with a shared LUN? If so, something is probably misconfigured: with shared storage, a migration doesn't need to copy the VM disk files at all, since they are already visible to every node. And if your storage isn't actually shared, don't run a cluster; you won't have much fun with it.
 
It is a shared LUN, and it seems to be working fully across the nodes in my cluster; for the migration I was trying to move the running VM, not the underlying disks. Live migration works for my other VMs, but this one, because of the failed snapshot deletion, seems to be stuck: it can't migrate and can't remove snapshots, though it otherwise boots and runs fine.
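One possible way out of the stuck state (a sketch, not verified against your setup): judging by the log, the task had already committed and renamed the volumes and only died at the final `blockdev-reopen`, so the VM config likely still carries a `[Test8]` section pointing at a volume that no longer exists under that name. `qm delsnapshot 100 Test8 --force` should drop the snapshot from the config even when the volume removal fails; failing that, the stale section can be removed by hand from `/etc/pve/qemu-server/100.conf` (back the file up first, and check with `lvs Synology-VG` which snapshot volumes actually remain). A hypothetical illustration of the leftover layout, not your exact file:

```
# /etc/pve/qemu-server/100.conf (illustrative)
scsi0: Synology-VG:vm-100-disk-0,size=100G

[Test8]
# stale section left behind by the failed delete; its volume was
# renamed to ..._Test9.qcow2, so every operation on Test8 now fails
parent: ...
...

[Test9]
parent: Test8
...
```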