[SOLVED] Cannot remove Snapshot ( VM is locked (snapshot-delete) )

Esben Viborg

Member
Oct 12, 2017
Hi

Yesterday I took a snapshot of one of my VMs. The snapshot failed for some reason. When I try to run it again I get an error: "VM is locked (snapshot-delete)"

I have unlocked the VM with `qm unlock 101`, and when I try to delete the snapshot after the unlock I get the following error:

TASK ERROR: lvremove snapshot 'pve/snap_vm-101-disk-1_test' error: Failed to find logical volume "pve/snap_vm-101-disk-1_test"

Any idea how I can resolve this?

/Glacier
 
The snapshot entry is stored in the /etc/pve/qemu-server/<vmid>.conf file of your VM, you can delete the entry by hand.
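For reference, snapshots are stored as bracketed sections at the bottom of `/etc/pve/qemu-server/<vmid>.conf`, and the stale lock shows up as a `lock:` line at the top. A hypothetical VM 101 with one stuck snapshot named `test` might look roughly like this (illustrative and trimmed, not a real config):

```
lock: snapshot-delete
memory: 2048
parent: test
scsi0: local-lvm:vm-101-disk-1,size=32G

[test]
memory: 2048
scsi0: local-lvm:vm-101-disk-1,size=32G
snaptime: 1507800000
```

Removing the `[test]` section, plus the `lock:` and `parent:` lines that refer to it, is the by-hand cleanup. Keep a backup copy of the file before editing.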

Alwin,

Is this accomplished by removing or altering lines in the /etc/pve/qemu-server/<vmid>.conf file? I'm not seeing documentation on this. I have a snapshot that cannot be deleted and is preventing me from growing a critically low disk. Any assistance would be greatly appreciated.

Thanks!
 
Just remove the line `lock: snapshot`.
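If you'd rather not hand-edit, a one-liner can drop the lock line. A minimal sketch, run here against a throwaway copy rather than the live file (the path and values are hypothetical; back up the real config first):

```shell
# Stand-in copy; the real file is /etc/pve/qemu-server/<vmid>.conf
conf=/tmp/101-lock.conf
printf 'lock: snapshot-delete\nmemory: 2048\n' > "$conf"

# Delete any "lock: ..." line, same effect as removing it in an editor
sed -i '/^lock:/d' "$conf"

cat "$conf"
# memory: 2048
```

Note that `qm unlock <vmid>` clears the lock through the proper tooling and is the safer first step, as earlier in this thread; editing the file directly is the fallback when that isn't enough.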
 
This happened to me today after I stopped a manual snapshot task (I had forgotten to uncheck RAM) while it was writing RAM to disk.
The VM remained killed and locked. The snapshot also remained in the snapshot list, but was no longer shown as a child.
After `qm unlock`, trying to remove the snapshot resulted in

> TASK ERROR: zfs error: could not find any snapshots to destroy; check snapshot names.

The VM was locked again, so I ran `qm unlock` once more. The VM also refused to start after that.
Manually removing the orphaned snapshot entry from the VM config fixed it.
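That manual cleanup can be scripted. A sketch against a throwaway copy of the config, assuming an orphaned snapshot section named `broken` (the filename and snapshot name are hypothetical; back up the real config before touching it):

```shell
# Stand-in for /etc/pve/qemu-server/<vmid>.conf
conf=/tmp/101-snap.conf
printf 'memory: 2048\nparent: broken\n\n[broken]\nmemory: 2048\nsnaptime: 1507800000\n' > "$conf"

# Drop the [broken] section (up to the next section header or EOF),
# then drop the parent: line that still points at it
awk '/^\[broken\]$/{skip=1; next} /^\[/{skip=0} !skip' "$conf" \
  | sed '/^parent: broken$/d' > "$conf.clean"

cat "$conf.clean"
# memory: 2048
```

Inspect the `.clean` file before moving it into place; with several snapshots, the `parent:` lines chain between sections and must stay consistent.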
 
I just did a test of this.

Pre-snapshot delete:

Bash:

root@proxmox1:~# lsblk -f
NAME  FSTYPE FSVER LABEL       UUID                                   FSAVAIL FSUSE% MOUNTPOINT
loop0                                                                              
sda                                                                                
│                                        
└─sda3
      LVM2_m LVM2              **************************************              
  ├─pve-swap
  │   swap   1                 ***********************************                  [SWAP]
  └─pve-root
      ext4   1.0               ************************************     53.1G    70% /

Then I deleted the problematic snapshot

Bash:
root@proxmox1:~# lsblk -f
NAME  FSTYPE FSVER LABEL       UUID                                   FSAVAIL FSUSE% MOUNTPOINT
loop0                                                                              
sda
│                                                                                                                            
└─sda3
      LVM2_m LVM2              **************************************               
  ├─pve-swap
  │   swap   1                 *********************************                  [SWAP]
  └─pve-root
      ext4   1.0               **********************************     53.1G    70% /

No change in disk usage

Then I figured that, since the problem started with a failed snapshot deletion, the snapshot's storage might have already been removed. So I created a new snapshot.

After new snapshot created
Bash:
root@proxmox1:~# lsblk -f
NAME  FSTYPE FSVER LABEL       UUID                                   FSAVAIL FSUSE% MOUNTPOINT
loop0                                                                              
sda
│                                                                                                                            
└─sda3
      LVM2_m LVM2              **************************************               
  ├─pve-swap
  │   swap   1                 *********************************                  [SWAP]
  └─pve-root
      ext4   1.0               **********************************     53G    70% /

So it looks like this snapshot, at least, took only 0.1G.

Then I removed this snapshot by deleting its entry in the config:

Bash:
root@proxmox1:~# lsblk -f
NAME  FSTYPE FSVER LABEL       UUID                                   FSAVAIL FSUSE% MOUNTPOINT
loop0                                                                              
sda
│                                                                                                                            
└─sda3
      LVM2_m LVM2              **************************************               
  ├─pve-swap
  │   swap   1                 *********************************                  [SWAP]
  └─pve-root
      ext4   1.0               **********************************     53G    70% /

No change in space used, so the 0.1G that snapshot occupied is still in use; the snapshot is still taking space, it's just inaccessible in the UI.

How do I regain the space that the snapshot took? The snapshot in question had a lot of data added after it was taken. I took it in case there was an issue after I tried a rollback. There wasn't, so I tried to delete it, and it sat deleting for a long time before giving an error. There was probably 8 GB or more added while that snapshot existed, so I assume that space is being held somewhere. How do I reclaim it?

Edit: after a bit of time it did actually go back to 53.2G available, so it wasn't immediate, but it worked.
 
