Removing Snapshots CLI

SawKyrom
Jun 3, 2021
Proxmox stopped working because the LVM storage filled up.

The lvs command shows Data% at 100 and several volumes labeled snap_vm...

I would like to delete these snapshots so that I can restore functionality for the VMs, which currently will not load because storage is full.

I attempted qm delsnapshot 105 snap_vm-105-disk-0_pfSense20210621, only to get the error snapshot 'snap...' does not exist, even though the volume is listed under that name by lvs.

Is there another way to delete snapshots in order to free up space?
 
Perhaps you have orphaned snapshots. This quick loop should tell you what Proxmox thinks is present and deletable:

for v in $(qm list | grep -E -v VMID | awk '{print $1}'); do
  for s in $(qm listsnapshot $v | grep -E -v current | awk '{print $2}'); do
    echo qm delsnapshot $v $s
  done
done
qm delsnapshot 200 snap1
qm delsnapshot 200 snap2
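
The loop above only echoes the commands for review. If the list matches what you expect, one way to run the deletions for real (entirely at your own risk) is to drop the echo:

for v in $(qm list | grep -E -v VMID | awk '{print $1}'); do
  for s in $(qm listsnapshot $v | grep -E -v current | awk '{print $2}'); do
    qm delsnapshot $v $s    # echo removed: this deletes the snapshot immediately
  done
done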

If lvs shows other snapshots that Proxmox does not know about, you can try removing them directly with lvremove, at your own risk.
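
For example, assuming the orphaned snapshot LV from the first post (substitute the VG and LV names that lvs actually reports on your system):

lvremove pve/snap_vm-105-disk-0_pfSense20210621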


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
For anyone who urgently needs space and can sacrifice a VM in order to free LVM-thin space and thus regain functionality, you can permanently remove a sacrificial VM's disk as follows:

lvs                  (to find the VM's logical volume and its volume group)
lvremove <VG>/<LV>   (e.g. lvremove pve/vm-105-disk-0)

Afterwards lvs will show the pool's Data% below 100, and the remaining VMs should function once more.
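
Note that lvremove only deletes the disk LV; the VM's config will still reference it. If the whole VM is expendable, a cleaner option (assuming its disks live on storage Proxmox manages) is to let Proxmox remove the config and all owned disks in one step:

qm destroy <vmid>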

*Do NOT try to extend the LVM-thin pool with the following command:

lvresize --size +1G --poolmetadatasize +16M <VG/LV>

Doing so resulted in the following errors on reboot:

Volume group "pve" not found
Cannot process volume group pve
/dev/mapper/pve-root: recovering journal
/dev/mapper/pve-root: clean, 66068/4308992 files, 8793010/17235969 blocks

I am now unable to access Proxmox via <host IP>:8006; only the command line works. It reports the error:
EXT4-fs error (device dm-15): ext4_mb_generate_buddy:747: group 1, block bitmap and bg descriptor inconsistent: 2616 vs 613 free clusters
 
@bbgeek17 Thank you for the suggestion. The code entered returned the error -bash: awk{print $1}: command not found (which suggests the command was mangled when typed or pasted, losing the space and quotes around the awk program).

However, using your code I was able to tease out the correct way to remove a snapshot from the CLI:

lvs                                          (lists snapshots on the VG, named snap_*)
qm listsnapshot <VM number>                  (lists the snapshot name after the -> marker)
qm delsnapshot <VM number> <snapshot name>

This will free up valuable space on LVM-thin when Data% is at 100 and you are receiving I/O errors.
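
A worked example with the snapshot from my first post (output abridged and illustrative); the key point is that qm delsnapshot takes the snapshot name that qm listsnapshot prints after the ->, not the LV name that lvs shows:

lvs | grep snap_
  snap_vm-105-disk-0_pfSense20210621 pve Vri---tz-k 32.00g data
qm listsnapshot 105
  `-> pfSense20210621    no description
qm delsnapshot 105 pfSense20210621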

Now I need to start a new Topic on the

"EXT4-fs error (device dm-15): ext4_mb_generate_buddy:747: group 1, block bitmap and bg descriptor inconsistent: 2616 vs 613 free clusters"

error that I caused by running lvresize on the VG/LV. :oops:
 
Hello!

I have just recently run into a similar problem. See output below.
Any advice on how to delete the snapshots dckr01 and j4?

The auto snapshots are readily deletable from both the GUI and the CLI; the two mentioned above, which I took manually before making some tweaks, cannot be deleted from either.

root@cyndane5:~# for v in $(qm list|egrep -v VMID|awk '{print $1}');do for s in $(qm listsnapshot $v|egrep -v current|awk '{print $2}');do echo qm delsnapshot $v $s;done;done
qm delsnapshot 112 dckr01
qm delsnapshot 112 autodaily250314030046
qm delsnapshot 112 autoweekly250316033135
qm delsnapshot 113 j4
qm delsnapshot 113 autodaily250314030051
qm delsnapshot 113 autoweekly250316033143

root@cyndane5:~# qm delsnapshot 112 dckr01
lvremove snapshot 'pve/snap_vm-112-disk-0_dckr01' error: Failed to find logical volume "pve/snap_vm-112-disk-0_dckr01"

root@cyndane5:~# qm delsnapshot 113 j4
lvremove snapshot 'pve/snap_vm-113-disk-0_j4' error: Failed to find logical volume "pve/snap_vm-113-disk-0_j4"
root@cyndane5:~#


Thanks for any suggestions on how to proceed!
 
You can carefully edit the VM config file to remove snapshot information, since those volumes appear to have been deleted:
/etc/pve/qemu-server/113.conf

Make sure to make a backup of the file first.
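
As a sketch of what to remove (the layout below is illustrative, not the actual contents of this user's file): each snapshot lives in its own [name] section of the config, and a parent: line may point at it.

cp /etc/pve/qemu-server/113.conf /root/113.conf.bak    (backup first)

Illustrative 113.conf:

cores: 2
memory: 4096
parent: j4
scsi0: local-lvm:vm-113-disk-0,size=32G

[j4]
cores: 2
memory: 4096
scsi0: local-lvm:vm-113-disk-0,size=32G
snaptime: 1741921600

Delete the whole [j4] section (from its header down to the next [section] or the end of the file) and remove or repoint any parent: line that references it. Alternatively, qm delsnapshot has a --force option that drops the snapshot from the config even when removing the disk snapshot fails, which avoids hand-editing:

qm delsnapshot 113 j4 --force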

That simple?
Thanks, will try!

Also, when looking for the actual snapshot volume I couldn't find it on the node.
Would this indicate that the snapshot is actually gone but remains in 113.conf only for some reason?
 