[solved] Remove Orphaned Snapshot?

Kafoof

Active Member
Oct 17, 2018
Hi,

We had a small outage due to an OSD becoming full on Ceph, which happened during a clone of a VM.
This particular VM has a series of snapshots that I am trying to delete, but when I issue the delete command I get the error:
Code:
TASK ERROR: VM 130 qmp command 'blockdev-snapshot-delete-internal-sync' failed - Snapshot with id 'null' and name 'everythingwrong' does not exist on device 'drive-scsi1'

However, when checking on the Ceph side, I can see the snapshots are no longer in the pool:
Code:
[root@pm04]:~# rbd list --pool CEPH-pool1 | grep 130
vm-130-disk-0
vm-130-disk-1
vm-130-disk-2
vm-130-disk-3
vm-130-disk-4
vm-130-disk-5
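(For reference, `rbd list` only shows image names, not their snapshots. A per-image snapshot check would look something like the sketch below, using the pool name from above; the VM ID filter is an assumption from this thread.)

```shell
# Sketch: list any remaining snapshots per image. `rbd list` alone does
# not show snapshots, so check each image with `rbd snap ls`.
POOL=CEPH-pool1
for img in $(rbd list --pool "$POOL" | grep '^vm-130-'); do
    echo "--- $img ---"
    rbd snap ls "$POOL/$img"
done
```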
It appears that the VM configuration is not consistent with what's in the Ceph pool.
Is there a way to force-remove the orphaned snapshots?

Thanks in advance!

PVE Versions below
Code:
[root@tat-srvpm04]:~# pveversion  -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.18-1-pve: 5.3.18-1
pve-kernel-5.3.13-3-pve: 5.3.13-3
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-4-pve: 5.0.21-9
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
Sorry for the post, I found the answer
Code:
qm delsnapshot 130 prerestart --force
qm delsnapshot 130 aftermanualcheck --force
qm delsnapshot 130 aftermanualwork --force
qm delsnapshot 130 premigration --force
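For anyone hitting this with many snapshots, the stale names can be read straight from the VM config and force-deleted in a loop. This is a sketch, assuming the standard PVE config path; review the echoed commands before actually running them:

```shell
# Sketch: enumerate snapshot section names ("[name]" headers) in the VM
# config and print the matching force-delete commands. VM ID 130 is from
# this thread; the [PENDING] section is skipped since it is not a snapshot.
VMID=130
CONF=/etc/pve/qemu-server/$VMID.conf
grep -oP '^\[\K[^\]]+' "$CONF" | grep -v '^PENDING$' | while read -r snap; do
    echo "qm delsnapshot $VMID $snap --force"
done
```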
 
Great, that does the trick indeed; it just threw warnings like
Code:
lvremove snapshot 'pve/snap_vm-101-disk-0_premigration' error:   Failed to find logical volume "pve/snap_vm-101-disk-0_premigration"
which confirms the orphaned snapshots were indeed the problem.
 
