Backup of VM failed - CT is locked (snapshot-delete)

pcao

Member
Hello all,

For some time now I have had a problem with the backup of one of my CTs.
The log says:

Code:
104: 2021-11-22 05:23:39 INFO: Starting Backup of VM 104 (lxc)
104: 2021-11-22 05:23:39 INFO: status = running
104: 2021-11-22 05:23:39 ERROR: Backup of VM 104 failed - CT is locked (snapshot-delete)

I tried some commands like pct unlock 104 and pct delsnapshot 104 vzdump (see all the commands below), but I still can't get my backup to run.
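
In case it matters, this is how I have been checking whether the lock is actually set, just a quick sketch assuming the standard /etc/pve/lxc config path:

Bash:
# A locked CT should have a line like "lock: snapshot-delete" in its config
grep '^lock' /etc/pve/lxc/104.conf

# Same information through the pct tool
pct config 104 | grep '^lock'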

I also found an old process from 21 Oct that is still trying to build a tar archive, but I can't kill it (not even with kill -9). Maybe it gets respawned by some other command? See the quick check I sketch right after the process listing below.

Bash:
pgrep -af 104
3844 [lxc monitor] /var/lib/lxc 104
318164 tar cpf - --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs --xattrs-include=user.* --xattrs-include=security.capability --warning=no-file-ignored --warning=no-xattr-write --one-file-system --warning=no-file-ignored --directory=/var/lib/vz/dump/vzdump-lxc-104-2021_10_21-05_25_56.tmp ./etc/vzdump/pct.conf ./etc/vzdump/pct.fw --directory=/mnt/vzsnap0 --no-anchored --exclude=lost+found --anchored --exclude=./tmp/?* --exclude=./var/tmp/?* --exclude=./var/run/?*.pid ./
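
In case it helps, this is roughly how I plan to check whether that tar is stuck in uninterruptible I/O wait, which as far as I know would explain why kill -9 has no effect (PID and mountpoint taken from the pgrep output above):

Bash:
# A STAT of "D" (uninterruptible sleep) usually means the process is stuck on I/O,
# e.g. on the snapshot mount, and cannot be killed until that clears
ps -o pid,ppid,stat,wchan:30,cmd -p 318164

# Check whether the vzdump snapshot mountpoint from the tar command is still mounted
mount | grep vzsnap0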

If anyone has an idea ...

Patrick

More details:
Bash:
pct listsnapshot 104
vzdump               no-parent            vzdump backup snapshot
current              vzdump               You are here!

pct delsnapshot 104 vzdump
CT is locked (snapshot-delete)

pct unlock 104

pct delsnapshot 104 vzdump
rbd: sysfs write failed
can't unmap rbd device /dev/rbd/mon_pool/vm-104-disk-1@vzdump: rbd: sysfs write failed

rbd snap ls mon_pool/vm-104-disk-1
SNAPID NAME    SIZE TIMESTAMP
  8764 vzdump 32GiB Thu Oct 21 05:25:58 2021

rbd snap rm mon_pool/vm-104-disk-1@vzdump
Removing snap: 100% complete...done.

rbd showmapped
id pool     image         snap   device
[...]
7  mon_pool vm-104-disk-1 vzdump /dev/rbd7

rbd unmap mon_pool/vm-104-disk-1@vzdump
rbd: sysfs write failed
rbd: unmap failed: (16) Device or resource busy
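
My rough plan for cleaning this up, which I have not dared to run yet on the production CT, is the following (device and mountpoint taken from the outputs above; I am not sure the --force flag for delsnapshot exists on 5.4, so please correct me if this is wrong):

Bash:
# 1) see what still holds the mapped device / snapshot mountpoint
mount | grep rbd7
fuser -vm /mnt/vzsnap0

# 2) if only the stale tar is in the way, unmount the snapshot mountpoint
umount /mnt/vzsnap0      # or "umount -l /mnt/vzsnap0" as a last resort

# 3) retry the unmap, then clean up the leftover snapshot reference in PVE
rbd unmap /dev/rbd7
pct unlock 104
pct delsnapshot 104 vzdump --force   # --force assumed: drop the config entry even though the rbd snap is already gone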

Context: Proxmox 5.4-15. I know it's very old, but a new cluster isn't free, so I have to stay on version 5 for now.
The CT is running and in production!