Hi,
in the last few weeks, I would say since the update to 5.1, I have been seeing the following behaviour: backups of containers on Ceph are made to an NFS storage, and almost daily a backup fails with the first error below.
So I delete the snapshot, but the next time vzdump wants to run a backup it fails with the second error below. When I move the CT to another node, its backup succeeds; if I move the CT back to the previous node, I get the same error as before. How can I get rid of this?
The daily error:
Code:
rbd snapshot 'vm-208-disk-2' error: rbd: failed to create snapshot: (17) File exists
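For reference, deleting the leftover vzdump snapshot by hand looks like this (a sketch; the image name is taken from the error above, but the pool name `rbd` is an assumption, so check your storage.cfg for the real pool):

```shell
# List snapshots on the affected image (pool name "rbd" is an assumption).
rbd snap ls rbd/vm-208-disk-2

# Remove the stale "vzdump" snapshot left behind by the failed backup.
rbd snap rm rbd/vm-208-disk-2@vzdump
```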
And after deleting the snapshot, the next vzdump run logs:
Code:
INFO: starting new backup job: vzdump 208 --storage Backup --node ceph6 --mode snapshot --compress lzo --remove 0
INFO: Starting Backup of VM 208 (lxc)
INFO: status = running
INFO: CT Name: www.example.at
INFO: found old vzdump snapshot (force removal)
rbd: sysfs write failed
can't unmap rbd volume vm-208-disk-2: rbd: sysfs write failed
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
mount: /dev/rbd11 is already mounted or /mnt/vzsnap0 busy
umount: /mnt/vzsnap0/: not mounted
command 'umount -l -d /mnt/vzsnap0/' failed: exit code 32
ERROR: Backup of VM 208 failed - command 'mount -o ro,noload /dev/rbd11 /mnt/vzsnap0//' failed: exit code 32
INFO: Backup job finished with errors
TASK ERROR: job errors
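Judging by the "already mounted" and "can't unmap" lines in the log, a stale mount and a stale rbd mapping are left on the node. The manual cleanup I attempt looks like this (a sketch; `/dev/rbd11` is taken from the log above and may differ on your node):

```shell
# Unmount the leftover vzdump mountpoint, if anything is still mounted there.
umount /mnt/vzsnap0

# Show which rbd devices are currently mapped on this node,
# to confirm which device belongs to vm-208-disk-2.
rbd showmapped

# Unmap the stale device (device name taken from the log; verify it first).
rbd unmap /dev/rbd11
```

After that, the next vzdump run can map and mount the snapshot cleanly again.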