I have an LXC container running on a Ceph pool. The container itself runs just fine, and it has a nightly backup scheduled. For the last week or so, that backup has kept failing with:
Code:
vzdump 60180 --mailnotification always --mode snapshot --mailto someone@example.com --quiet 1 --storage nas01-backup-7x --compress lzo
60180: 2018-08-29 01:00:02 INFO: Starting Backup of VM 60180 (lxc)
60180: 2018-08-29 01:00:02 INFO: status = running
60180: 2018-08-29 01:00:02 INFO: CT Name: my-lxc-container
60180: 2018-08-29 01:00:02 INFO: found old vzdump snapshot (force removal)
60180: 2018-08-29 01:00:02 INFO: backup mode: snapshot
60180: 2018-08-29 01:00:02 INFO: ionice priority: 7
60180: 2018-08-29 01:00:02 INFO: create storage snapshot 'vzdump'
60180: 2018-08-29 01:00:03 ERROR: Backup of VM 60180 failed - command 'mount -o ro,noload /dev/rbd2 /mnt/vzsnap0//' failed: exit code 32
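For reference, exit code 32 from mount(8) is the generic "mount failure" code; the kernel log usually names the real reason. The failing step can be reproduced by hand with something like this (just a sketch: it assumes the vzdump snapshot still exists and that /mnt/vzsnap0 is present, and the device path is whatever rbd map prints, not necessarily /dev/rbd2):
Code:
# map the leftover vzdump snapshot; rbd map prints the device it creates
DEV=$(rbd map ceph/vm-60180-disk-1@vzdump)
# retry the exact mount vzdump attempts
mount -o ro,noload "$DEV" /mnt/vzsnap0
# the kernel log usually shows why the mount was refused (e.g. ext4 journal errors)
dmesg | tail -n 20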
If I list the snapshots of that container, I can see the leftover snapshot:
Code:
# rbd --pool ceph snap ls vm-60180-disk-1
SNAPID NAME       SIZE TIMESTAMP
    63 vzdump 65536 MB Wed Aug 29 01:00:03 2018
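One thing I haven't ruled out is a stale mapping left over from the failed run. If that's the problem, something like this should show and clear it (a sketch; /dev/rbd2 is just the device from the log above and may differ between runs):
Code:
# list the rbd devices currently mapped on this node
rbd showmapped
# unmap the stale snapshot mapping so the next backup can map it cleanly
rbd unmap /dev/rbd2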
Even if I clean up the snapshot with:
Code:
# rbd snap rm ceph/vm-60180-disk-1@vzdump
... it doesn't help.
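If removing the single snapshot keeps leaving debris behind, purging is another thing I could try (a sketch; note that rbd snap purge deletes every snapshot of the image, which is only safe here because vzdump is the only one):
Code:
# remove all snapshots of the container disk in one go
rbd snap purge ceph/vm-60180-disk-1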
If I run a manual backup to the same storage backend (the same NAS, just a different mount point), it generally works, so I'm a bit baffled as to why the scheduled one fails.
Does anyone have suggestions or ideas about what's going wrong?