Hello
Environment: Debian 11 & Proxmox 7.0-11
I'm coming to you because I'm running into a backup problem in snapshot mode on my Proxmox server, but only for two LXC containers; the others are fine.
All of my containers are backed up every week without any issues.
Except this Saturday, two of my containers gave me the same error message:
Code:
vzdump 101 --mode snapshot --storage raid --remove 0 --compress zstd --node pve1
101: 2021-08-21 19:52:57 INFO: Starting Backup of VM 101 (lxc)
101: 2021-08-21 19:52:57 INFO: status = running
101: 2021-08-21 19:52:57 INFO: CT Name: Centreon
101: 2021-08-21 19:52:57 INFO: including mount point rootfs ('/') in backup
101: 2021-08-21 19:52:57 INFO: mode failure - some volumes do not support snapshots
101: 2021-08-21 19:52:57 INFO: trying 'suspend' mode instead
101: 2021-08-21 19:52:57 INFO: backup mode: suspend
101: 2021-08-21 19:52:57 INFO: ionice priority: 7
101: 2021-08-21 19:52:57 INFO: CT Name: Centreon
101: 2021-08-21 19:52:57 INFO: including mount point rootfs ('/') in backup
101: 2021-08-21 19:52:57 INFO: starting first sync /proc/1088/root/ to /var/lib/vz-raid//dump/vzdump-lxc-101-2021_08_21-19_52_57.tmp
101: 2021-08-21 19:55:30 INFO: first sync finished - transferred 18.05G bytes in 153s
101: 2021-08-21 19:55:30 INFO: suspending guest
101: 2021-08-21 19:55:32 INFO: starting final sync /proc/1088/root/ to /var/lib/vz-raid//dump/vzdump-lxc-101-2021_08_21-19_52_57.tmp
101: 2021-08-21 19:55:39 INFO: resume vm
101: 2021-08-21 19:55:39 INFO: guest is online again after 9 seconds
101: 2021-08-21 19:55:51 ERROR: Backup of VM 101 failed - command 'rsync --stats -h -X -A --numeric-ids -aH --delete --no-whole-file --inplace --one-file-system --relative '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' /proc/1088/root//./ /var/lib/vz-raid//dump/vzdump-lxc-101-2021_08_21-19_52_57.tmp' failed: exit code 23
The error line: 101: 2021-08-21 19:55:51 ERROR: Backup of VM 101 failed - command 'rsync --stats -h -X -A --numeric-ids -aH --delete --no-whole-file --inplace --one-file-system --relative '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' /proc/1088/root//./ /var/lib/vz-raid//dump/vzdump-lxc-101-2021_08_21-19_52_57.tmp' failed: exit code 23
For information, the storage is local and has more than 1 TB of free space.
I tested the other backup modes:
- snapshot (the one I use by default, which worked until this week)
- suspend (same result on these two containers -> KO)
- stop (backup OK on these two containers, but stopping them is unwanted for containers in production)
Actions tested, without success:
- I stopped the containers
- I rebooted the Proxmox server
- I tried another storage (NAS over NFS)
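One further test I can think of (a sketch with a hypothetical throwaway directory, assuming unreadable entries inside the container's rootfs could be what trips rsync): walking the tree that vzdump syncs and listing anything the backup process cannot read.

```shell
# Hypothetical diagnostic: list entries under a rootfs that are not readable,
# since unreadable files are a classic cause of rsync exit code 23.
# ROOT here is a throwaway stand-in -- on the host it would be the
# /proc/<init-pid>/root/ path shown in the vzdump log.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc"
echo conf > "$ROOT/etc/app.conf"
find "$ROOT" -xdev ! -readable -print   # GNU find; prints nothing if all readable
echo "scan finished"
rm -rf "$ROOT"
```

The `-xdev` flag mirrors the `--one-file-system` option that vzdump passes to rsync, so the scan covers the same set of files the backup would.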
The only change I can think of is the upgrade from Debian 10 to Debian 11 and from Proxmox 6 to Proxmox 7, which could be the cause of the problem... but why only these two containers, while the other 20 are OK?
If anyone has an idea, a test I could still run, or has already encountered this problem, I'd be interested in your ideas and feedback.
Thanks