LXC backup stuck on "starting final sync"

Por12

Hi,

I'm having issues with a new LXC I have created. I am trying to back it up to an HDD connected to the same machine. The backup process starts fine and finishes the first file sync, but then gets stuck on this step: INFO: starting final sync /proc/31416/root/ to /var/tmp/vzdumptmp1232345_304.

Here's an example. In this particular case I cancelled the backup, but yesterday it was stuck at this step for more than 4 hours (until I woke up and cancelled it).

Any ideas on what could be happening?

Thanks

Code:
INFO: starting new backup job: vzdump 304 --compress zstd --mode snapshot --remove 0 --notes-template '{{guestname}}' --node zeus --storage local-backups-ferraz
INFO: filesystem type on dumpdir is 'zfs' -using /var/tmp/vzdumptmp1232345_304 for temporary files
INFO: Starting Backup of VM 304 (lxc)
INFO: Backup started at 2023-09-14 13:07:27
INFO: status = running
INFO: CT Name: frigate
INFO: including mount point rootfs ('/') in backup
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: CT Name: frigate
INFO: including mount point rootfs ('/') in backup
INFO: starting first sync /proc/31416/root/ to /var/tmp/vzdumptmp1232345_304
INFO: first sync finished - transferred 3.34G bytes in 29s
INFO: suspending guest
INFO: starting final sync /proc/31416/root/ to /var/tmp/vzdumptmp1232345_304
INFO: resume vm
INFO: guest is online again after 14 seconds
ERROR: Backup of VM 304 failed - command 'rsync --stats -h -X -A --numeric-ids -aH --delete --no-whole-file --sparse --one-file-system --relative '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' /proc/31416/root//./ /var/tmp/vzdumptmp1232345_304' failed: interrupted by signal
INFO: Failed at 2023-09-14 13:08:12
INFO: Backup job finished with errors
TASK ERROR: job errors
 
I am experiencing the same issue on a just-created LXC container.

Running the latest PVE 8:
Code:
proxmox-ve: 8.0.2 (running kernel: 6.2.16-15-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
pve-kernel-5.15: 7.4-4
pve-kernel-5.13: 7.1-9
proxmox-kernel-6.2.16-15-pve: 6.2.16-15
proxmox-kernel-6.2: 6.2.16-15
proxmox-kernel-6.2.16-14-pve: 6.2.16-14
pve-kernel-6.2.16-5-pve: 6.2.16-6
pve-kernel-5.15.108-1-pve: 5.15.108-2
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.60-2-pve: 5.15.60-2
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.39-2-pve: 5.15.39-2
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-5-pve: 5.13.19-13
pve-kernel-5.13.19-4-pve: 5.13.19-9
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
gfs2-utils: 3.5.0-2
glusterfs-client: 10.3-5
ifupdown: not correctly installed
ifupdown2: 3.2.0-1+pmx5
libjs-extjs: 7.0.0-4
libknet1: 1.26-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.5
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.9
libpve-guest-common-perl: 5.0.5
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.3-1
proxmox-backup-file-restore: 3.0.3-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.9
pve-cluster: 8.0.4
pve-container: 5.0.4
pve-docs: 8.0.5
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.8-2
pve-ha-manager: 4.0.2
pve-i18n: 3.0.7
pve-qemu-kvm: 8.0.2-6
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.7
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.13-pve1

I aborted the backup on the new LXC, but the behaviour was the same:
Code:
INFO: status = running
INFO: CT Name: docker.mydomain.tld
INFO: including mount point rootfs ('/') in backup
INFO: mode failure - some volumes do not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: CT Name: docker.mydomain.tld
INFO: including mount point rootfs ('/') in backup
INFO: starting first sync /proc/1492511/root/ to /var/tmp/vzdumptmp1633458_1006
INFO: first sync finished - transferred 1.45G bytes in 18s
INFO: suspending guest
INFO: starting final sync /proc/1492511/root/ to /var/tmp/vzdumptmp1633458_1006
INFO: resume vm
INFO: guest is online again after 1925 seconds
ERROR: Backup of VM 1006 failed - command 'rsync --stats -h -X -A --numeric-ids -aH --delete --no-whole-file --sparse --one-file-system --relative '--exclude=/tmp/?*' '--exclude=/var/tmp/?*' '--exclude=/var/run/?*.pid' /proc/1492511/root//./ /var/tmp/vzdumptmp1633458_1006' failed: interrupted by signal
INFO: Failed at 2023-10-14 01:06:34
INFO: filesystem type on dumpdir is 'gfs2' -using /var/tmp/vzdumptmp1633458_1008 for temporary files
 
Turns out it was the FUSE feature being enabled on the LXC.
As soon as I turned it off (or shut down the affected guest), backups went fine. I found this after reading forum posts and the docs about the known issues with that feature.
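
For anyone else hitting this, here is roughly how I disabled the feature (a sketch: adjust the CT ID, 1006 from my log above, and note that setting --features replaces the whole features line, so re-add any other features you had there, e.g. nesting=1):

Code:
# stop the container, disable the FUSE feature, then start it again
pct stop 1006
pct set 1006 --features fuse=0   # or edit /etc/pve/lxc/1006.conf and drop 'fuse=1' from the 'features:' line
pct start 1006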

For reference as to where I got my answer: this forum post.

In my case it is a Docker LXC based on a Rocky Linux 9.2 template.
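
If you want to check which containers on a node still have the feature enabled, something like this should do it (assuming the standard config location):

Code:
# list container configs that enable the FUSE feature
grep -H 'fuse=1' /etc/pve/lxc/*.conf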
 