Progress on LXC backups

I'm running the first backup of a 400GB LXC. It starts off fine, and I can see traffic from the PVE node to PBS as I would expect, but after a while the traffic dies down to mere KBs while the backup is still running.

Since this is an LXC, there is no percentage shown in the task window (that only seems to appear when backing up a VM). Is there a way to monitor how much of the backup has completed, other than the network graph or a tool such as bmon, so that I can tell whether the backup has stalled?

Perhaps some kind of backup file size so far?
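As a stopgap, sampling the kernel's per-interface byte counters on the PVE node gives a rough upload rate without extra tools (a sketch; eth0 is an assumption, substitute the node's actual uplink):

Code:
# Print the node's upload rate once per second by sampling the kernel's
# tx byte counter (IFACE is an assumption -- use your actual interface).
IFACE=eth0
while true; do
    B1=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
    sleep 1
    B2=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
    echo "$(( (B2 - B1) / 1024 )) KiB/s"
done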
 
An update: the 400GB LXC backup finished some 12 hours later, but the reported size is 9TB, which is clearly incorrect.

I checked for external mount points within the LXC (there are none), and running df -h inside the LXC gives this output:

Code:
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/vgpool-vm--105--disk--1  393G  157G  218G  42% /
none                                 492K  4.0K  488K   1% /dev
tmpfs                                 32G     0   32G   0% /dev/shm
tmpfs                                 32G   81M   32G   1% /run
tmpfs                                5.0M   20K  5.0M   1% /run/lock
tmpfs                                 32G     0   32G   0% /sys/fs/cgroup
tmpfs                                6.3G     0  6.3G   0% /run/user/0

I'll try running a non-PBS backup to an NFS mount and report back if there is a size difference.
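Something like this should do for that test (a sketch; the storage ID nfsbackup is an assumption, use whatever your NFS storage is called in PVE):

Code:
# One-off vzdump of CT 105 to an NFS-backed storage instead of PBS
vzdump 105 --storage nfsbackup --mode snapshot --compress zstd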
 
This is the task log from the PBS LXC backup:

Code:
2020-09-24T18:17:54.069506593+01:00: starting new backup on datastore 'backup': "ct/105/2020-09-24T17:17:50Z"
2020-09-24T18:17:54.070230362+01:00: GET /previous: 400 Bad Request: no previous backup
2020-09-24T18:17:54.071226429+01:00: add blob "/mnt/backup/ct/105/2020-09-24T17:17:50Z/pct.conf.blob" (273 bytes, comp: 273)
2020-09-24T18:17:54.071899593+01:00: add blob "/mnt/backup/ct/105/2020-09-24T17:17:50Z/fw.conf.blob" (34 bytes, comp: 34)
2020-09-24T18:17:54.072335161+01:00: created new dynamic index 1 ("ct/105/2020-09-24T17:17:50Z/catalog.pcat1.didx")
2020-09-24T18:17:54.072407176+01:00: created new dynamic index 2 ("ct/105/2020-09-24T17:17:50Z/root.pxar.didx")
2020-09-25T06:26:48.058259936+01:00: Upload statistics for 'root.pxar.didx'
2020-09-25T06:26:48.058286287+01:00: UUID: 6aa2b0979e854d2883d1235f4df9c989
2020-09-25T06:26:48.058296058+01:00: Checksum: 6fe9b1658353fc6a2e83ec87c9bc2da403b549d16a6fcdafc062de8b75611bed
2020-09-25T06:26:48.058303707+01:00: Size: 9924546421879
2020-09-25T06:26:48.058311008+01:00: Chunk count: 616751
2020-09-25T06:26:48.058318396+01:00: Upload size: 158002785513 (1%)
2020-09-25T06:26:48.058325888+01:00: Duplicates: 583867+16727 (97%)
2020-09-25T06:26:48.058332948+01:00: Compression: 53%
2020-09-25T06:26:48.058351216+01:00: successfully closed dynamic index 2
2020-09-25T06:26:48.067000632+01:00: Upload statistics for 'catalog.pcat1.didx'
2020-09-25T06:26:48.067016218+01:00: UUID: c77bd5605b4941a9856c8baaa03a9c2e
2020-09-25T06:26:48.067025278+01:00: Checksum: d9bc314f200ebc22f85d2d1708d62e3008eaec12ab64c680794142bd3b877217
2020-09-25T06:26:48.067032945+01:00: Size: 63355766
2020-09-25T06:26:48.067040531+01:00: Chunk count: 109
2020-09-25T06:26:48.067047978+01:00: Upload size: 63355766 (100%)
2020-09-25T06:26:48.067055365+01:00: Duplicates: 0+22 (20%)
2020-09-25T06:26:48.067062805+01:00: Compression: 33%
2020-09-25T06:26:48.067101798+01:00: successfully closed dynamic index 1
2020-09-25T06:26:48.067732655+01:00: add blob "/mnt/backup/ct/105/2020-09-24T17:17:50Z/index.json.blob" (425 bytes, comp: 425)
2020-09-25T06:26:48.068358049+01:00: successfully finished backup
2020-09-25T06:26:48.068757663+01:00: backup finished successfully
2020-09-25T06:26:48.068787088+01:00: TASK OK
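
For reference, converting the raw byte counts from those upload statistics into human-readable units:

Code:
echo "scale=2; 9924546421879 / 1024^4" | bc   # 9.02  -> the ~9 TiB logical "Size"
echo "scale=2; 158002785513 / 1024^3" | bc    # 147.15 -> GiB actually uploaded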
 
Could you add the log from the vzdump/PVE side as well?
 
Code:
INFO: starting new backup job: vzdump 105 --storage PBS --mode snapshot --node is-74828 --remove 0
INFO: Starting Backup of VM 105 (lxc)
INFO: Backup started at 2020-09-24 18:17:50
INFO: status = running
INFO: CT Name: websrv05
INFO: including mount point rootfs ('/') in backup
INFO: found old vzdump snapshot (force removal)
  Logical volume "snap_vm-105-disk-1_vzdump" successfully removed
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "snap_vm-105-disk-1_vzdump" created.
  WARNING: Sum of all thin volume sizes (2.13 TiB) exceeds the size of thin pool vgpool/container and the size of whole volume group (<1.82 TiB).
INFO: creating Proxmox Backup Server archive 'ct/105/2020-09-24T17:17:50Z'
INFO: run: /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp24197/etc/vzdump/pct.conf fw.conf:/var/tmp/vzdumptmp24197/etc/vzdump/pct.fw root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 105 --backup-time 1600967870 --repository root@pam@10.0.0.1:backup
INFO: Starting backup: ct/105/2020-09-24T17:17:50Z
INFO: Client name: is-74828
INFO: Starting protocol: 2020-09-24T18:17:54+01:00
INFO: Upload config file '/var/tmp/vzdumptmp24197/etc/vzdump/pct.conf' to 'root@pam@10.0.0.1:backup' as pct.conf.blob
INFO: Upload config file '/var/tmp/vzdumptmp24197/etc/vzdump/pct.fw' to 'root@pam@10.0.0.1:backup' as fw.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@10.0.0.1:backup' as root.pxar.didx
INFO: root.pxar: had to upload 147.15 GiB of 9.03 TiB in 43733.99s, average speed 3.45 MiB/s).
INFO: root.pxar: backup was done incrementally, reused 8.88 TiB (98.4%)
INFO: Uploaded backup catalog (60.42 MiB)
INFO: Duration: PT43734.048018169S
INFO: End Time: 2020-09-25T06:26:48+01:00
INFO: remove vzdump snapshot
  Logical volume "snap_vm-105-disk-1_vzdump" successfully removed
INFO: Finished Backup of VM 105 (12:09:05)
INFO: Backup finished at 2020-09-25 06:26:55
INFO: Backup job finished successfully
TASK OK
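
Side note: these figures line up with the PBS upload statistics above (147.15 GiB uploaded out of 9.03 TiB logical), and the ISO-8601 duration converts to the reported wall time; the extra ~11 s in "Finished Backup (12:09:05)" is snapshot setup and teardown around the upload:

Code:
# PT43734S from the log, expressed as h:m:s
printf '%d:%02d:%02d\n' $((43734/3600)) $((43734%3600/60)) $((43734%60))   # 12:08:54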
 
Is there anything special about this container? Massive amounts of hardlinks? A large number of empty or really small files? pxar seems to generate 9TB of data, but most of that gets de-duplicated away again on the client side.
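The upload statistics above bear that out: of 616,751 chunks, 583,867+16,727 were already known, which is the 97% duplicate figure in the log:

Code:
echo "scale=3; (583867 + 16727) / 616751" | bc   # .973 -> the 97% "Duplicates" line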
 
Not that I can see.

I ran this command to list all hardlinked files on the LXC, and it returned around 3,100 files:

Code:
find / -type f -printf '%n %p\n' | awk '$1 > 1{$1="";print}'

To count empty files I ran:

Code:
find / -type f -empty

which returned around 247,000 files.
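Two more checks that seem worth trying along the same lines (sketches; -xdev is an assumption to keep find on the root filesystem, and the 4 KiB threshold is arbitrary):

Code:
# Group hardlinked files by inode so each link group is counted once:
find / -xdev -type f -links +1 -printf '%i %p\n' | sort -n | awk '{print $1}' | uniq -c | sort -rn | head

# Count really small files:
find / -xdev -type f -size -4k | wc -l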

Are there any other commands I could run to identify any potential issues?
 
If it helps, I did have an NFS mount of around 9TB, mounted from within the Proxmox GUI and mapped to /mnt/nfsbackup. I had to remove it manually from the PVE CT .conf file: after trying to remove it via the GUI, shutting down the CT, and restarting, the mount still showed in the GUI and couldn't be deleted.

Could this be a remnant of that?
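A quick way to check whether anything is left over in the config (pct is the standard PVE container CLI; 105 is this CT's ID):

Code:
# List any mount point entries still present in the container config:
pct config 105 | grep -E '^(rootfs|mp[0-9]+|unused[0-9]+):'
# Or inspect the raw config file directly:
cat /etc/pve/lxc/105.conf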
 
That sounds plausible.
 
