Huge size of LXC backup using Proxmox Backup

oleg.v
Jan 4, 2022
Environment:
Bash:
$ pveversion
pve-manager/6.4-13/9f411e79 (running kernel: 5.4.143-1-pve)

# proxmox-backup-manager versions
proxmox-backup-server 2.1.2-1 running version: 2.1.2

Replication process (from one pve node to another):
The size is about 1.29G and the time ~20 seconds.
Code:
2022-01-04 11:53:35 shutdown CT 502
2022-01-04 11:53:37 starting migration of CT 502 to node 'app3' (10.110.1.23)
2022-01-04 11:53:38 found local volume 'lxc-disks:subvol-502-disk-0' (in current VM config)
2022-01-04 11:53:39 full send of main-pool/lxc-disks/subvol-502-disk-0@__migration__ estimated size is 1.29G
2022-01-04 11:53:39 total estimated size is 1.29G
2022-01-04 11:53:41 TIME        SENT   SNAPSHOT main-pool/lxc-disks/subvol-502-disk-0@__migration__
2022-01-04 11:53:41 11:53:41   70.4M   main-pool/lxc-disks/subvol-502-disk-0@__migration__
2022-01-04 11:53:42 11:53:42    485M   main-pool/lxc-disks/subvol-502-disk-0@__migration__
2022-01-04 11:53:43 11:53:43    809M   main-pool/lxc-disks/subvol-502-disk-0@__migration__
2022-01-04 11:53:44 11:53:44   1.12G   main-pool/lxc-disks/subvol-502-disk-0@__migration__
2022-01-04 11:53:49 successfully imported 'lxc-disks:subvol-502-disk-0'
2022-01-04 11:53:51 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=app3' root@10.110.1.23 pvesr set-state 502 \''{}'\'
2022-01-04 11:53:52 start final cleanup
2022-01-04 11:53:53 start container on target node
2022-01-04 11:53:53 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=app3' root@10.110.1.23 pct start 502
2022-01-04 11:53:56 migration finished successfully (duration 00:00:21)
TASK OK


Backup process:
The size is about 127.29 GiB and the time ~390 seconds.
Code:
Container 502 (ipa-test) on node 'app4'
INFO: starting new backup job: vzdump 502 --storage backup1-private-all --mode snapshot --node app4 --remove 0
INFO: Starting Backup of VM 502 (lxc)
INFO: Backup started at 2022-01-04 15:48:00
INFO: status = running
INFO: CT Name: ipa-test
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
INFO: creating Proxmox Backup Server archive 'ct/502/2022-01-04T15:48:00Z'
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp33309_502/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 502 --backup-time 1641311280 --repository pbs_backup_user@pbs@backup1.myserver.com:private-all
INFO: Starting backup: ct/502/2022-01-04T15:48:00Z
INFO: Client name: app4
INFO: Starting backup protocol: Tue Jan  4 15:48:00 2022
INFO: No previous manifest available.
INFO: Upload config file '/var/tmp/vzdumptmp33309_502/etc/vzdump/pct.conf' to 'pbs_backup_user@pbs@backup1.myserver.com:8007:private-all' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'pbs_backup_user@pbs@backup1.myserver.com:8007:private-all' as root.pxar.didx
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
INFO: failed to open file: ".config": access denied
INFO: failed to open file: ".local": access denied
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
INFO: failed to open file: ".config": access denied
INFO: failed to open file: ".lesshst": access denied
INFO: failed to open file: ".local": access denied
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
INFO: root.pxar: had to backup 1.24 GiB of 128.53 GiB (compressed 391.15 MiB) in 389.49s
INFO: root.pxar: average backup speed: 3.26 MiB/s
INFO: root.pxar: backup was done incrementally, reused 127.29 GiB (99.0%)
INFO: Uploaded backup catalog (700.56 KiB)
INFO: Duration: 389.52s
INFO: End Time: Tue Jan  4 15:54:30 2022
INFO: cleanup temporary 'vzdump' snapshot
INFO: Finished Backup of VM 502 (00:06:32)
INFO: Backup finished at 2022-01-04 15:54:32
Result: {
  "data": null
}
INFO: Backup job finished successfully
TASK OK


When the container disk is a block device, or a file on a directory or NFS storage, everything is OK.


Container disk configuration:

Code:
$ df -h
Filesystem                             Size  Used Avail Use% Mounted on
main-pool/lxc-disks/subvol-502-disk-0  8.0G  1.4G  6.7G  18% /
none                                   492K  4.0K  488K   1% /dev
udev                                   126G     0  126G   0% /dev/tty
tmpfs                                  126G     0  126G   0% /dev/shm
tmpfs                                   26G  120K   26G   1% /run
tmpfs                                  5.0M     0  5.0M   0% /run/lock
tmpfs                                  126G     0  126G   0% /sys/fs/cgroup

[screenshot attached]


Result on the backup server:
[screenshot attached]




ZFS dataset configuration:
Code:
$ zfs get all main-pool/lxc-disks/subvol-502-disk-0
NAME                                   PROPERTY              VALUE                                   SOURCE
main-pool/lxc-disks/subvol-502-disk-0  type                  filesystem                              -
main-pool/lxc-disks/subvol-502-disk-0  creation              Tue Jan  4 14:23 2022                   -
main-pool/lxc-disks/subvol-502-disk-0  used                  1.37G                                   -
main-pool/lxc-disks/subvol-502-disk-0  available             6.63G                                   -
main-pool/lxc-disks/subvol-502-disk-0  referenced            1.37G                                   -
main-pool/lxc-disks/subvol-502-disk-0  compressratio         1.00x                                   -
main-pool/lxc-disks/subvol-502-disk-0  mounted               yes                                     -
main-pool/lxc-disks/subvol-502-disk-0  quota                 none                                    default
main-pool/lxc-disks/subvol-502-disk-0  reservation           none                                    default
main-pool/lxc-disks/subvol-502-disk-0  recordsize            128K                                    default
main-pool/lxc-disks/subvol-502-disk-0  mountpoint            /main-pool/lxc-disks/subvol-502-disk-0  inherited from main-pool/lxc-disks
main-pool/lxc-disks/subvol-502-disk-0  sharenfs              off                                     default
main-pool/lxc-disks/subvol-502-disk-0  checksum              on                                      default
main-pool/lxc-disks/subvol-502-disk-0  compression           off                                     inherited from main-pool
main-pool/lxc-disks/subvol-502-disk-0  atime                 off                                     inherited from main-pool
main-pool/lxc-disks/subvol-502-disk-0  devices               on                                      default
main-pool/lxc-disks/subvol-502-disk-0  exec                  on                                      default
main-pool/lxc-disks/subvol-502-disk-0  setuid                on                                      default
main-pool/lxc-disks/subvol-502-disk-0  readonly              off                                     default
main-pool/lxc-disks/subvol-502-disk-0  zoned                 off                                     default
main-pool/lxc-disks/subvol-502-disk-0  snapdir               hidden                                  default
main-pool/lxc-disks/subvol-502-disk-0  aclmode               discard                                 default
main-pool/lxc-disks/subvol-502-disk-0  aclinherit            restricted                              default
main-pool/lxc-disks/subvol-502-disk-0  createtxg             5085903                                 -
main-pool/lxc-disks/subvol-502-disk-0  canmount              on                                      default
main-pool/lxc-disks/subvol-502-disk-0  xattr                 sa                                      received
main-pool/lxc-disks/subvol-502-disk-0  copies                1                                       default
main-pool/lxc-disks/subvol-502-disk-0  version               5                                       -
main-pool/lxc-disks/subvol-502-disk-0  utf8only              off                                     -
main-pool/lxc-disks/subvol-502-disk-0  normalization         none                                    -
main-pool/lxc-disks/subvol-502-disk-0  casesensitivity       sensitive                               -
main-pool/lxc-disks/subvol-502-disk-0  vscan                 off                                     default
main-pool/lxc-disks/subvol-502-disk-0  nbmand                off                                     default
main-pool/lxc-disks/subvol-502-disk-0  sharesmb              off                                     default
main-pool/lxc-disks/subvol-502-disk-0  refquota              8G                                      received
main-pool/lxc-disks/subvol-502-disk-0  refreservation        none                                    default
main-pool/lxc-disks/subvol-502-disk-0  guid                  2969520457669874753                     -
main-pool/lxc-disks/subvol-502-disk-0  primarycache          all                                     default
main-pool/lxc-disks/subvol-502-disk-0  secondarycache        all                                     default
main-pool/lxc-disks/subvol-502-disk-0  usedbysnapshots       272K                                    -
main-pool/lxc-disks/subvol-502-disk-0  usedbydataset         1.37G                                   -
main-pool/lxc-disks/subvol-502-disk-0  usedbychildren        0B                                      -
main-pool/lxc-disks/subvol-502-disk-0  usedbyrefreservation  0B                                      -
main-pool/lxc-disks/subvol-502-disk-0  logbias               latency                                 default
main-pool/lxc-disks/subvol-502-disk-0  objsetid              20001                                   -
main-pool/lxc-disks/subvol-502-disk-0  dedup                 off                                     default
main-pool/lxc-disks/subvol-502-disk-0  mlslabel              none                                    default
main-pool/lxc-disks/subvol-502-disk-0  sync                  standard                                inherited from main-pool
main-pool/lxc-disks/subvol-502-disk-0  dnodesize             auto                                    inherited from main-pool
main-pool/lxc-disks/subvol-502-disk-0  refcompressratio      1.00x                                   -
main-pool/lxc-disks/subvol-502-disk-0  written               272K                                    -
main-pool/lxc-disks/subvol-502-disk-0  logicalused           1.29G                                   -
main-pool/lxc-disks/subvol-502-disk-0  logicalreferenced     1.29G                                   -
main-pool/lxc-disks/subvol-502-disk-0  volmode               default                                 default
main-pool/lxc-disks/subvol-502-disk-0  filesystem_limit      none                                    default
main-pool/lxc-disks/subvol-502-disk-0  snapshot_limit        none                                    default
main-pool/lxc-disks/subvol-502-disk-0  filesystem_count      none                                    default
main-pool/lxc-disks/subvol-502-disk-0  snapshot_count        none                                    default
main-pool/lxc-disks/subvol-502-disk-0  snapdev               hidden                                  default
main-pool/lxc-disks/subvol-502-disk-0  acltype               posix                                   local
main-pool/lxc-disks/subvol-502-disk-0  context               none                                    default
main-pool/lxc-disks/subvol-502-disk-0  fscontext             none                                    default
main-pool/lxc-disks/subvol-502-disk-0  defcontext            none                                    default
main-pool/lxc-disks/subvol-502-disk-0  rootcontext           none                                    default
main-pool/lxc-disks/subvol-502-disk-0  relatime              on                                      inherited from main-pool
main-pool/lxc-disks/subvol-502-disk-0  redundant_metadata    all                                     default
main-pool/lxc-disks/subvol-502-disk-0  overlay               on                                      default
main-pool/lxc-disks/subvol-502-disk-0  encryption            off                                     default
main-pool/lxc-disks/subvol-502-disk-0  keylocation           none                                    default
main-pool/lxc-disks/subvol-502-disk-0  keyformat             none                                    default
main-pool/lxc-disks/subvol-502-disk-0  pbkdf2iters           0                                       default
main-pool/lxc-disks/subvol-502-disk-0  special_small_blocks  0                                       default


I'm also worried about the number of error messages (why did they appear?):
Code:
INFO: Upload directory '/mnt/vzsnap0' to 'pbs_backup_user@pbs@backup1.myserver.com:8007:private-all' as root.pxar.didx
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
INFO: failed to open file: ".config": access denied
INFO: failed to open file: ".local": access denied
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
INFO: failed to open file: ".config": access denied
INFO: failed to open file: ".lesshst": access denied
INFO: failed to open file: ".local": access denied
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
 
Hi,
my guess is that you have a large sparse file within the container.
Code:
INFO: root.pxar: had to backup 1.24 GiB of 128.53 GiB (compressed 391.15 MiB) in 389.49s
INFO: root.pxar: average backup speed: 3.26 MiB/s
INFO: root.pxar: backup was done incrementally, reused 127.29 GiB (99.0%)
INFO: Uploaded backup catalog (700.56 KiB)
INFO: Duration: 389.52s
As you can see, it only had to back up 1.24 GiB, but I think proxmox-backup-client currently does not handle sparse files efficiently, so it read the whole file, which took time.
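For illustration: a sparse file's apparent size (what `ls` shows, and what a file-level backup has to walk through) can be far larger than the space it actually occupies. A quick demonstration on a scratch file (the path is arbitrary):

```shell
# Create a 10 GiB sparse file: the apparent size is 10G,
# but almost no blocks are actually allocated.
truncate -s 10G /tmp/sparse-demo

ls -lh /tmp/sparse-demo   # reports the apparent size (10G)
du -h /tmp/sparse-demo    # reports the allocated size (~0)

# stat shows both numbers at once:
# %s = apparent size in bytes, %b = allocated 512-byte blocks
stat --format='apparent=%s bytes, allocated=%b blocks' /tmp/sparse-demo

rm /tmp/sparse-demo
```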

As for the access errors, what are the permissions on these files?
 
But it doesn't happen for replication.
Couldn't it be that the backup takes some of these mounts into account?

Code:
udev                                   126G     0  126G   0% /dev/tty
tmpfs                                  126G     0  126G   0% /dev/shm
tmpfs                                   26G  120K   26G   1% /run
tmpfs                                  5.0M     0  5.0M   0% /run/lock
tmpfs                                  126G     0  126G   0% /sys/fs/cgroup

What do you mean by "sparse file"?
Also take into account that `zfs get all` says only about 1.37 GB.
It's a simple new container for testing. Nothing special.

What can I do to solve the problem?

 
And what about the file permissions? Why does that matter for the backup? These files are inside the container and have permissions for users inside the container itself.
 
But it doesn't happen for replication.
Couldn't it be that the backup takes some of these mounts into account?

Code:
udev                                   126G     0  126G   0% /dev/tty
tmpfs                                  126G     0  126G   0% /dev/shm
tmpfs                                   26G  120K   26G   1% /run
tmpfs                                  5.0M     0  5.0M   0% /run/lock
tmpfs                                  126G     0  126G   0% /sys/fs/cgroup
No, these shouldn't be included in the backup.

It's a simple new container for testing. Nothing special.

What can I do to solve the problem?
The real solution is to improve handling of sparse files. Feel free to open a feature request on our bug tracker.

A workaround is to remove or re-create the problematic file(s) as non-sparse. To find them, you can try:
Code:
find /main-pool/lxc-disks/subvol-502-disk-0 -type f -printf "%S\t%p\n" | awk '$1 < 0.5 {print}'
with the first column being the ratio of used space.

Example from my machine:
Code:
root@pve701 ~ # find /myzpool/subvol-128-disk-0/ -type f -printf "%S\t%p\n" | awk '$1 < 0.5 {print}'
0.257143        /myzpool/subvol-128-disk-0/lib/apk/db/scripts.tar
0.484555        /myzpool/subvol-128-disk-0/lib/libeinfo.so.1
0.474695        /myzpool/subvol-128-disk-0/lib/rc/bin/kill_all
0.331034        /myzpool/subvol-128-disk-0/lib/rc/bin/shell_var
0.325792        /myzpool/subvol-128-disk-0/lib/rc/sbin/rc-abort
0.482509        /myzpool/subvol-128-disk-0/lib/rc/sbin/swclock
0.327645        /myzpool/subvol-128-disk-0/bin/bbsuid
0.485732        /myzpool/subvol-128-disk-0/usr/bin/iconv
0.446606        /myzpool/subvol-128-disk-0/usr/bin/getconf
0.4828  /myzpool/subvol-128-disk-0/usr/lib/engines-1.1/padlock.so
0.332564        /myzpool/subvol-128-disk-0/usr/lib/engines-1.1/capi.so
0.329331        /myzpool/subvol-128-disk-0/sbin/mkmntdirs
4.76837e-09     /myzpool/subvol-128-disk-0/root/sparse
root@pve701 ~ # ls -lh /myzpool/subvol-128-disk-0/root/sparse
-rw-r--r-- 1 100000 100000 100G Jan  5 09:09 /myzpool/subvol-128-disk-0/root/sparse
root@pve701 ~ # du -h /myzpool/subvol-128-disk-0/root/sparse
512     /myzpool/subvol-128-disk-0/root/sparse

And what about file permissions. Why does it matter for backup? These files are inside the container and have permissions for user inside the container itself.
Because the backup is taken on the file-system level.
 
Great thanks for the help and your product!
The culprit was '/var/log/lastlog'. `ls` reported the space occupied by that file as ~130 GB.
I just had to truncate lastlog on all machines and containers.
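For reference, truncating frees the whole apparent size in one step. A sketch on a stand-in file (on the real systems the path would be /var/log/lastlog inside each container); later in this thread the file is also excluded from future backups via `exclude-path` in /etc/vzdump.conf:

```shell
# Stand-in path for /var/log/lastlog:
demo=/tmp/lastlog-demo

truncate -s 1G "$demo"   # simulate a huge sparse lastlog
truncate -s 0  "$demo"   # truncate to zero; the file gets recreated on the next login

ls -lh "$demo"
rm "$demo"
```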
 
Regarding the user permissions for backup:
What user is responsible for that on the PVE nodes?
Do I need to replicate my user intended for backups on the PVE nodes as well?
E.g. my user on the Proxmox Backup Server is 'pbs_backup_user', so do I have to recreate this account on PVE?
 
On Proxmox VE, backups are created by root. There is no inherent user management specifically for backups. The access control happens via Proxmox VE's API and user system (e.g. users can only access backups for VMs they can control and storages they can access).
 
Do these errors require fixing or not?
As I understood it, backups on PVE are made by root.

Code:
INFO: Upload directory '/mnt/vzsnap0' to 'pbs_backup_user@pbs@backup1.myserver.com:8007:private-all' as root.pxar.didx
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
INFO: failed to open file: ".config": access denied
INFO: failed to open file: ".local": access denied
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
INFO: failed to open file: ".config": access denied
INFO: failed to open file: ".lesshst": access denied
INFO: failed to open file: ".local": access denied
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied

Here are the permissions for the user 'pbs_backup_user@pbs' on backup1.myserver.com.
[screenshot attached]
 
Well, the files probably won't be in the backup if they cannot be accessed, but please check if they are! Maybe copying them still works and the error comes from some other access.

The permissions on the Proxmox Backup Server side shouldn't matter here: it uses chunks to represent files and metadata and doesn't copy them directly.

Could you try the following:
Code:
zfs snapshot main-pool/lxc-disks/subvol-502-disk-0@__testing__
mount -o ro -o noatime,acl -t zfs main-pool/lxc-disks/subvol-502-disk-0@__testing__ /some/mount/point
Can you access the mounted files? What are the owner/permissions/ACLs for the problematic files?

To unmount and destroy the snapshot again afterwards, use:
Code:
umount /some/mount/point
zfs destroy main-pool/lxc-disks/subvol-502-disk-0@__testing__
 
Regarding the problem with the file permissions:

Could the problem be in this line?

Code:
run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp534436_3023/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 3023 --backup-time 1650358777 --repository pbs_backup_user@pbs@backup1.myhost.com:test-machine

Especially in this part: -m u:0:100000:65536 -m g:0:100000:65536

What I mean:
Additional idmap configuration was set in the file /usr/share/lxc/config/common.conf, like this:
Code:
lxc.idmap = u 468000000 1000000000 1000000
lxc.idmap = g 468000000 1000000000 1000000

Inside the LXC, IPA is used for user authentication, and the files' uid/gid look like this:
Code:
$ ls -alhn /home
total 45K
drwxr-xr-x  5         0         0  5 Apr 19 08:57 .
drwxr-xr-x 22         0         0 22 Feb  4 17:15 ..
drwxr-xr-x  4      1000      1000 10 May 14  2021 user1
drwxr-xr-x  3 468000003 468000000  8 Jan 13 09:53 user2
drwxr-xr-x  3 468000021 468000000  7 Feb  4 16:39 user3
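For what it's worth, an idmap entry `u <container-start> <host-start> <count>` translates a container uid in `[container-start, container-start + count)` to `host-start + (uid - container-start)` on the host. A small sketch with the two mappings from this thread (the `host_uid` helper is hypothetical, just to show the arithmetic):

```shell
# Translate a container uid to the host uid under the two mappings:
#   u 0          100000      65536
#   u 468000000  1000000000  1000000
host_uid() {
  cuid=$1
  if [ "$cuid" -ge 0 ] && [ "$cuid" -lt 65536 ]; then
    echo $((100000 + cuid))
  elif [ "$cuid" -ge 468000000 ] && [ "$cuid" -lt $((468000000 + 1000000)) ]; then
    echo $((1000000000 + cuid - 468000000))
  else
    echo "unmapped"
  fi
}

host_uid 0          # container root           -> 100000
host_uid 468000003  # an IPA user (e.g. user2) -> 1000000003
```

Any container uid that falls outside every configured range (here: 65536..467999999) has no host representation, which is exactly when file access from outside the namespace fails.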


In general:
Could it be that the backup command should include the custom idmap mapping during backup?
Something like:
run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -m u:468000000:1000000000:1000000 -m g:468000000:1000000000:1000000
 
Regarding the problem with the file permissions:

Could the problem be in this line?

Code:
run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp534436_3023/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 3023 --backup-time 1650358777 --repository pbs_backup_user@pbs@backup1.myhost.com:test-machine

Especially in this part: -m u:0:100000:65536 -m g:0:100000:65536

What I mean:
Additional idmap configuration was set in the file /usr/share/lxc/config/common.conf, like this:
Code:
lxc.idmap = u 468000000 1000000000 1000000
lxc.idmap = g 468000000 1000000000 1000000
It seems like we might only consider the Proxmox VE-specific container configuration file (i.e. /etc/pve/lxc/<ID>.conf) when getting the ID maps for our commands. Could you try adding the lxc.idmap settings to that file directly (you might also need lines for the mappings -m u:0:100000:65536 -m g:0:100000:65536) and see if it works? If so, please open a feature request on our bug tracker linking back to this forum thread, so the issue can be tracked centrally.

 
I've tried to add these lines:

Code:
lxc.idmap: u 468000000 1000000000 1000000
lxc.idmap: g 468000000 1000000000 1000000
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536

The entire content of the file:
Code:
$ sudo cat /etc/pve/lxc/503.conf
arch: amd64
cores: 1
features: nesting=1
hostname: test-tmp
memory: 512
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=4A:5B:1F:64:96:6E,ip=dhcp,tag=103,type=veth
ostype: ubuntu
rootfs: lxc-disks:subvol-503-disk-0,acl=1,mountoptions=noatime,replicate=0,size=8G
swap: 512
unprivileged: 1

lxc.idmap: u 468000000 1000000000 1000000
lxc.idmap: g 468000000 1000000000 1000000
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536


But the container refused to start at all with the error:
Code:
lxc_map_ids: 3663 newuidmap failed to write mapping "newuidmap: write to uid_map failed: Invalid argument": newuidmap 4049536 468000000 1000000000 1000000 468000000 1000000000 1000000 0 100000 65536
lxc_spawn: 1785 Failed to set up id mapping.
__lxc_start: 2068 Failed to spawn container "503"
TASK ERROR: startup for container '503' failed

So it looks like PVE adds the lines from /usr/share/lxc/config/common.conf automatically.
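That matches the error: the newuidmap arguments above contain the 468000000 range twice (once injected from common.conf, once from the PVE config file), and the kernel rejects uid_map writes containing duplicate or overlapping ranges with EINVAL ("Invalid argument"). A hypothetical sanity check for duplicate triples:

```shell
# Check a list of "ns-start host-start count" idmap triples for duplicates,
# which the kernel would reject with "Invalid argument":
check_idmap() {
  if printf '%s\n' "$@" | sort | uniq -d | grep -q .; then
    echo "duplicate mapping"
    return 1
  fi
  echo "ok"
}

# The failing combination from the error message above:
check_idmap "468000000 1000000000 1000000" "468000000 1000000000 1000000" "0 100000 65536"
# -> duplicate mapping
```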
 
Hi,
Trying to restore lxc from a backup and getting an error

Error: Error extracting archive - Error writing '.Xauthority': Failed to set owner: Invalid argument (OS error 22)
TASK ERROR: unable to restore CT 556 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client restore --crypt-mode=none ct/4017/2022-07-22T10:39:18Z root.pxar /var/lib/lxc/556/rootfs --allow-existing-dirs --repository pbs_backup_user@pbs@backup2.us..com:private-all' failed: exit code 255

When creating a backup, this command is generated:

INFO: run: lxc-usernsexec -m u:468000000:1000000000:10000000 -m g:468000000:1000000000:1000000 -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp2560103_4017/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/var/log/lastlog --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 4017 --backup-time 1658486358 --repository pbs_backup_user@pbs@backup2.us.com:private-all
INFO: Starting backup: ct/4017/2022-07-22T10:39:18Z

How can I add the args -m u:468000000:1000000000:1000000 -m g:468000000:1000000000:1000000 to the restore command?

I think this is the problem
 
Hi,
please post the configuration included in the backup, the output of pveversion -v and the full log for the restore task. Are you root@pam when you try to restore the container or a different user?
 
Hi,



Code:
viktor.kalenyk@mb3:~$ sudo cat /etc/vzdump.conf
# vzdump default settings
#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#stdexcludes: BOOLEAN
#mailto: ADDRESSLIST
#prune-backups: keep-INTERVAL=N[,...]
#script: FILENAME
#exclude-path: PATHLIST
#pigz: N
exclude-path: "/var/log/lastlog"


I removed from /usr/share/lxc/config/common.conf the lines
Code:
lxc.idmap: u 468000000 1000000000 1000000
lxc.idmap: g 468000000 1000000000 1000000
and added the lxc.idmap parameters to the configuration of each LXC in /etc/pve/lxc/, for example:
Code:
viktor.kalenyk@mb3:/etc/pve/lxc$ sudo cat 3018.conf
arch: amd64
cores: 2
cpulimit: 0.5
hostname: mongo3
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=02:4C:0E:4A:88:77,ip=dhcp,ip6=dhcp,tag=107,type=veth
onboot: 1
ostype: ubuntu
rootfs: lxc-disks:subvol-3018-disk-0,acl=1,mountoptions=noatime,replicate=0,size=8G
startup: order=68,up=15,down=120
swap: 512
unprivileged: 1
lxc.idmap: u 468000000 1000000000 1000000
lxc.idmap: g 468000000 1000000000 1000000
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536



Code:
viktor.kalenyk@mb3:~$ sudo pveversion -v
proxmox-ve: 7.1-2 (running kernel: 5.13.19-6-pve)
pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)
pve-kernel-helper: 7.2-1
pve-kernel-5.13: 7.1-9
pve-kernel-5.13.19-6-pve: 5.13.19-15
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-7
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-2
openvswitch-switch: 2.15.0+ds1-2+deb11u1
proxmox-backup-client: 2.1.6-1
proxmox-backup-file-restore: 2.1.6-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-9
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-1
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-5
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1


I restore the container as a different user.
 
remove from /usr/share/lxc/config/common.conf lines lxc.idmap ... lxc.idmap parameters added to the configuration of each LXC in /etc/pve/lxc/ ...
Yes, the issue is most likely here. Proxmox VE is not aware of the settings in /usr/share/lxc/config/common.conf. Instead, you need to add them to each LXC configuration file individually. To restore the backup, you probably need to modify the configuration file within the backup to include this mapping.
 

Look: when I delete these lines from `/etc/pve/lxc/<lxcid>.conf`:

lxc.idmap: u 468000000 1000000000 1000000
lxc.idmap: g 468000000 1000000000 1000000
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536


then vzdump forms the following command:

INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client...

i.e. only with the default mapping:

-m u:0:100000:65536 -m g:0:100000:65536

and we get this result:

INFO: failed to open file: ".Xauthority": access denied
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
INFO: failed to open file: ".config": access denied
INFO: failed to open file: ".local": access denied
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
INFO: failed to open file: ".mysql_history": access denied

The backup simply skips these files.
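The reason is visible in the mapping arithmetic: `-m u:0:100000:65536` only translates IDs 0–65535, so files owned by IDs in the 468000000 range have no translation inside the user namespace, and the backup client cannot open them (and on restore, `chown` fails with EINVAL). A simplified sketch of that lookup, where `map_uid` is a hypothetical helper for illustration, not part of any Proxmox or LXC tooling:

```shell
# map_uid <uid> <ct_start> <host_start> <count>
# Translates a container UID to its host UID under one
# "-m u:<ct_start>:<host_start>:<count>" mapping entry;
# prints "unmapped" when the UID falls outside the range.
map_uid() {
  local uid=$1 ct_start=$2 host_start=$3 count=$4
  if [ "$uid" -ge "$ct_start" ] && [ "$uid" -lt $((ct_start + count)) ]; then
    echo $((host_start + uid - ct_start))
  else
    echo unmapped
  fi
}

map_uid 0         0 100000 65536                  # -> 100000
map_uid 1000      0 100000 65536                  # -> 101000
map_uid 468000000 0 100000 65536                  # -> unmapped (access denied)
map_uid 468000000 468000000 1000000000 1000000    # -> 1000000000 (extra mapping)
```

With only the default entry, ID 468000000 resolves to "unmapped"; the extra `-m u:468000000:1000000000:1000000` entry is what makes those files reachable.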

Here is the full backup log:

INFO: starting new backup job: vzdump 555 --storage backup2-private-all --remove 0 --node mb3 --mode snapshot
INFO: Starting Backup of VM 555 (lxc)
INFO: Backup started at 2022-07-27 09:53:25
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: rew-proxysql2test
INFO: including mount point rootfs ('/') in backup
INFO: creating Proxmox Backup Server archive 'ct/555/2022-07-27T09:53:25Z'
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp110507_555/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/var/log/lastlog --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 555 --backup-time 1658915605 --repository pbs_backup_user@pbs@backup2.us.com:private-all
INFO: Starting backup: ct/555/2022-07-27T09:53:25Z
INFO: Client name: mb3
INFO: Starting backup protocol: Wed Jul 27 09:53:25 2022
INFO: No previous manifest available.
INFO: Upload config file '/var/tmp/vzdumptmp110507_555/etc/vzdump/pct.conf' to 'pbs_backup_user@pbs@backup2.us.com:8007:private-all' as pct.conf.blob
INFO: Upload directory '/mnt/vzsnap0' to 'pbs_backup_user@pbs@backup2.com:8007:private-all' as root.pxar.didx
INFO: failed to open file: ".Xauthority": access denied
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
INFO: failed to open file: ".config": access denied
INFO: failed to open file: ".local": access denied
INFO: failed to open file: ".bash_history": access denied
INFO: failed to open file: ".cache": access denied
INFO: failed to open file: ".mysql_history": access denied
INFO: root.pxar: had to backup 1.391 GiB of 1.437 GiB (compressed 425.446 MiB) in 10.75s
INFO: root.pxar: average backup speed: 132.56 MiB/s
INFO: root.pxar: backup was done incrementally, reused 46.815 MiB (3.2%)
INFO: Uploaded backup catalog (715.332 KiB)
INFO: Duration: 10.78s
INFO: End Time: Wed Jul 27 09:53:36 2022
INFO: Finished Backup of VM 555 (00:00:11)
INFO: Backup finished at 2022-07-27 09:53:36
INFO: Backup job finished successfully
TASK OK



But we need these files from the home directories. When I add these lines back to `/etc/pve/lxc/<lxcid>.conf`:

lxc.idmap: u 468000000 1000000000 1000000
lxc.idmap: g 468000000 1000000000 1000000
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536


then the command is formed with all four mappings and everything is backed up correctly:

INFO: run: lxc-usernsexec -m u:468000000:1000000000:1000000 -m g:468000000:1000000000:1000000 -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client

-m u:468000000:1000000000:1000000 -m g:468000000:1000000000:1000000 -m u:0:100000:65536 -m g:0:100000:65536

thanks to these extra mappings:
-m u:468000000:1000000000:1000000 -m g:468000000:1000000000:1000000

But the problem is that these arguments are missing when the container is restored:

-m u:468000000:1000000000:1000000 -m g:468000000:1000000000:1000000

(full restore log)

recovering backed-up configuration from 'backup2-private-all:backup/ct/4017/2022-07-21T14:18:24Z'
restoring 'backup2-private-all:backup/ct/4017/2022-07-21T14:18:24Z' now..
Error: error extracting archive - error at entry ".bash_history": failed to set ownership: Invalid argument (os error 22)
TASK ERROR: unable to restore CT 553 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/4017/2022-07-21T14:18:24Z root.pxar /var/lib/lxc/553/rootfs --allow-existing-dirs --repository pbs_backup_user@pbs@backup2.us..com:private-all' failed: exit code 255

The question is: how can I inject these arguments into the restore command, and not just the backup command?
-m u:468000000:1000000000:1000000 -m g:468000000:1000000000:1000000
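One manual workaround, since I know of no vzdump/pct option for this, is to build the `-m` arguments from the container config yourself and run the restore under `lxc-usernsexec` directly. The awk extraction below is a runnable sketch on a scratch file; the final `lxc-usernsexec` line is only shown commented out, with the snapshot, repository and rootfs path taken from the failing log above, and is untested:

```shell
# Build lxc-usernsexec -m arguments from the lxc.idmap lines of a config.
# Demo uses a scratch file; on a node you would read /etc/pve/lxc/<lxcid>.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
lxc.idmap: u 468000000 1000000000 1000000
lxc.idmap: g 468000000 1000000000 1000000
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536
EOF

# Each "lxc.idmap: u <ct> <host> <count>" line becomes "-m u:<ct>:<host>:<count>"
mapargs=$(awk '/^lxc\.idmap:/ { printf "-m %s:%s:%s:%s ", $2, $3, $4, $5 }' "$conf")
echo "$mapargs"
rm -f "$conf"

# Then, as root (hypothetical, untested):
# lxc-usernsexec $mapargs -- /usr/bin/proxmox-backup-client restore \
#   --crypt-mode=none ct/4017/2022-07-21T14:18:24Z root.pxar \
#   /var/lib/lxc/553/rootfs --allow-existing-dirs \
#   --repository pbs_backup_user@pbs@backup2.us.com:private-all
```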
 
