Backup Problem

cainam

Member
Aug 5, 2020
Hello everyone,
I always get an error during backup. Can anyone help me figure out what is causing it? Here is the log:



INFO: trying to get global lock - waiting...
INFO: got global lock
INFO: starting new backup job: vzdump 330 --mode snapshot --remove 0 --storage NUC_Backup --node pve --compress zstd
INFO: Starting Backup of VM 330 (lxc)
INFO: Backup started at 2020-10-20 20:21:05
INFO: status = running
INFO: CT Name: Daten
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap_vm-330-disk-0_vzdump" created.
WARNING: Sum of all thin volume sizes (188.44 GiB) exceeds the size of thin pool pve/data and the size of whole volume group (<111.29 GiB).
INFO: creating archive '/mnt/pve/NUC_Backup/dump/vzdump-lxc-330-2020_10_20-20_21_05.tar.zst'
INFO: tar: /mnt/pve/NUC_Backup/dump/vzdump-lxc-330-2020_10_20-20_21_05.tmp: Cannot open: Permission denied
INFO: tar: Error is not recoverable: exiting now
INFO: remove vzdump snapshot
Logical volume "snap_vm-330-disk-0_vzdump" successfully removed
ERROR: Backup of VM 330 failed - command 'set -o pipefail && lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar cpf - --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' --one-file-system '--warning=no-file-ignored' '--directory=/mnt/pve/NUC_Backup/dump/vzdump-lxc-330-2020_10_20-20_21_05.tmp' ./etc/vzdump/pct.conf ./etc/vzdump/pct.fw '--directory=/mnt/vzsnap0' --no-anchored '--exclude=lost+found' --anchored '--exclude=./tmp/?*' '--exclude=./var/tmp/?*' '--exclude=./var/run/?*.pid' ./ | zstd --rsyncable '--threads=1' >/mnt/pve/NUC_Backup/dump/vzdump-lxc-330-2020_10_20-20_21_05.tar.dat' failed: exit code 2
INFO: Failed at 2020-10-20 20:21:06
INFO: Backup job finished with errors
TASK ERROR: job errors
 
Hey,

Looks like there isn't enough storage space:
Code:
...
WARNING: Sum of all thin volume sizes (188.44 GiB) exceeds the size of thin pool pve/data and the size of whole volume group (<111.29 GiB).
...
 
INFO: tar: /mnt/pve/NUC_Backup/dump/vzdump-lxc-330-2020_10_20-20_21_05.tmp: Cannot open: Permission denied

is the reason. Is /mnt/pve/NUC_Backup on a network share like NFS?
 
INFO: tar: /mnt/pve/NUC_Backup/dump/vzdump-lxc-330-2020_10_20-20_21_05.tmp: Cannot open: Permission denied

is the reason. Is /mnt/pve/NUC_Backup on a network share like NFS?

Yes, and there is enough space... also, VM backups can be written to the same storage without problems.
 
Container backups run in an unprivileged context, so either move the dump directory off your network share, or give the unprivileged user access to your network share.
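If you go the "move the dump directory" route, vzdump's temporary and target directories can be pointed at local storage via /etc/vzdump.conf. A sketch; the paths are illustrative and should be adjusted to your setup (note that setting dumpdir replaces the --storage target, so archives would then land locally):

```
# /etc/vzdump.conf -- sketch, assuming stock Debian/PVE paths.
# tmpdir: where the temporary .tmp directory from the log above is created;
# keeping it on local storage avoids the permission problem on the NFS share.
tmpdir: /var/tmp
# dumpdir: final location of the archives (local storage instead of NFS).
dumpdir: /var/lib/vz/dump
```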
 
Exactly my problem at the moment! Great!

In this regard: do all LXC containers run under the same uid? That is, is it sufficient to add only one uid to the ACL?

Edit: I see ACL support in NFS seems unreliable.
 
You can see the mapping in the error: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536. Unless you have made advanced manual changes, this mapping is identical for all containers.
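The mapping above is just an offset: with the default range of 65536 ids, container uid/gid N appears on the host as 100000 + N. A quick sketch of the arithmetic:

```shell
# Default unprivileged mapping: u:0:100000:65536 means container uids
# 0..65535 are shifted to host uids 100000..165535.
range_start=100000
for container_uid in 0 33 1000; do
  host_uid=$((range_start + container_uid))
  echo "container uid ${container_uid} -> host uid ${host_uid}"
done
# container uid 0 -> host uid 100000
# container uid 33 -> host uid 100033
# container uid 1000 -> host uid 101000
```

So files written by the container's root (uid 0) arrive on the NFS server as uid 100000, which is why that single uid needs write access for all default unprivileged containers.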
 
Give the user 100000 access. But think carefully whether that is what you want; I already gave you the alternative.
 
Give the user 100000 access. But think carefully whether that is what you want; I already gave you the alternative.
Sorry for the questions, I am new to this.
It is just my own private server and there is no external access... I would also like to have a backup on a second drive in case the NUC HDD breaks down, so it doesn't make much sense to put the backup on the same device...
 
I don't know how your network share is set up, so I can't really help you with that. But you need to give user 100000 write permissions on it if you want to back up directly.
 
As giving 100000 write permissions is the same problem I face at the moment, I might add some thoughts here.

The easiest and by far the most insecure option would be to give 'others' rwx permissions, as in mode '777':
Bash:
chmod o+rwx dump/

Better would be to use filesystem ACLs and allow only UID 100000 to read, write and execute at the destination path:
Bash:
setfacl -m u:100000:rwx dump/
The problem here is that the user with UID 100000 is unknown on both the NFS client and the server. I am currently trying to figure out how to tell the NFS client to honor the NFS server's ACLs. Basically, I'm stuck because I don't know where the problem lies: in the server's filesystem, in the client's network-mapped filesystem, in the RPC layer, in Kerberos, or somewhere else. I tried sudo'ing on the NFS client with some existing UIDs, but at the moment no user other than root can write to the NFS mounts. Maybe idmap on either the client or the server is to blame, but reading the man pages does not help much.
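One server-side workaround, if per-user ownership on the export does not matter, is to squash all client uids to a single local account with anonuid/anongid, so the unknown uid 100000 never reaches the server's filesystem. A sketch for /etc/exports; the export path, subnet, and uid/gid 34 (Debian's 'backup' user) are assumptions for illustration:

```
# /etc/exports on the NFS server (hypothetical export path and subnet).
# all_squash maps every client uid -- including the container's 100000 --
# to anonuid/anongid, sidestepping the unknown-uid problem.
/export/pve-backups 192.168.1.0/24(rw,sync,all_squash,anonuid=34,anongid=34)
```

After editing, `exportfs -ra` reloads the export table; the exported directory itself must then be writable by that anonymous uid/gid.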

As my setup has both PVE and my NFS server joined to a UCS domain (Samba 4), it's more complicated, and I am first trying to rejoin the backup server to the domain to make sure it's not some kind of membership problem. Once that is done, I guess whatever solution I come upon will work for your problem too.

BTW, if everybody in this thread is a native German speaker, we can switch to German. I don't know why this thread switched to English in the first place :-D
 
