Create Backup, not enough local space

0C0C0C

New Member
Jul 12, 2019
I'm trying to move a big container from a Debian 9 / Proxmox 5 system to a new Debian 10 / Proxmox 6 system. The container has a local disk on ZFS and two mount points on ZFS.

When I try to create a backup, I run into the problem that I don't have enough local space.
I mounted external storage using SMB, but when I try to back up to this storage, Proxmox still insists on using the local storage to create the backup, and it fails because there is not enough space.

I tried to find solutions here and via Google, but the only thing I can find is "use backup and restore" - nothing that addresses my problem.

The old system's VM storage is a manually created ZFS pool; on the new system I have two ZFS mirrors created in Proxmox 6.

The only idea I found is to create the dump from a shell and set the tmpdir in the vzdump command, but I don't know whether that works, because Proxmox already refuses to back up directly to my SMB storage and uses a local temp directory instead.
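That shell attempt would look roughly like this (CT ID and mount paths are placeholders for the actual setup; the command is only printed, not executed, so the sketch is safe to run anywhere):

```shell
#!/bin/sh
# Placeholder values -- substitute the real CT ID and SMB mount point.
CTID=107
TMPDIR=/mnt/smb-backup/tmp
DUMPDIR=/mnt/smb-backup/dump

# --tmpdir overrides the default scratch space under /var/tmp, and
# --dumpdir writes the finished archive straight to the external mount.
CMD="vzdump $CTID --mode snapshot --compress gzip --tmpdir $TMPDIR --dumpdir $DUMPDIR"
echo "$CMD"
```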
 
Choosing the CIFS storage as the backup target is possible, but when starting a backup, this is shown:

[screenshot: upload_2019-7-27_14-40-19.png]

Proxmox uses the local storage (/var/tmp/...) for temporary files, and the local storage does not have enough space to create the backup.

What I tried next was editing /etc/vzdump.conf and pointing the tmpdir variable at a folder on my external storage mount. The external backup storage is now mounted via SSHFS instead of CIFS and added to Proxmox as a "Directory" storage. When starting a backup now, I get these errors:

[screenshot: upload_2019-7-27_14-41-32.png]

But... the backup is running. I just see a *.dat file instead of a *.tar file - probably because the backup is still in progress. The SSHFS mount uses the options "defaults,_netdev".
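For reference, the tmpdir change and the SSHFS mount look roughly like this (host and paths are placeholders for the actual mount):

```
# /etc/vzdump.conf -- point vzdump's scratch space at the SSHFS mount
tmpdir: /mnt/backup-sshfs/tmp

# /etc/fstab -- SSHFS mount with the options mentioned above
backup@backuphost:/backups  /mnt/backup-sshfs  fuse.sshfs  defaults,_netdev  0  0
```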
 
OK, the backup is done now, but the restore on the new server fails...

Code:
extracting archive '/opt/sh2tosh3/dump/vzdump-lxc-107-2019_07_27-12_14_28.tar'
tar: -: Cannot open: Permission denied
tar: Error is not recoverable: exiting now
TASK ERROR: unable to restore CT 101 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/101/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2

After moving the dump to the local storage of the new server, the restore starts, but again with two errors like the ones from the backup. The restore is running right now, so maybe the container will work afterwards - we will see...
Code:
extracting archive '/stor-zfs-2tb/test/dump/vzdump-lxc-107-2019_07_27-12_14_28.tar'
tar: ./etc/vzdump/pct.conf: Cannot change ownership to uid 208687, gid 208687: Invalid argument
tar: ./etc/vzdump/pct.fw: Cannot change ownership to uid 208687, gid 208687: Invalid argument
 
OK, it fails completely...
Code:
tar: ./etc/vzdump/pct.conf: Cannot change ownership to uid 208687, gid 208687: Invalid argument
tar: ./etc/vzdump/pct.fw: Cannot change ownership to uid 208687, gid 208687: Invalid argument
tar: ./var/spool/postfix/dev/urandom: Cannot mknod: Operation not permitted
tar: ./var/spool/postfix/dev/random: Cannot mknod: Operation not permitted
Total bytes read: 209769226240 (196GiB, 674MiB/s)
tar: Exiting with failure status due to previous errors
TASK ERROR: unable to restore CT 101 - command 'set -o pipefail && cstream -t 0 | lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/101/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2
 
Try restoring as a 'privileged' container (the default in the GUI was recently changed to unprivileged) - the last error you posted indicates that this used to be a privileged container.
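A restore along those lines could look like this (dump path taken from the log above; the storage name is an assumption - use whatever `pvesm status` lists on your node; the command is only echoed, not run):

```shell
#!/bin/sh
DUMP=/opt/sh2tosh3/dump/vzdump-lxc-107-2019_07_27-12_14_28.tar
# --unprivileged 0 forces a privileged restore, matching the source CT.
CMD="pct restore 101 $DUMP --storage stor-zfs-2tb --unprivileged 0"
echo "$CMD"
```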

hope this helps!
 
Ah, lol :D you're right... it worked! Now I need to test whether a restore from the shell using "-unprivileged" together with "-ignore-unpack-errors 1" works, to get an unprivileged container out of the privileged backup.
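That shell attempt would look roughly like this (dump path from the earlier restore log; only echoed here, not executed):

```shell
#!/bin/sh
DUMP=/stor-zfs-2tb/test/dump/vzdump-lxc-107-2019_07_27-12_14_28.tar
# --ignore-unpack-errors 1 lets the restore continue past the chown/mknod
# failures that an id-mapped (unprivileged) extraction produces.
CMD="pct restore 101 $DUMP --unprivileged 1 --ignore-unpack-errors 1"
echo "$CMD"
```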
 
Hmm - not sure; the restore and lxc start do quite a few things in between, so I never considered this an option.
Please report how it works out for you!

An alternative could be to set up a new unprivileged container, use `rsync` from within the containers to copy all relevant data over, and then switch the config (name, IP, ...).
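A minimal sketch of that rsync approach, run from inside the old container (hostname and data directory are placeholders; the command is only echoed here):

```shell
#!/bin/sh
# -aHAX preserves permissions, hard links, ACLs and extended attributes;
# run one invocation per data directory that matters.
CMD="rsync -aHAX /var/www/ root@new-ct:/var/www/"
echo "$CMD"
```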

Good luck!
 
I got the expected errors, but the container was created and not destroyed like in the GUI restore, and it starts. However, because of the services used inside the container, I get some problems with mounts and services.

Code:
extracting archive '/stor-zfs-2tb/test/dump/vzdump-lxc-107-2019_07_27-12_14_28.tar'
tar: ./etc/vzdump/pct.conf: Cannot change ownership to uid 208687, gid 208687: Invalid argument
tar: ./etc/vzdump/pct.fw: Cannot change ownership to uid 208687, gid 208687: Invalid argument
tar: ./var/spool/postfix/dev/urandom: Cannot mknod: Operation not permitted
tar: ./var/spool/postfix/dev/random: Cannot mknod: Operation not permitted
Total bytes read: 209769226240 (196GiB, 648MiB/s)
tar: Exiting with failure status due to previous errors
Detected container architecture: amd64

So the best and safest way really is to create new containers and reinstall the services / transfer the data.
 
