Restore backup failed

speedlnx

Active Member
Feb 7, 2016
I'm trying to restore a backup from my PBS server to a new Proxmox host. Everything goes fine with other, smaller machines, but it gets stuck on a 250GB one.
I modified /etc/vzdump.conf to put the tmp dir on a bigger dataset (the server has ZFS storage).

vzdump.conf

Code:
# vzdump default settings

tmpdir: /datastore/temp
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#size: MB
#stdexcludes: BOOLEAN
#mailto: ADDRESSLIST
#maxfiles: N
#script: FILENAME
#exclude-path: PATHLIST
#pigz: N

Code:
NAME                          USED  AVAIL     REFER  MOUNTPOINT
datastore                    16.4G  3.36T      160K  /datastore
datastore/subvol-100-disk-0  12.7G  7.29G     12.7G  /datastore/subvol-100-disk-0
datastore/subvol-101-disk-0  1.24G  18.8G     1.24G  /datastore/subvol-101-disk-0
datastore/subvol-105-disk-0  2.34G  5.66G     2.34G  /datastore/subvol-105-disk-0
datastore/temp                128K  3.36T      128K  /datastore/temp
datastore/vm-500-disk-0      50.5M  3.36T     50.5M  -

Code:
Error: error extracting archive - error at entry "main.cvd.init": failed to copy file contents: Disk quota exceeded (os error 122)
TASK ERROR: unable to restore CT 114 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=encrypt' '--keyfd=14' ct/114/2020-12-20T04:53:25Z root.pxar /var/lib/lxc/114/rootfs --allow-existing-dirs --repository cloud04@pbs@192.168.103.100:c03' failed: exit code 255
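For context, "Disk quota exceeded" is errno 122 (EDQUOT). On ZFS this is raised when a dataset-level quota or refquota is reached, even if the pool itself still has plenty of free space, so the limit to check is the one on the restore target rather than on a tmp dir. A first check might look like this (the dataset name is an assumption based on the usual PVE naming scheme for a container rootfs):

```shell
# EDQUOT (os error 122) on ZFS means a dataset quota or refquota was
# hit, even if the pool still has free space.
# "datastore/subvol-114-disk-0" is an assumed name following the usual
# PVE scheme for the restored container's rootfs.
zfs get quota,refquota,used,available datastore/subvol-114-disk-0
```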
 

speedlnx

Yes, but I don't use quotas!

Code:
NAME            PROPERTY  VALUE  SOURCE
datastore/temp  quota     none   default
 
LachCraft

Nov 28, 2018
Well, your inodes are full, I guess ^^
Clean up your temp and try again.

In this case I think that /temp is really only used temporarily and should be flushed.
I would still wait for a response from Proxmox, though.
 

speedlnx

It's not full, it's just 1% used:


Code:
Filesystem      Inodes      IUsed  IFree       IUse%  Mounted on
[...]
datastore/temp  7218232103  6      7218232097  1%     /datastore/temp
 
LachCraft
The number of files does not have to correspond to the amount of data on the disk.

Unless I'm completely off (last night was a bit rough, after all...), you have free space, but your maximum number of files (7,218,232,103) is filled with 7,218,232,097 files. There are only six "slots" left, and that's why the operation fails.

EDIT

Inodes      IUsed  IFree       IUse%  Mounted on
7218232103  6      7218232097  1%     /datastore/temp

oh well ... nvm
 

speedlnx

Hello LachCraft, thank you for your time, but I think my copy/paste alignment gave you a wrong idea of what I have...

This is the correct view:

Inodes: 7218232103
IUsed: 6
IFree: 7218232097
IUse%: 1%

Anyway, it's a new server. The datastore was never used before, and the disks are new. So there is no chance the inodes are used up.
I think the problem is with the temp dir that the PBS client uses. I think it writes to the main partition, which is only 250GB, and not the one I specified in vzdump.conf.
Maybe that's because vzdump.conf is used only for the temporary compression (backup) and not for decompression (restore).
 

fabian

Proxmox Staff Member
Jan 7, 2016
it will restore directly to the newly-allocated volumes. Could you post the backed-up container config?
 

fabian

and the full command you use to restore?
 

che

Active Member
Jul 10, 2020
"I think the problem is with the temp dir that the PBS client uses. I think it writes to the main partition, which is only 250GB, and not the one I specified in vzdump.conf."
Hi,
AFAIK the PBS client does not use the tmpdir specified in /etc/vzdump.conf (neither during compression nor decompression). That option only applies to vzdump backups. I stand to be corrected on this!
Looking at the CLI arguments for the restore in the error output, the data is restored to /var/lib/lxc/114/rootfs.
 

speedlnx

and the full command you use to restore?

Hello, I restore from the GUI. This is the LXC config from the original server:


Code:
arch: amd64
cores: 4
hostname: xxxxxx-xx
memory: 10240
net0: name=eth0,bridge=vmbr1,firewall=1,hwaddr=36:3B:FA:93:78:DA,ip=dhcp,ip6=auto,type=veth
onboot: 1
ostype: ubuntu
rootfs: vms:114/vm-114-disk-0.raw,size=280G
swap: 10240
unprivileged: 1
 

speedlnx

I will give you some more information; I hope it helps:

Original server: server1
PBS Server: backup1
Destination for restore: server2


Server1 backs up to backup1 without any error.
Server2 is a new server with new, clean ZFS storage with 3.4T free. The operating system is installed on a separate RAID 1 with 250GB SSD storage.

I use the GUI of server2 to restore from backup1. I restored other, smaller machines (containers and VMs) without problems.

Server2

Code:
root@xxxxxx-xx ~ # df -h
Filesystem                   Size  Used Avail Use% Mounted on
udev                          79G     0   79G   0% /dev
tmpfs                         16G  1.6M   16G   1% /run
/dev/mapper/vg0-root         216G  6.3G  199G   4% /
tmpfs                         79G   43M   79G   1% /dev/shm
tmpfs                        5.0M     0  5.0M   0% /run/lock
tmpfs                         79G     0   79G   0% /sys/fs/cgroup
/dev/md0                     968M  253M  666M  28% /boot
datastore                    3.4T  256K  3.4T   1% /datastore
datastore/subvol-105-disk-0  8.0G  2.4G  5.7G  30% /datastore/subvol-105-disk-0
/dev/fuse                     30M   32K   30M   1% /etc/pve
tmpfs                         16G     0   16G   0% /run/user/0
datastore/subvol-100-disk-0   20G   13G  7.3G  64% /datastore/subvol-100-disk-0
datastore/subvol-101-disk-0   20G  1.3G   19G   7% /datastore/subvol-101-disk-0
datastore/temp               3.4T  128K  3.4T   1% /datastore/temp
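One thing worth noting about the df output above: if I remember correctly, PVE puts a refquota on each ZFS subvol dataset equal to the container's configured disk size (which is why the subvols report 8.0G/20G sizes while datastore itself shows 3.4T). A recursive listing would show which dataset carries a limit that the 280G restore could run into; a sketch:

```shell
# PVE sets refquota on subvol datasets to the configured disk size,
# which is what df reports as the filesystem "Size" above.
# List all refquotas under the pool to spot a constraining dataset:
zfs get -r -t filesystem refquota datastore
```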
 

fabian

what happens if you use pct restore for restoring the container?

e.g., pct restore NEWCTID BACKUPSTORAGE:ct/114/2020-12-21T04:53:42Z --rootfs vms:500 (this will allocate a 500GB rootfs instead of the 280GB the config says).
 

fabian

the name of your backup storage on the PVE side..
 

speedlnx

the name of your backup storage on the PVE side..
Backups are on a separate PBS server, not on local storage.

Code:
root@xxxxx-xx ~ # pct restore 114 pbs-c03:ct/114/2020-12-21T04:53:42Z --rootfs vms:500
unable to parse PBS volume name 'ct/114/2020-12-21T04:53:42Z'
root@xxxxx-xx ~ #
 

fabian

sorry, that should be pbs-c03:backup/ct/114/2020-12-21T04:53:42Z
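Putting that correction together with the earlier attempt, the full command would presumably become:

```shell
# Restore CT 114 with a 500G rootfs on the "vms" storage, using the
# corrected PBS volume ID (note the backup/ prefix):
pct restore 114 pbs-c03:backup/ct/114/2020-12-21T04:53:42Z --rootfs vms:500
```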
 
