LXC Restore Fails on Extraction: Disk Quota Exceeded (os error 122)

s28400

New Member
Nov 2, 2021
Hello,

I recently upgraded my PVE datastore (ZFS) with some larger drives and am having an issue restoring my largest LXC container. Once the transfer completes, it throws the following error and then deletes the copied data:

Code:
recovering backed-up configuration from 'osmium:backup/ct/113/2022-08-12T19:35:03Z'
restoring 'osmium:backup/ct/113/2022-08-12T19:35:03Z' now..
Error: error extracting archive - error at entry "magic.mgc": failed to copy file contents: Disk quota exceeded (os error 122)
TASK ERROR: unable to restore CT 113 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/113/2022-08-12T19:35:03Z root.pxar /var/lib/lxc/113/rootfs --allow-existing-dirs --repository root@pam@osmium.mydomain.com:hdd_datastore' failed: exit code 255

The LXC has two disks: a small 40GB root disk and a larger 12TB mount point. I am restoring both of these to my new datastore, which has ~70TB of free space. I was able to successfully restore a smaller LXC with a 100GB root disk and a 1TB mount point without this issue.
I don't exactly understand what is happening here. Is PVE trying to extract to its installation disk instead of the large datastore?
I am restoring from the GUI; is there a CLI command I should use instead?
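From some digging, os error 122 appears to be EDQUOT (quota exceeded) rather than the pool itself running out of space, and as far as I understand, PVE sets a ZFS refquota on each subvol equal to the size= in the container config. If that's right, something like this should show whether the extraction is hitting that limit (the dataset paths are a guess based on my storage name):

Code:
# list the CT 113 subvols with their usage and quotas
zfs list -r -o name,used,refquota hdd_datastore | grep subvol-113
# or query a single dataset directly
zfs get refquota,used hdd_datastore/subvol-113-disk-0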

I read through a couple of similar posts on this forum and am thinking the command they provided to allocate more space could be worth a try, e.g.:
Code:
pct restore NEWCTID BACKUPSTORAGE:ct/114/2020-12-21T04:53:42Z --rootfs vms:500
(this will allocate a 500GB rootfs instead of the 280GB the config says).
I don't know the syntax for specifying the non-rootfs LXC disk, though, and I want to be confident before attempting it; my best guess at the shape of the command is below.
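Based on the pct documentation, mount points seem to follow the same STORAGE:SIZE pattern as the rootfs, with an added mp= path. Something like this, where the IDs and sizes are placeholders and the whole thing is an untested guess:

Code:
# STORAGE:SIZE allocates a fresh volume of SIZE (in GB) on that storage
pct restore NEWCTID BACKUPSTORAGE:ct/NNN/TIMESTAMP \
    --rootfs hdd_datastore:100 \
    --mp0 hdd_datastore:20000,mp=/data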

I am happy to provide any additional information to help solve this issue. Unfortunately, each restore attempt takes about 3 days to transfer the data over the network, and I don't think there is a way to copy from PBS to PVE for a local restore, so the iteration speed is quite slow. Any help from someone who knows why this is happening would be very much appreciated.

Thanks in advance!
 
Also for reference, this is the config for the LXC backup I am trying to restore:
Code:
arch: amd64
cores: 6
features: nesting=1
hostname: ceres
memory: 8192
mp0: hdd_datastore:subvol-113-disk-1,mp=/data,backup=1,size=12000G
net0: name=eth0,bridge=vmbr0,hwaddr=32:EE:7E:E6:9F:FE,ip=dhcp,ip6=dhcp,tag=50,type=veth
onboot: 1
ostype: ubuntu
rootfs: ssd_datastore:subvol-113-disk-0,size=40G
swap: 512
unprivileged: 1
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
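The two size= values above (40G for the rootfs, 12000G for mp0) are the limits I suspect the restore is hitting. If it helps, a volume ID from the config can be resolved to its on-disk path with pvesm (volume ID copied from my config above):

Code:
pvesm path hdd_datastore:subvol-113-disk-1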
 
After reading up on the pct restore command and its arguments in the documentation, I am trying the following manual restore command:
Code:
pct restore 113 osmium:backup/ct/113/2022-08-12T19:35:03Z --rootfs hdd_datastore:100 --mp0 hdd_datastore:20000,mp=/data --storage hdd_datastore
As far as I can tell, the original issue is that either the rootfs disk or the mount point volume (sized by the config) is smaller than what the archive extracts to. Hopefully the increased volume sizes (100GB vs 40GB for the rootfs and 20TB vs 12TB for the mount point) will remedy this. I will report back with the result in ~60 hours.
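Once it finishes, I plan to sanity-check the result with something along these lines (same container ID as above):

Code:
# confirm the restored config picked up the larger sizes
pct config 113
# start the container and check filesystem usage from inside it
pct start 113
pct exec 113 -- df -h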
 
The above command worked like a charm; hope this is helpful for anyone who runs into this same issue.
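One note for anyone worried about the oversized allocations: on ZFS-backed storage the size is just a refquota on the subvol, not a pre-allocation, so the 20TB mount point only consumes the space actually written to it. If you want the limit back down afterwards, I believe the quota can be tightened by hand (dataset path assumes the storage maps straight onto a pool of the same name):

Code:
# shrink the quota back toward the original 12000G; zfs refuses the
# change if the dataset already references more than the new limit
zfs set refquota=12000G hdd_datastore/subvol-113-disk-1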
 
