Hello,
I recently upgraded my PVE datastore (ZFS) with some larger drives and am having an issue restoring my largest LXC container. Once the transfer completes, it throws the following error and then deletes the copied data:
Code:
recovering backed-up configuration from 'osmium:backup/ct/113/2022-08-12T19:35:03Z'
restoring 'osmium:backup/ct/113/2022-08-12T19:35:03Z' now..
Error: error extracting archive - error at entry "magic.mgc": failed to copy file contents: Disk quota exceeded (os error 122)
TASK ERROR: unable to restore CT 113 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/113/2022-08-12T19:35:03Z root.pxar /var/lib/lxc/113/rootfs --allow-existing-dirs --repository root@pam@osmium.mydomain.com:hdd_datastore' failed: exit code 255
The LXC has two disks: a small 40GB root disk and a larger 12TB mount point. I am restoring both of these to my new datastore, which has ~70TB free. I was able to successfully restore a smaller LXC with a 100GB root disk and a 1TB mount point without hitting this issue.
I don't quite understand what is happening here. Is PVE trying to extract to its installation disk instead of the large datastore? I am restoring from the GUI; is there a CLI command I should use instead?
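In case it's relevant, os error 122 is EDQUOT, so I'm wondering whether a ZFS quota/refquota on the target dataset is the limit being hit rather than pool free space. This is what I'd check (the dataset names are guesses based on my storage name hdd_datastore and CT ID 113; as far as I know PVE names ZFS container volumes subvol-<vmid>-disk-<n>):

Code:
# list quota-related properties for everything under the pool
zfs list -o name,used,avail,quota,refquota -r hdd_datastore
# inspect the container's root volume specifically (dataset name is a guess)
zfs get quota,refquota,used,available hdd_datastore/subvol-113-disk-0

If the refquota on the root volume matches the configured 40GB disk size, maybe the restore is overflowing that one volume rather than the pool?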
I read through these posts here and here, and I'm thinking the command they provided to allocate more space could be worth a try, e.g. pct restore NEWCTID BACKUPSTORAGE:ct/114/2020-12-21T04:53:42Z --rootfs vms:500 (this allocates a 500GB rootfs instead of the 280GB the config says). I don't know the syntax for specifying the non-rootfs LXC disk, though, and I want to be confident before attempting it.
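For reference, here is my best guess at the full CLI restore with both disks sized explicitly, assuming pct restore accepts the same --mp[n] options as pct create. This is untested: the mount path /data is a placeholder for whatever my config actually uses, the sizes are in GB, and I'd welcome corrections on the syntax:

Code:
pct restore 113 osmium:backup/ct/113/2022-08-12T19:35:03Z \
    --storage hdd_datastore \
    --rootfs hdd_datastore:40 \
    --mp0 hdd_datastore:12000,mp=/data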
I am happy to provide any additional information to help solve this issue. Unfortunately, each restore attempt takes 3 days to transfer the data over the network, and I don't think there is a way to copy from PBS to PVE for a local restore, so the iteration speed is quite slow. Any help from someone who knows why this is happening would be very much appreciated.
Thanks in advance!