Hi, I'm having a problem trying to restore a backup of an LXC that has mount points.
In PVE I have an LXC whose main storage is on local-lvm. I have also given the LXC access to a RAIDZ pool through a mount point, specifically mp0.
local-lvm is 8 GB, while the RAIDZ is 20 GB, of which 15 GB is already in use. Inside the LXC the RAID is mapped at /mnt/data:

mp0: vault:subvol-100-disk-0,mp=/mnt/data,backup=1,size=20G

I have backed up the LXC to my PBS (including the MP). When I try to restore it, PVE doesn't mount the MP first; it restores everything directly to local-lvm, filling up the space.

What's the right way to restore this?

Here is the error log I get when restoring:
recovering backed-up configuration from 'vault_backup:backup/ct/100/2025-02-27T13:26:51Z'
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "vm-100-disk-0" created.
WARNING: Sum of all thin volume sizes (8.00 GiB) exceeds the size of thin pool pve/data and the amount of free space in volume group (8.00 MiB).
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: fffffffffffffffffffff
Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "vm-100-disk-1" created.
WARNING: Sum of all thin volume sizes (28.00 GiB) exceeds the size of thin pool pve/data and the size of whole volume group (<19.50 GiB).
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: ffffffffffffffffffffffffffff
Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
restoring 'vault_backup:backup/ct/100/2025-02-27T13:26:51Z' now..
Error: error extracting archive - encountered unexpected error during extraction: error at entry "archivo_aleatorio.bin": failed to extract file: failed to copy file contents: Input/output error (os error 5)
Logical volume "vm-100-disk-0" successfully removed.
Logical volume "vm-100-disk-1" successfully removed.
TASK ERROR: unable to restore CT 100 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/100/2025-02-27T13:26:51Z root.pxar /var/lib/lxc/100/rootfs --allow-existing-dirs --repository
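For reference, my understanding from the pct man page is that `pct restore` accepts the same `--rootfs`/`--mpN` options as `pct create`, so each volume can be pointed at its own storage instead of everything defaulting to one target. This is only a sketch of what I was expecting to work, with storage names and sizes taken from my setup above:

```shell
# Sketch, not verified: restore CT 100 from the PBS backup while
# directing each volume to its own storage. "local-lvm" and "vault"
# are the storage IDs from my configuration.
pct restore 100 vault_backup:backup/ct/100/2025-02-27T13:26:51Z \
    --rootfs local-lvm:8 \
    --mp0 vault:20,mp=/mnt/data,backup=1
```

Whether that would avoid the rootfs on local-lvm filling up during the restore is exactly what I'm unsure about.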