PBS Container Restore Failing

cmdrwgls
May 10, 2023
Short form:
  • Backups from old server taken, stored on PBS server.
  • New server installed, try to restore containers from PBS.
  • Failure!
The failure "No space left on device (os error 28)" is clearly wrong: the backup is 25GB (uncompressed), and the pool being restored to is an LVM-Thin with 150GB free.

Code:
Proxmox Virtual Environment 7.4-3
Storage 'pbs' on node 'ripper'

Task log:
recovering backed-up configuration from 'pbs:backup/ct/100/2023-05-10T17:18:53Z'
  Logical volume "vm-100-disk-0" created.
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: 85fbba0f-e121-47b8-8748-682570e5075f
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
restoring 'pbs:backup/ct/100/2023-05-10T17:18:53Z' now..
Error: error extracting archive - error at entry "system@4f410ee9deb94f3597e311fd650232e0-00000000001cc7ea-0005f9ddc109aae0.journal": failed to copy file contents: No space left on device (os error 28)
  Logical volume "vm-100-disk-0" successfully removed
TASK ERROR: unable to restore CT 100 - command '/usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/100/2023-05-10T17:18:53Z root.pxar /var/lib/lxc/100/rootfs --allow-existing-dirs --repository root@pam@10.0.0.205:backups' failed: exit code 255

Help?
 
Hi,
how did you perform the first 2 steps?
  • Backups from old server taken, stored on PBS server.
  • New server installed, try to restore containers from PBS.
  • Failure!
It seems that the filesystem created on vm-100-disk-0 is only 10GB in size; you can check the content of the backed-up config.
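If the GUI isn't handy, the backed-up config can also be pulled directly with proxmox-backup-client. A sketch, reusing the snapshot and repository string from the task log above (pct.conf is the name under which PBS stores a container's configuration; `-` writes it to stdout):

```shell
# Fetch only the container config from the snapshot and print it,
# so you can check the size recorded in the rootfs line.
proxmox-backup-client restore \
    'ct/100/2023-05-10T17:18:53Z' pct.conf - \
    --repository root@pam@10.0.0.205:backups
```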
 
this is entirely possible, since ZFS compresses (by default), so you can store more than 10G logical (uncompressed) data on a 10G (compressed) volume. when you now try to restore more than 10G logical data on a volume that does not do compression (LVM thin in this case?), there won't be enough space.

the solution is to either increase the disk size of the source container and do another backup, or, if the source doesn't exist, to restore on the command line and override the rootfs/mpX parameter:

pct restore TARGET_VMID BACKUP_ARCHIVE_VOLID --storage TARGET_STORAGE --rootfs TARGET_STORAGE:NEW_SIZE,OTHER_OPTIONS

everything in CAPS needs to be replaced with the desired values. OTHER_OPTIONS can be retrieved from the backup configuration (you can view that on the GUI as well). if your container had extra mountpoints, you need to specify those as well, else all the data will be restored into a single rootfs volume.
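For example, with hypothetical values filled in (a new CT ID of 101, a target storage named 'local-lvm', and a 32G rootfs, comfortably above the ~25G of logical data in this backup):

```shell
# Restore into a larger rootfs so the uncompressed data fits.
# 101        -> new container ID (hypothetical)
# local-lvm  -> target LVM-thin storage (substitute your own)
# 32         -> new rootfs size in GiB; must exceed the logical data size
pct restore 101 'pbs:backup/ct/100/2023-05-10T17:18:53Z' \
    --storage local-lvm \
    --rootfs local-lvm:32
```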
 
Hi,
how did you perform the first 2 steps?

It seems that the filesystem created on vm-100-disk-0 is only 10GB in size, you can check the content of the backed up config
Howdy Chris,

The backup was taken from the GUI to a Proxmox Backup Server, all defaults. The restore was started from the GUI from the Proxmox Backup Server, the only parameter chosen was the destination pool, everything else was at defaults.

The original filesystem is 10GB, of which 3.5GB is used. The rest is the backup picking up the empty filesystems in /dev/shm & /run (and by empty I mean 0 bytes) and backing them up without compression, which is odd. Regardless, the entire thing is 26GB backed up; its size on disk is closer to 3.5GB. Free space is NOT the issue: the drive in the original server this container runs on is 128GB, so 150GB on the new server is clearly more than enough.
 
this is entirely possible, since ZFS compresses (by default), so you can store more than 10G logical (uncompressed) data on a 10G (compressed) volume. when you now try to restore more than 10G logical data on a volume that does not do compression (LVM thin in this case?), there won't be enough space.

the solution is to either increase the disk size of the source container and do another backup, or, if the source doesn't exist, to restore on the command line and override the rootfs/mpX parameter:

pct restore TARGET_VMID BACKUP_ARCHIVE_VOLID --storage TARGET_STORAGE --rootfs TARGET_STORAGE:NEW_SIZE,OTHER_OPTIONS

everything in CAPS needs to be replaced with the desired values. OTHER_OPTIONS can be retrieved from the backup configuration (you can view that on the GUI as well). if your container had extra mountpoints, you need to specify those as well, else all the data will be restored into a single rootfs volume.
Howdy Fabian,

Well shut my mouth, it's compressed to hell ... compression ratio of 6.95x, must be all the aforementioned empty filesystems. I've got some checking to do, but for now let's assume you gentlemen are correct.
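As a quick sanity check of those numbers (using the figures from this thread), the logical size works out to roughly the ~25GB the backup reported:

```shell
# Logical (uncompressed) size = on-disk size * compression ratio.
# 3.5 GiB used at a 6.95x ratio is far more than the 10 GiB filesystem
# created on the non-compressing LVM-thin target, hence error 28 (ENOSPC).
awk 'BEGIN { printf "%.1f GiB\n", 3.5 * 6.95 }'   # -> 24.3 GiB
```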

Thanks Fabian & Chris.
 
