[SOLVED] Restore of container backup from ZFS to LVM on a different server fails because of compression in ZFS

Marc Ballat

Hi there,

I have two Proxmox VE 8 servers called proxmox1 and proxmox3.

The backups of proxmox1 are validated every night (copied to a Debian desktop, decompressed, then the container is started and pinged).
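
Something along these lines, for illustration (a sketch only; hostnames, paths and the test CT ID are placeholders, and the restore/start step assumes a spare PVE node rather than the desktop):

Bash:
# copy the newest dump and make sure it decompresses cleanly
scp proxmox1:/nas-zfs/backup/dump/vzdump-lxc-120-*.tar.gz /tmp/
gzip -t /tmp/vzdump-lxc-120-*.tar.gz                # archive integrity
tar -tzf /tmp/vzdump-lxc-120-*.tar.gz > /dev/null   # archive can be listed/unpacked
# on a test node: restore, start and ping the container
pct restore 999 /tmp/vzdump-lxc-120-*.tar.gz --storage local
pct start 999 && ping -c 3 <container-ip>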

Proxmox3 is an old Dell Precision T1650 meant to serve as a backup in case proxmox1 fails. It has a 512 GB NVMe on a PCIe adapter and two 1 TB hard drives for storing the backup files coming from proxmox1.

Code:
root@proxmox1:~# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
nas-zfs                       2.49T  4.65T  28.3G  /nas-zfs
nas-zfs/backup                2.46T  4.65T  2.46T  /nas-zfs/backup
nas-zfs/subvol-131-disk-0       96K  1024G    96K  /nas-zfs/subvol-131-disk-0
rpool                         37.2G   862G    96K  /rpool
rpool/ROOT                    11.6G   862G    96K  /rpool/ROOT
rpool/ROOT/pve-1              11.6G   862G  11.6G  /
rpool/data                    25.4G   862G   120K  /rpool/data
ssd-zfs                        138G   761G   104K  /ssd-zfs
ssd-zfs/subvol-120-disk-0     6.94G  3.06G  6.94G  /ssd-zfs/subvol-120-disk-0
ssd-zfs/vm-201-disk-0         58.9G   779G  40.6G  -
ssd-zfs/vm-201-disk-1         10.2G   768G  3.08G  -

Code:
root@proxmox1:~$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
...
nvme0n1     259:0    0 931.5G  0 disk
|-nvme0n1p1 259:1    0 931.5G  0 part
`-nvme0n1p9 259:2    0     8M  0 part

And here is the backup file for my container:
Code:
-rw-r--r-- 1 root root  4249469750 Feb 19 00:43 vzdump-lxc-120-2025_02_19-00_36_44.tar.gz

And here is proxmox3:

Code:
root@proxmox3:~# lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda            8:0    0 931.5G  0 disk
├─sda1         8:1    0  1007K  0 part
├─sda2         8:2    0     1G  0 part
└─sda3         8:3    0 930.5G  0 part
sdb            8:16   0 931.5G  0 disk
├─sdb1         8:17   0  1007K  0 part
├─sdb2         8:18   0     1G  0 part
└─sdb3         8:19   0 930.5G  0 part
nvme0n1      259:0    0 476.9G  0 disk
├─nvme0n1p1  259:1    0  1007K  0 part
├─nvme0n1p2  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3  259:3    0 475.9G  0 part
  ├─pve-swap 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root 252:1    0    96G  0 lvm  /
  └─pve-data 252:2    0 371.9G  0 lvm  /mnt/data

Code:
root@proxmox3:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool  92.2G   807G  92.2G  /rpool

Code:
root@proxmox3:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   3   0 wz--n- <475.94g    0

Code:
root@proxmox3:~# lvs
  LV   VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve -wi-ao---- <371.94g                                                  
  root pve -wi-ao----   96.00g                                                  
  swap pve -wi-ao----    8.00g

Note that I have removed the LVM thin pool created by the installer and created a regular logical volume in its place, mounted as /mnt/data.
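
For reference, that rework looks roughly like this (a sketch only; the LV and storage names are examples):

Bash:
# drop the 'data' thin pool created by the installer (destroys its contents!)
lvremove pve/data
# recreate a plain LV in the freed space, format and mount it
lvcreate -n data -l 100%FREE pve
mkfs.ext4 /dev/pve/data
mkdir -p /mnt/data
echo '/dev/pve/data /mnt/data ext4 defaults 0 2' >> /etc/fstab
mount /mnt/data
# register it in Proxmox as a directory storage
pvesm add dir data-dir --path /mnt/data --content rootdir,images,backup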

I can restore the backup to rpool:
Code:
recovering backed-up configuration from 'backup-proxmox1:backup/vzdump-lxc-120-2025_01_15-00_43_57(OK).tar.gz'
restoring 'backup-proxmox1:backup/vzdump-lxc-120-2025_01_15-00_43_57(OK).tar.gz' now..
extracting archive '/rpool/backup-proxmox1/dump/vzdump-lxc-120-2025_01_15-00_43_57(OK).tar.gz'
tar: ./var/log/journal/fe2dcc346dde445a9e95482e37c85663/user-1000@0925f6f94d7949339511f2961218d5a2-000000000014f2c6-00061595f09a143f.journal: Warning: Cannot acl_from_text: Invalid argument
...
Total bytes read: 22722068480 (22GiB, 72MiB/s)
merging backed-up and given configuration..
TASK OK

And here is the result of mount -l after the restore operation:
Code:
root@proxmox3:~# mount -l
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16378276k,nr_inodes=4094569,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3282416k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=4817)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
ramfs on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
/dev/nvme0n1p2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
ramfs on /run/credentials/systemd-sysctl.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
rpool on /rpool type zfs (rw,relatime,xattr,noacl,casesensitive)
ramfs on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=3282416k,nr_inodes=820604,mode=700,inode64)
/dev/mapper/pve-data on /mnt/data type ext4 (rw,relatime)
rpool/subvol-100-disk-0 on /rpool/subvol-100-disk-0 type zfs (rw,relatime,xattr,posixacl,casesensitive)

Code:
root@proxmox3:~# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    97.3G   802G  92.2G  /rpool
rpool/subvol-100-disk-0  5.08G  4.92G  5.08G  /rpool/subvol-100-disk-0

But it fails if I try a restore to /mnt/data (directory storage):
Code:
recovering backed-up configuration from 'backup-proxmox1:backup/vzdump-lxc-120-2025_01_15-00_43_57(OK).tar.gz'
Formatting '/rpool/data/images/100/vm-100-disk-0.raw', fmt=raw size=10737418240 preallocation=off
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: 3570c745-0ec4-4982-87c7-4b2768bf346d
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
restoring 'backup-proxmox1:backup/vzdump-lxc-120-2025_01_15-00_43_57(OK).tar.gz' now..
extracting archive '/rpool/backup-proxmox1/dump/vzdump-lxc-120-2025_01_15-00_43_57(OK).tar.gz'
tar: ./home/lmetv/.forever/n1fz.log: Cannot write: No space left on device
...
Total bytes read: 22722068480 (22GiB, 133MiB/s)
tar: Exiting with failure status due to previous errors
TASK ERROR: unable to restore CT 100 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - -z --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/100/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2

The same happens if I try to clone the container from rpool to /mnt/data.

I am clueless...
 
> tar: ./home/lmetv/.forever/n1fz.log: Cannot write: No space left on device

If you get a successful restore on rpool, resize the container's vdisk up by a couple of GB and then move it to /mnt/data:

(lxc) Resources / Root disk / Volume action / Move storage
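
Roughly the CLI equivalent, in case you prefer the shell (the storage name 'data-dir' is a placeholder for your directory storage on /mnt/data):

Bash:
pct resize 100 rootfs +2G           # grow the restored container's root disk
pct move-volume 100 rootfs data-dir # move the rootfs from rpool to the directory storage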
 
I tried, but without success.

I increased it from 10 GB to 11, 12 and then 20 GB; the result is always the same.

The failure is even stranger given that restoring a VM backup to /mnt/data succeeds!

Code:
root@proxmox3:/# ls -lh /mnt/data/images/201
total 58G
-rw-r----- 1 root root 58G Feb 21 07:33 vm-201-disk-0.raw
-rw-r----- 1 root root 10G Feb 21 07:33 vm-201-disk-1.raw
 
I discovered something strange while investigating further.

Code:
root@proxmox3:/rpool/subvol-120-disk-0# du -bch home/*
...
16G     total

While:
Code:
root@proxmox3:/rpool/subvol-120-disk-0# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    98.7G   801G  92.2G  /rpool
rpool/subvol-120-disk-0  6.52G  3.51G  6.49G  /rpool/subvol-120-disk-0

How can the total size reported for /home be bigger than the ZFS subvolume's total size?
 

Well, the answer is: ZFS can do compression, and it does! /home contains big log files that compress well.

When you restore from ZFS to ZFS, you get a comparable compression result. But when you restore to another filesystem without compression (e.g. ext4 on LVM or LVM-thin), your root FS grows beyond the size it appears to have on the source machine.
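
A quick way to see the effect on the source side (the apparent size is roughly what an uncompressed ext4 volume will need):

Bash:
du -sh --apparent-size /ssd-zfs/subvol-120-disk-0   # logical size of the data
du -sh /ssd-zfs/subvol-120-disk-0                   # compressed size on ZFS
zfs get compression,compressratio ssd-zfs/subvol-120-disk-0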

I removed the old log files, took a new backup, and the restore worked.
 
> tar: ./home/lmetv/.forever/n1fz.log: Cannot write: No space left on device
>
> If you get a successful restore on rpool, resize the container's vdisk up by a couple of GB and then move it to /mnt/data:
>
> (lxc) Resources / Root disk / Volume action / Move storage
If I had increased it a little bit more, it would have worked with your advice. See my last answer.
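
For the record, the successful restore on rpool reported "Total bytes read: 22722068480", i.e. roughly 21 GiB of uncompressed data, so the rootfs would have had to grow past ~21-22 GB rather than just to the 20 GB I tried.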
 
You can try something like this to see useful information such as the compression ratio:
Bash:
zfs list -ospace,logicalused,compression,compressratio -rS compressratio
 