Hi,
I have two nodes, pve1 and pve2, in a cluster. I created an LXC container on pve1 which uses a ZFS subvolume of size 2G. I successfully made a backup of this container to a network share, and I also checked that the backup can be restored, which works fine on pve1. However, on pve2 I added the same network share and tried to restore the same LXC container. The configuration is exactly the same; I also have ZFS pools with the same names, so I thought it should be possible to restore the LXC container there as well. However, the restore in the GUI takes forever and then fails, so I switched over to the command line and used:
Code:
pct restore 888 vzdump-lxc-105-2019_08_24-12_13_35.tar.lzo
which should restore the container backup from pve1 on pve2, with 888 as the new container ID. But the process failed with a lot of errors:
Code:
Formatting '/var/lib/vz/images/888/vm-888-disk-0.raw', fmt=raw size=2147483648
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: a957aaf6-ae81-4606-b846-e1a47c919847
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done
extracting archive '/mnt/pve/miradara1/dump/vzdump-lxc-105-2019_08_24-12_13_35.tar.lzo'
tar: ./usr/lib/gcc/x86_64-linux-gnu/5/cc1: Cannot write: No space left on device
tar: ./usr/lib/gcc/x86_64-linux-gnu/8: Cannot mkdir: No space left on device
tar: ./usr/lib/gcc/x86_64-linux-gnu/6: Cannot mkdir: No space left on device
tar: ./usr/lib/gcc/x86_64-linux-gnu/7: Cannot mkdir: No space left on device
tar: ./usr/lib/gcc/x86_64-linux-gnu/7: Cannot mkdir: No space left on device
tar: ./usr/lib/gcc/x86_64-linux-gnu/7/cc1: Cannot open
tar: ./usr/lib/os-release: Cannot write: No space left on device
....
even though there IS enough space left on the device! The container's disk size is 2G, and I have plenty of free disk space (>>100G), so that should not be an issue.
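For reference, this is roughly how I checked the free space on pve2 (the paths are the default "local" directory storage, where the raw image from the log above was created, plus the ZFS pools; I have omitted the output here):
Code:
df -h /var/lib/vz    # free space on the 'local' directory storage, where vm-888-disk-0.raw was created
zfs list             # free space on the ZFS pools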
I then started to read through the forums and to search the net, and somewhere I found the advice to use
Code:
pct restore 909 vzdump-lxc-105-2019_08_24-12_13_35.tar.lzo --rootfs local:4
and this actually did the trick and worked fine! So I would like to understand: why can I restore the vzdump backup on pve1 but not on pve2? Why does it think the disk is full, even though it is not? And why do I need to specify the rootfs option?
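In case it helps, this is how I would compare the storage setup on the two nodes (commands only, no output pasted):
Code:
pvesm status              # list the storages each node knows about and their free space
cat /etc/pve/storage.cfg  # cluster-wide storage definitions shared between pve1 and pve2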