Restore LXC failed

Mrt12

Well-Known Member
May 19, 2019
Hi,
I have two nodes, pve1 and pve2, in a cluster. I created an LXC container on pve1, which uses a ZFS subvolume of size 2G. I successfully made a backup of this container on a network share and also verified that the backup can be restored, which works fine on pve1. On pve2, I added the same network share and tried to restore the same LXC container. The configuration is exactly the same; I also have ZFS pools with the same names, so I thought it should be possible to restore the LXC container there. However, the restore in the GUI takes forever and then fails, so I switched over to the command line and used:

Code:
pct restore 888 vzdump-lxc-105-2019_08_24-12_13_35.tar.lzo

which should restore the container backup from pve1 on pve2, with 888 as the new container ID. However, the process failed with a lot of errors:

Code:
Formatting '/var/lib/vz/images/888/vm-888-disk-0.raw', fmt=raw size=2147483648
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done                           
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: a957aaf6-ae81-4606-b846-e1a47c919847
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done                           
Writing inode tables: done                           
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: done

extracting archive '/mnt/pve/miradara1/dump/vzdump-lxc-105-2019_08_24-12_13_35.tar.lzo'
tar: ./usr/lib/gcc/x86_64-linux-gnu/5/cc1: Cannot write: No space left on device
tar: ./usr/lib/gcc/x86_64-linux-gnu/8: Cannot mkdir: No space left on device
tar: ./usr/lib/gcc/x86_64-linux-gnu/6: Cannot mkdir: No space left on device
tar: ./usr/lib/gcc/x86_64-linux-gnu/7: Cannot mkdir: No space left on device
tar: ./usr/lib/gcc/x86_64-linux-gnu/7: Cannot mkdir: No space left on device
tar: ./usr/lib/gcc/x86_64-linux-gnu/7/cc1: Cannot open
tar: ./usr/lib/os-release: Cannot write: No space left on device
....

even though there IS enough space left on the device! The container's disk size is 2G, and I have plenty of free disk space (>>100G), so that should not be an issue.
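(To rule out an actual space problem, free space can be checked on the storage the image is being written to - the /var/lib/vz path here is simply taken from the log above, so whether that is the right place to look is an assumption on my part:)

Code:
df -h /var/lib/vz
pvesm status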
I then started to read through the forums and search the net, and somewhere I found the advice to use

Code:
pct restore 909 vzdump-lxc-105-2019_08_24-12_13_35.tar.lzo --rootfs local:4

and this actually did the trick and worked fine! So I would like to understand: why can I restore the vzdump backup on pve1 but not on pve2? Why does it think the disk is full, even though it is not? And why do I need to specify the rootfs option?
 
The backup of the container includes the full disk image path (in PVE notation), not just the zpool name (you can check this using "Show Configuration" in the container's backup view). If the exact image (something like 'zpool/vm-888-disk-0') cannot be found, the restore falls back to creating a new image - on your default storage, which, as far as I can tell from your post, is not the zpool you want. If you give the container the same vmid on both nodes and the disk image has the same name, it should theoretically work from now on, even without the rootfs parameter (since the disk and the container now exist).
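If you want to place the restored container on the zpool right away, pointing the restore at that storage should do it - something along these lines (storage name and backup path taken from your post, not tested on my end):

Code:
pct restore 888 /mnt/pve/miradara1/dump/vzdump-lxc-105-2019_08_24-12_13_35.tar.lzo --storage tank

Alternatively, '--rootfs tank:2' would explicitly allocate a new 2G root volume on 'tank'.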

A better option would probably be to have your two hosts in a cluster, then you can simply migrate the container between them.
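Roughly like this (from memory, so treat it as a sketch; '--restart' is for a running container, a stopped one migrates offline):

Code:
pct migrate 105 pve2 --restart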
 
Hi Stefan,
I do have my two nodes in a cluster, but pve2 uses different hardware than pve1, and I wanted to verify whether it is possible to restore the LXC container on different hardware without issues.
(The answer is: yes, it works. After the above "hack" with --rootfs, I was able to restore the LXC container and start it. It worked out of the box - but that does not surprise me too much; I am already quite used to the fact that almost everything in Proxmox just works :cool: ).

However, I don't understand the issue with ZFS. On my original installation on pve1, the container's volume is on tank/ct-105-disk-0, and if I understand you correctly, the restore will attempt to create a new ZFS volume under tank/ct-nnn-disk-0, where nnn is the new ID of the container. I have configured a zpool called tank on both my nodes, pve1 and pve2, so the creation of the ZFS volume should actually succeed.
 
The issue is that, as far as I'm aware, the code currently checks whether the specified volume (storage/ct-<vmid from backup>-disk-n) exists, and if it doesn't, it creates it - however, it creates it on the default storage, which can be different from what was specified (in your example that would not be 'tank' but your local PVE root).
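You can verify where the restored volume actually ended up with something like this (909 being the ID from your second restore attempt, and assuming your ZFS storage is also called 'tank' in PVE):

Code:
pct config 909
pvesm list tank
zfs list -r tank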
 
