LXC backup and restore with configured mountpoints

indi

Renowned Member
Jun 9, 2016
I have some Proxmox 4.2 nodes with different hardware configurations. The nodes are not clustered, but they share a common NFS storage for backups and migration. Of course, without a cluster, migration downtime is inevitable.

For migration from node A to node B:
Code:
nodeA:/# pct shutdown $VM
nodeA:/# vzdump $VM --storage transfer --quiet 1 --compress lzo --mode stop
nodeB:/# pct restore $VM /mnt/pve/transfer/dump/vzdump-lxc-$VM-*.lzo \
         -storage local-lvm
nodeB:/# pct start $VM
nodeA:/# pct destroy $VM
This works, but some of my LXC containers have small and large bind mounts (mp0, mp1, ...).
Bind mount contents are not included in the container backup, so I copy them with rsync:
Code:
nodeA:/# rsync -ar --numeric-ids --rsh=ssh /mnt/xxx/binds/$VM/home \
        $nodeB:/mnt/yyy/binds/$VM/home
Because the bind mount paths differ between the nodes, I add -mpN options to the restore command:
Code:
nodeB:/# pct restore $VM /mnt/pve/transfer/dump/vzdump-lxc-$VM-*.lzo \
        -storage local-lvm \
        -mp0 mp=/home,/mnt/yyy/binds/$VM/home
Suddenly I receive an error: mountpoints configured, but 'rootfs' not set - aborting
Hmm. Let's add the -rootfs option:
Code:
nodeB:/# pct restore $VM /mnt/pve/transfer/dump/vzdump-lxc-$VM-*.lzo \
        -storage local-lvm \
        -mp0 mp=/home,/mnt/yyy/binds/$VM/home \
        -rootfs local-lvm
Another error: unable to parse volume ID 'local-lvm'
Let's add a size for the rootfs:
Code:
nodeB:/# pct restore $VM /mnt/pve/transfer/dump/vzdump-lxc-$VM-*.lzo \
        -storage local-lvm \
        -mp0 mp=/home,/mnt/yyy/binds/$VM/home \
        -rootfs local-lvm:4
OK, restored!
My containers have different rootfs sizes, some 4 GB, some other sizes. I can get the rootfs size from the source node:
Code:
nodeA:/# pct config $VM | grep 'rootfs: ' | cut -f 2 -d = | cut -f 1 -d G
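For example, the extracted size could be fed straight into the restore command. A rough sketch, assuming passwordless SSH from node B to node A (the ssh call and the SIZE variable are illustrative additions, not something from the thread):
Code:
nodeB:/# SIZE=$(ssh nodeA "pct config $VM" | grep 'rootfs: ' | cut -f 2 -d = | cut -f 1 -d G)
nodeB:/# pct restore $VM /mnt/pve/transfer/dump/vzdump-lxc-$VM-*.lzo \
        -storage local-lvm \
        -mp0 mp=/home,/mnt/yyy/binds/$VM/home \
        -rootfs local-lvm:$SIZE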

Maybe someone knows a better solution? Would it be better to unset all the bind mounts on node A before the dump and set them again on node B after the restore?
 
if you have identical bind mount paths on both nodes, you would not need to do anything except make sure that they are available (and have the correct content). bind mount contents are skipped when backing up, but the config line is backed up and restored.

there are basically two modes when restoring backups:

"pct restore ID BACKUP [-storage STORAGE]" , which creates all the volume mountpoints configured in the backup on the given storage, restores the data, then adds bind mounts to the config. this is the "simple" mode, which is also used by the GUI.

"pct restore ID BACKUP -rootfs STORAGE:size [-mpX STORAGE:size]" , which will ignore the mountpoint configuration from the backup, and replace it with the one given on the command line, then restore the data, and finally configure the bind mountpoints given on the command line. this "advanced" mode allows to redistribute the files contained in the backup to a different mountpoint configuration, but also offers more control over the storages where mountpoints are restored and volumes sizes.

both modes allow setting almost all the options that you can set with "pct set", but as soon as you set rootfs or mpX, you switch to the advanced mode.

if you cannot have the same bind mount path on both nodes, an alternative would be to use the simple mode to restore:
Code:
pct restore $VM /mnt/pve/transfer/dump/vzdump-lxc-$VM-*.lzo -storage local-lvm

followed by replacing the wrong bind mount with a correct one (before starting the container):

Code:
pct set $VM -mpX /path/on/node,mp=/path/in/container
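with the paths used earlier in this thread, that could look like (mp0 and the concrete paths are just an example):
Code:
pct set $VM -mp0 /mnt/yyy/binds/$VM/home,mp=/home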
 
pct restore exits with an error when run without the advanced -mpX options:
Code:
mounting container failed
Logical volume "vm-101-disk-1" successfully removed
directory '/mnt/xxx/binds/101/home' does not exist

But I found a solution: just run 'pct set' with the new paths on the source node before 'vzdump'. Then 'pct restore' can be used in simple mode without any error!
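Roughly, the whole flow becomes something like this (a sketch reusing the paths from my earlier posts, and assuming the target directory already exists on node B):
Code:
nodeA:/# pct shutdown $VM
# point the bind mount at node B's path before dumping (example path)
nodeA:/# pct set $VM -mp0 /mnt/yyy/binds/$VM/home,mp=/home
nodeA:/# vzdump $VM --storage transfer --quiet 1 --compress lzo --mode stop
nodeB:/# pct restore $VM /mnt/pve/transfer/dump/vzdump-lxc-$VM-*.lzo -storage local-lvm
nodeB:/# pct start $VM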
 
dang, you're right. bugfix is on its way..
 
Unfortunately, I have not found a speed limit option for 'pct restore'. Other running VMs suffer very slow performance while a big dump is being restored.

vzdump and rsync both have --bwlimit <speed>. How can I limit 'pct restore'?
 
there is no such limit currently. you could try slowing it down with ionice (your mileage may vary).

you can also pipe the archive into pct restore, with something like "pv -L 50M /path/to/vzdump-archive.tar | pct restore ID - -rootfs storage:5", which limits reading from the archive to 50MiB/s and even displays a progress bar ;). note that piping the backup archive requires using the advanced mode (the configuration cannot be extracted before creating the disks/volumes, so you have to specify them explicitly). the rest of the configuration is restored as usual, so you could probably create a scripted wrapper that extracts the disk configuration from the backup archive (with "pvesm extractconfig") and then generates a command like the one above, using the storage and size (and other options) from the backup archive.
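
spelled out with this thread's storage and bind mount paths, that might look like this (the ID, size, and rate limit are examples, and the archive name follows the uncompressed-tar example above):
Code:
# cap reading of the archive at 50MiB/s; pct restore reads it from stdin
pv -L 50M /mnt/pve/transfer/dump/vzdump-lxc-101.tar \
    | pct restore 101 - -rootfs local-lvm:4 -mp0 mp=/home,/mnt/yyy/binds/101/home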

since we "only" use tar to extract the backup, there is no builtin bandwidth limiting..
 
Hi!
Even though this is an old topic, I have a similar question. I have a container whose rootfs is on disk HDD-2-1, and an additional mountpoint /data on the same disk. I was trying to reduce the volume size of the rootfs. After restoring with

Code:
--rootfs HDD-2-1:20
the mountpoint /data didn't exist. What would be the correct syntax to reduce the size of rootfs and restore /data?

Edit: Found this working solution.
 