LXC backup and multiple raw disks

NeySlim

Active Member
May 3, 2016
Hi everyone,

We are currently facing a problem when scheduling backups of LXC containers that have a second raw disk attached as a mount point (mp0).

vzdump seems to ignore it and only backs up the raw rootfs, even with options like backup=1 or backup=yes specified.

Here is the container conf:

root@pmx1:~# cat /etc/pve/lxc/100.conf
arch: amd64
cpulimit: 4
cpuunits: 1024
hostname: nginx-master
memory: 8192
mp0: local:100/vm-100-disk-2.raw,mp=/elq,size=100G,backup=1
net0: bridge=vmbr0,gw=10.101.240.254,hwaddr=36:65:36:63:34:63,ip=10.101.240.11/24,name=eth0,type=veth
onboot: 1
ostype: debian
rootfs: local:100/vm-100-disk-1.raw,size=8G
swap: 2048


Is there an option to force this somewhere? I couldn't find a precise answer on the forum. I suspect this only works when the mount point is not a raw volume but a directory on the host filesystem.

We're on a fresh 4.2 install.

Thanks in advance.
 
works without a problem here.

could you post
  • the output of "pveversion -v"
  • the content of "/etc/pve/storage.cfg"
  • the content of "/etc/vzdump.conf"
  • the exact vzdump command you are trying to use
 

As you requested:

root@pmx1:~# pveversion -v
proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-14
pve-container: 1.0-62
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie



root@pmx1:~# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content backup,vztmpl,rootdir
maxfiles 1





root@pmx1:~# cat /etc/vzdump.conf
# vzdump default settings

#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#size: MB
#maxfiles: N
#script: FILENAME
#exclude-path: PATHLIST




For the command used, we just use the Proxmox web UI backup button, or "vzdump <vmid>" on the CLI.
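So roughly something like this (the storage name "local" and suspend mode here are just an example, not our exact settings):
Code:
vzdump 100 --storage local --mode suspend --compress lzo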
 
just to rule out any misunderstandings - when you say that vzdump does not back up the mountpoint, do you mean you checked inside the backup archive and the content of the mountpoint is not in there? maybe you expected the rootfs and mountpoints to be backed up separately - this is not the case ;) as an example, I defined an mp0 like this:
Code:
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: backuptest
memory: 512
mp0: local:103/vm-103-disk-2.raw,mp=/mnt/somepath,size=1G,backup=1
net0: bridge=vmbr0,hwaddr=3A:32:35:39:37:61,ip=dhcp,ip6=dhcp,name=eth0,type=veth
ostype: debian
rootfs: local:103/vm-103-disk-1.raw,size=8G
swap: 512

I created a test file "test" inside that mountpoint, and used vzdump to create a backup (uncompressed in this case). Using "tar tf", I can check which paths are contained in the backup archive:
Code:
tar tf /backups/dump/vzdump-lxc-103-2016_05_04-11_47_31.tar | grep somepath
./mnt/somepath/
./mnt/somepath/
./mnt/somepath/test

in your case, you can only use suspend and stop mode backups (snapshots are not supported on "dir" storage), which means that the container's mounted "/" (including any mountpoints that are not excluded) will be rsynced / tarred. as long as you set "backup=1" on your mountpoint and the mountpoint is on a storage managed by proxmox (i.e. it is not a bind or device mountpoint), the mountpoint is not put on the exclude list and is thus backed up.
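for completeness, the backup flag can also be set on an existing mountpoint from the CLI, roughly like this (using the vmid and volume from your posted config):
Code:
pct set 100 -mp0 local:100/vm-100-disk-2.raw,mp=/elq,size=100G,backup=1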
 
Thanks for your replies, and sorry for the delay, I was out for several days.
Indeed, it does back up the second disk's data, but it restores everything into only one disk.
Do you know why it doesn't create a second disk? If this is just a "missing" feature, will it be implemented in the future?
 

because mountpoints offer more flexibility than volumes/disks, this is not as straightforward as with Qemu VMs.

to get what you want, you can simply do the following (a rough sketch follows further below):
  1. setup the mountpoints (with something like ".new" appended to the mountpoint path) after the restore
  2. mount everything on the host with "pct mount ID" while the container is not running
  3. move the files from the restored path to the ".new" mountpoint path
  4. make sure the restored ("old") paths are really empty
  5. "pct unmount ID"
  6. adapt mountpoints to old paths (without ".new")
  7. start container
in general, this is not so easy to automate because mountpoint paths can be nested with non-compatible options, and there might not be a 1:1 mapping between the backed-up configuration and the configuration used for restoring (it is possible to pass rootfs, storage and mp settings when restoring). since it is pretty easy to work around manually but very complicated and confusing to automate, it is not automated at the moment.
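a rough shell sketch of the steps above (the vmid 101, the 100G size, the /elq path and the resulting volume name are only assumptions for illustration - adapt them to your setup):
Code:
# 1. add the mountpoint again, but with a temporary ".new" path (allocates a new 100G volume on "local")
pct set 101 -mp0 local:100,mp=/elq.new,backup=1
# 2. with the container stopped, mount its filesystems on the host
pct mount 101
# 3. move the restored files into the new volume (watch out for hidden files)
mv /var/lib/lxc/101/rootfs/elq/* /var/lib/lxc/101/rootfs/elq.new/
# 4. check that the old ("restored") path is really empty, then
# 5. unmount again
pct unmount 101
# 6. point the mountpoint back at the original path
pct set 101 -mp0 local:101/vm-101-disk-2.raw,mp=/elq,size=100G,backup=1
# 7. start the container
pct start 101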
 
Yeah, we had already thought of something like this :)
Anyway, thanks for everything.
 
