LXC Bind Mounts: GUI and Backups

JC Connell

Apr 6, 2016
I have a container on my Proxmox host that I use for media-center duties. It runs services like Plex. I use a bind mount to attach a ZFS dataset mounted at /vol1/media to the container as /media. This usually works well, except in the case of backups.

Today I restored the media container from backup, but I had to manually recreate the mount point from the command line. I wonder if there is a way to accomplish this from the GUI.
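
For reference, recreating the mount point from the command line is just something like this (150 is my container's ID):
Code:
pct set 150 -mp0 /vol1/media,mp=/media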

Also, regarding the backup flag on the mount point: does that back up the data stored on that mount point, or just the line in the container config file?
 
If you set the 'backup' flag, mount point data is included in the backup. But a restore simply restores everything into a single directory; you can manually override that on the command line, but not from the GUI.
 
Bind mounts are never backed up (the expectation is that those are used for big, shared directories that are backed up on their own if necessary). The backup tool tells you about this when bind mounts are encountered. The config line is backed up and restored (if you restore as root@pam), but the data is not.

The backup flag is only for regular volume mountpoints, which can be excluded from being backed up by setting it to false.

All of the above only applies to the mpX config keys; if you add LXC mount entries behind PVE's back, PVE does not know or care about them ;)
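
For example, a regular volume mountpoint with the backup flag disabled vs. a bind mountpoint would look roughly like this in the container config (storage and volume names here are just placeholders):
Code:
mp0: local-zfs:subvol-100-disk-1,mp=/data,size=8G,backup=0
mp1: /vol1/media,mp=/media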
 
fabian, that's actually the behavior I'm looking for, but when I restore as root@pam, the line in the config file regarding the LXC bind mounts is not restored. I do add these LXC bind mounts manually from the console using vi (mp0: /foo/bar,mp=/foo2).

What do you mean by the mpX config keys? Are you referring to adding them from within the GUI? If so, it doesn't work as expected when I try it. I have a ZFS dataset with my media on it, and I want it mounted at the container's /media dir.
 
Could you please post:
  • the output of pveversion -v
  • the configuration of the container
  • the storage configuration (/etc/pve/storage.cfg)
  • the log/output of the backup operation
  • the configuration stored in the backup archive (should be available in the GUI in the backup tab of any container, or via "pvesm extractconfig"; see the example at the end of this post)
  • how you restore the backup (GUI or exact command line)
  • output of the restore operation
Adding the mountpoint manually (with "pct set" or with an editor) is fine; bind mountpoints cannot be added in any other way anyway, because they contain arbitrary host paths and are therefore limited to root.
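
To pull the config out of an existing backup archive, something like this should work (the archive volume ID below is just an example):
Code:
pvesm extractconfig local:backup/vzdump-lxc-150-2016_06_01-00_00_00.tar.lzo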
 
pveversion -v
Code:
root@pve1:~# pveversion -v
proxmox-ve: 4.2-51 (running kernel: 4.4.8-1-pve)
pve-manager: 4.2-5 (running version: 4.2-5/7cf09667)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.8-1-pve: 4.4.8-52
lvm2: 2.02.116-pve2
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-43
qemu-server: 4.0-85
pve-firmware: 1.1-8
libpve-common-perl: 4.0-71
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-56
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6-1
pve-container: 1.0-72
pve-firewall: 2.0-29
pve-ha-manager: 1.0-33
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.3-4
lxcfs: 2.0.2-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5.7-pve10~bpo80

Container config (note: I removed the backup flag because all contents of the media drive were being backed up and I don't have space for that):
Code:
arch: amd64
cpulimit: 6
cpuunits: 4096
hostname: mediaserver
memory: 8192
mp0: /vol1/media,mp=/media
net0: name=eth0,bridge=vmbr2,hwaddr=32:30:61:37:66:34,ip=dhcp,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: volume=r0ssd400gb-zfs:subvol-150-disk-1,size=30G
searchdomain: home.jcconnell.com
swap: 1024

/etc/pve/storage.cfg (side note: any idea why r0ssd400gb-zfs wouldn't be listed as active on node pve2?):
Code:
dir: local
        path /var/lib/vz
        maxfiles 10
        content vztmpl,backup

zfspool: r0ssd400gb-zfs
        pool r0ssd400gb
        content images,rootdir
        nodes pve2,pve1
        sparse

zfspool: r0ssd500gb-zfs
        pool rpool
        content rootdir,images
        nodes pve1,pve2
        sparse

I can provide the remainder of the outputs this evening.
 
