How to configure a file server with an external data volume for backup?

wbk

Renowned Member
Oct 27, 2019
Hi all,

I created an NFS file server with the container on SSD and the data on HDD. This way the container is somewhat snappy, with the HDD providing bulk.

Because I use bind mounts, I have difficulties backing up the complete container, including its data, to PBS.

How should I compose the container, so that when I restore it, it is a working NFS server complete with data? The split over multiple volumes does not need to be preserved when restoring.

Storage-wise, this is on the host:

Bash:
# lsblk
NAME                           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                              8:0    0 465.8G  0 disk
├─sda1                           8:1    0     3M  0 part
├─sda2                           8:2    0  14.9G  0 part
│ ├─mitwe-mtroot               252:1    0   1.5G  0 lvm  /
│ ├─mitwe-mtusr                252:2    0   6.1G  0 lvm  /usr
│ ├─mitwe-mtboot               252:3    0   476M  0 lvm  /boot
│ └─mitwe-mtvar                252:4    0   3.3G  0 lvm  /var
├─sda3                           8:3    0    40G  0 part
│ ├─ctvmroots-vm--100--disk--0 252:0    0     8G  0 lvm 
│ ├─ctvmroots-vm--103--disk--0 252:5    0     4G  0 lvm 
│ ├─ctvmroots-vm--105--disk--0 252:13   0     4G  0 lvm 
│ └─ctvmroots-vm--102--disk--0 252:14   0    20G  0 lvm 
├─sda4                           8:4    0     3G  0 part /mnt
└─sda5                           8:5    0   1.9G  0 part [SWAP]
sdb                              8:16   0 465.8G  0 disk
sdc                              8:32   0   1.8T  0 disk
└─sdc1                           8:33   0   1.8T  0 part
  └─nc--data-nc--rest          252:11   0   900G  0 lvm  /mnt/nc-rest
sdd                              8:48   0   1.8T  0 disk
└─sdd1                           8:49   0   1.8T  0 part
  └─nc--data-nc--linh          252:7    0   1.8T  0 lvm  /mnt/nc-linh
sde                              8:64   0   256M  1 disk
└─sde1                           8:65   0   251M  1 part
# pvs
  PV         VG        Fmt  Attr PSize   PFree   
  /dev/sda2  mitwe     lvm2 a--  <14.90g   <3.59g
  /dev/sda3  ctvmroots lvm2 a--  <40.00g   <4.00g
  /dev/sdc1  nc-data   lvm2 a--   <1.82t <959.68g
  /dev/sdd1  nc-data   lvm2 a--   <1.82t       0
# vgs
  VG        #PV #LV #SN Attr   VSize   VFree   
  ctvmroots   1   4   0 wz--n- <40.00g   <4.00g
  mitwe       1   4   0 wz--n- <14.90g   <3.59g
  nc-data     2   2   0 wz--n-   3.63t <959.68g
# lvs
  LV            VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0 ctvmroots -wi-a-----   8.00g                                                   
  vm-102-disk-0 ctvmroots -wi-ao----  20.00g                                                   
  vm-103-disk-0 ctvmroots -wi-ao----   4.00g                                                   
  vm-105-disk-0 ctvmroots -wi-ao----   4.00g                                                   
  mtboot        mitwe     -wi-ao---- 476.00m                                                   
  mtroot        mitwe     -wi-ao----  <1.49g                                                   
  mtusr         mitwe     -wi-ao----   6.09g                                                   
  mtvar         mitwe     -wi-ao----   3.26g                                                   
  nc-linh       nc-data   -wi-ao----  <1.82t                                                   
  nc-rest       nc-data   -wi-ao---- 900.00g

This is what the container looks like:

Bash:
# pct config 103
arch: amd64
cores: 2
features: mount=nfs,nesting=1
hostname: nc-rest
memory: 1024
mp0: /mnt/nc-rest,mp=/srv/nc_online
net0: name=eth0,bridge=vmbr1,gw=172.26.1.1,gw6=2a10:3781:2d49:172:26:1:1:0,hwaddr=BC:24:11:02:4B:71,ip=172.26.3.103/16,ip6=2a10:3781:2d49:172:26:3:103:0/128,type=veth
ostype: debian
rootfs: ctvmroots:vm-103-disk-0,size=4G
swap: 512
lxc.mount.entry: nfsd proc/fs/nfsd nfsd defaults 0 0
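
If I understand the docs correctly, the `mp0` above is a bind mount because it references a host path rather than a `storage:volume` pair, and that is why vzdump/PBS skips it. Something like the following should replace it with a storage-backed mount point that is included in backups; the storage name `nc-data-storage` is hypothetical and would first have to be defined on the `nc-data` VG:

```shell
# Replace the bind mount with a freshly allocated 850 GB storage-backed
# volume and mark it for inclusion in backups (backup=1).
# "nc-data-storage" is a hypothetical storage ID; adjust size as needed.
pct set 103 --mp0 nc-data-storage:850,mp=/srv/nc_online,backup=1

# Verify: a storage-backed mount point shows as storage:volume, not a host path.
pct config 103 | grep mp0
```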

In the documentation I belatedly found that bind mounts are not backed up. Just above that, I read:

Storage Backed Mount Points

Storage backed mount points are managed by the Proxmox VE storage subsystem and come in three different flavors:

  • Image based: these are raw images containing a single ext4 formatted file system.
  • ZFS subvolumes: these are technically bind mounts, but with managed storage, and thus allow resizing and snapshotting.
  • Directories: passing size=0 triggers a special case where instead of a raw image a directory is created.
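
As a sketch of that last flavor, I suppose a directory-backed mount point would be created roughly like this, assuming a directory storage (hypothetical ID `hdd-dir`) is first defined on the HDD:

```shell
# Hypothetical: register a directory storage on the HDD mount point,
# allowing it to hold container volumes (content type "rootdir").
pvesm add dir hdd-dir --path /mnt/nc-rest --content rootdir

# size=0 triggers the special case: Proxmox creates a plain directory
# instead of a raw image, yet it still counts as a managed mount point.
pct set 103 --mp1 hdd-dir:0,mp=/srv/nc_dir
```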

At https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_pbs I could find which storage types are suitable as backup targets, but not which mount points can be included in a backup, or how.


While writing this, I have come to realize that perhaps the container itself needs to live on the HDD, allowing for a single-volume backup, perhaps with the SSD as cache.

Is my current way of separating container and data a dead end if I want to have an integrated backup?
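
For what it's worth: if I do end up converting the bind mount to a storage-backed volume, I imagine the one-time data migration could look something like this, with the old bind mount temporarily kept as `mp1` next to the new volume (paths hypothetical):

```shell
# Inside the container: copy the data from the old bind mount
# (assumed re-attached read-only at /mnt/old_data) into the new
# storage-backed volume, preserving hard links, ACLs and xattrs
# that NFS clients may depend on.
rsync -aHAX --info=progress2 /mnt/old_data/ /srv/nc_online/

# Afterwards, on the host: remove the temporary bind mount.
pct set 103 --delete mp1
```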