Bind mount not including files in nested ZFS datasets

caius

Member
Sep 26, 2018
I have a zpool with nested datasets:
Code:
root@pve1:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
vault                     1.50T  2.01T  1.50T  /vault
vault/files               52.8M  2.01T    96K  /vault/files
vault/files/apt-cache     52.6M  2.01T  52.6M  /vault/files/apt-cache
vault/files/webcerts       124K  2.01T   124K  /vault/files/webcerts


They are mounted correctly on the host (note the test.file):
Code:
root@pve1:~# ls -l /vault/files
total 10
drwxr-xr-x 6 150001 150001 6 Sep 26 01:59 apt-cache
-rw-r--r-- 1 150001 150001 0 Sep 26 02:11 test.file
drwxr-xr-x 4 150001 150001 4 Sep 26 01:59 webcerts


and notice the directories (nested datasets) also have contents:
Code:
root@pve1:~# ls -l /vault/files/webcerts
total 1
drwx------ 3 150001 150001 3 Sep 26 01:59 archive
drwx------ 3 150001 150001 3 Sep 26 01:59 live


The top pool is mounted to a container:
Code:
root@pve1:~# cat /etc/pve/lxc/104.conf
arch: amd64
cores: 1
hostname: vault-server
memory: 512
mp0: /vault,mp=/srv/vault
net0: name=eth0,bridge=vmbr4095,hwaddr=xx:xx:xx:xx:xx:xx,ip=dhcp,tag=33,type=veth
net1: name=eth1,bridge=vmbr0192,hwaddr=xx:xx:xx:xx:xx:xx,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: scram1:subvol-104-disk-0,size=2G
swap: 512
lxc.apparmor.profile: lxc-container-default-with-nfsd


But when I enter the container, the contents of the most deeply nested datasets are not mounted:
Code:
root@pve1:~# pct enter 104

root@vault-server:/# ls -l /srv/vault/files/
total 2
drwxr-xr-x 2 150001 150001 2 Sep 26 05:57 apt-cache
-rw-r--r-- 1 150001 150001 0 Sep 26 06:11 test.file
drwxr-xr-x 2 150001 150001 2 Sep 26 05:57 webcerts

root@vault-server:/# ls -l /srv/vault/files/webcerts
total 0

root@vault-server:/# ls -l /srv/vault/files/apt-cache
total 0


The contents of the deepest datasets are not inside the container.

I have tried unmounting and remounting the zpool, destroying and recreating the datasets, and restarting the container without the mount after making sure there was nothing in the /srv directory.
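For reference, here is a minimal host-side reproduction of the behaviour (throwaway paths, with tmpfs standing in for a nested dataset): a plain bind of the parent does not carry nested mounts along, while a recursive bind does.

```shell
# Run as root on the host. tmpfs stands in for a nested ZFS dataset:
# a child dataset is a separate mount under its parent's mountpoint.
mkdir -p /tmp/parent/child /tmp/dst
mount -t tmpfs tmpfs /tmp/parent/child
touch /tmp/parent/child/inner.file

# Plain bind: only the parent filesystem is bound; the nested mount
# is not followed, so the child directory appears empty at the target.
mount --bind /tmp/parent /tmp/dst
ls /tmp/dst/child

# Recursive bind: submounts come along too, and inner.file is visible.
umount /tmp/dst
mount --rbind /tmp/parent /tmp/dst
ls /tmp/dst/child

# Cleanup.
umount -R /tmp/dst
umount /tmp/parent/child
```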

Any suggestions?
 
Apr 24, 2020
@caius, I know I'm necro'ing this thread, but I keep coming across this thread as I do research into exactly the same use case, and since I figured it out, I thought others might benefit.

I got it to work using 9P mounts in a VM. I'm still testing performance, but it's very cool, and mounting/unmounting etc. is all seamlessly passed through to the VM.

Since I'm lazy, I'll just post my internal documentation notes below:

Deployment Steps

Create VM and 9P mount
1. Create the VM using the Proxmox GUI.
2. Create the 9P QEMU mount to pass the external ZFS dataset from the host into the VM.

FROM: https://forum.proxmox.com/threads/virtfs-virtio-9p-plans-to-incorporate.35315/

note:
/local-hdd/matters/ is where the folder resides on the host
/mnt/matters is where the folder appears inside the VM

Add this line (edited for your paths) to /etc/pve/qemu-server/xxx.conf:
Code:
args: -fsdev local,security_model=passthrough,id=fsdev0,path=/local-hdd/matters/ -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=9pshare,bus=pci.0,addr=0x4

3. Inside the VM, add this to /etc/fstab:
Code:
9pshare /mnt/matters 9p trans=virtio,version=9p2000.L,nobootwait,rw,_netdev 0 0

You might get an error at VM startup; if so, change the bus=pci.0,addr=0x4 address. If you get low speed on the mount, try adding these fstab options:
Code:
msize=262144,posixacl,cache=loose
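Before committing the fstab entry, it's worth mounting the share by hand inside the VM to confirm the tag is wired up (mount_tag 9pshare as configured above):

```shell
# Inside the VM: one-off manual mount to verify the 9P share.
# Requires the 9p and 9pnet_virtio kernel modules (built into most
# stock distro kernels).
mkdir -p /mnt/matters
mount -t 9p -o trans=virtio,version=9p2000.L 9pshare /mnt/matters

# Confirm the mount and its filesystem type.
findmnt /mnt/matters
```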
 
Apr 24, 2020
I used a TKL File Server ISO to build the VM, which works beautifully.
Host can manage snapshots, zfs mounts, replication etc, while an unprivileged VM can be connected to isolated VLANs for secure file sharing and joined to a domain without touching the hypervisor.
Permissions seem to be mapped through by the passthrough flag, so it's all quite seamless. I've only just got it working, so I'll report back once I've been using it for a while, but after working on this for so long I'm a bit giddy that it finally behaves as I had planned all along.
 
Apr 24, 2020
Update: I've since replaced my fileserver VMs with containers, which generally have a much happier time mounting nested datasets.

The VM<->9P mount performance I was getting just didn't meet my needs, unfortunately.
The guide referenced by Ricardo worked really well; here is the LXC configuration I used:

Code:
arch: amd64
cores: 2
features: mount=nfs,nesting=1
hostname: <REDACTED>
memory: 4096
mp0: local-vmdata:subvol-123-disk-1,mp=/storage/nvme,size=3000G
net0: name=eth0,bridge=vmbr1,firewall=1,gw=<REDACTED>,hwaddr=<REDACTED>,ip=<REDACTED>,tag=100,type=veth
onboot: 1
ostype: ubuntu
protection: 1
rootfs: local-vmdata:subvol-123-disk-0,size=50G
snaptime: 1610639375
swap: 1024
lxc.mount.entry: /matters/ storage/matters none rbind,create=dir,optional 0 0
lxc.mount.entry: /local-hdd-backup/backup/Tools/ storage/tools none rbind,create=dir,optional 0 0

The magic is in the lxc.mount.entry lines. Note that there are *many* nested datasets within the 'storage/matters' dataset, and the rbind option passes them all through to the container nicely.
I think this can also be made to work for an unprivileged container, but since I'm running an NFS server here I had to use a privileged one.
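A quick way to check that the recursion is actually doing its job is to compare the mount tree on the host with the one inside the container (container ID 123 matching the config above; adjust the paths to your layout):

```shell
# On the host: list the source tree and every mount nested under it.
findmnt -R /matters

# Inside the container: the same nested mounts should show up under
# the rbind target.
pct exec 123 -- findmnt -R /storage/matters
```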

@quadcube you might want to check this out, if you were investigating 9P mounts.
 

stuckj

Member
Apr 9, 2019
I can verify that this also works in unprivileged containers. You will have to map IDs the same way you would for a mount using Proxmox's mp# syntax, though.

E.g.,
Code:
lxc.idmap: u 0 100000 65535
lxc.idmap: g 0 100000 65535

See the lxc man pages (https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html) for more info on lxc.idmap, or search the Proxmox forums; there are several threads about using lxc.idmap.
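For example, to keep the standard unprivileged shift but pass a single UID/GID straight through (uid 1000 here is just an illustration; use whatever actually owns the files on the dataset), the default map gets split around that ID, something like:

```
# In /etc/pve/lxc/<vmid>.conf:
# container uids 0-999    -> host 100000-100999
# container uid  1000     -> host 1000 (unshifted)
# container uids 1001+    -> host 101001+
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

Note that root on the host must also be allowed to map the passed-through ID, i.e. add a line like "root:1000:1" to both /etc/subuid and /etc/subgid.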
 
