[SOLVED] Noob question: I can't bind mount an ext4 folder to a privileged nested container

Rikard

New Member
Aug 2, 2019
Hi, I have a privileged nested container and I am struggling to get a bind mount working. For all my other containers I bind mount ZFS folders without issues.
The container starts fine, but when I save things in /mnt/download, the files are not saved in /mnt/pve/scratch/data. Instead, they are saved locally on the container's root disk.
I need to write from within the container to the host's local mount /mnt/pve/scratch.
Files are written to the device over NFS perfectly fine, so it's only the container that can't write to it as intended.
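A quick way to check where writes are actually landing is to compare device numbers: if the mount point inside the container sits on the same device as its parent directory, it is not a separate mount, and writes go to the container's root disk. A minimal sketch (the path below is a stand-in for the container's /mnt/download, chosen so it runs anywhere):

```shell
# Hypothetical path standing in for /mnt/download inside the container.
dir=/tmp/bindcheck
mkdir -p "$dir"

# stat -c %d prints the device number a path lives on.
dev_dir=$(stat -c %d "$dir")
dev_parent=$(stat -c %d "$(dirname "$dir")")

if [ "$dev_dir" = "$dev_parent" ]; then
    echo "same filesystem (not a separate mount)"
else
    echo "separate mount"
fi
```

Since the directory was just created on the same filesystem as its parent, this prints the "same filesystem" branch; a working bind mount at that path would print "separate mount".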

100.conf
Code:
root@riliprox:~# more 100.conf
arch: amd64
cores: 4
hostname: rtorrent
memory: 32768
mp0: /mnt/pve/scratch/data,mp=/mnt/download,ro=0
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=02:E1:77:ED:C3:C5,ip=192.168.1.147/24,type=veth
ostype: debian
rootfs: containers:subvol-100-disk-0,replicate=0,size=60G
swap: 512
features: nesting=1
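For reference, the mp0 line above is a bind mount because its first field is an absolute host path rather than a storage-backed volume ID. Its fields break down roughly as:

Code:
mp0: /mnt/pve/scratch/data,mp=/mnt/download,ro=0
#    host path (absolute path = bind mount, not a storage volume)
#                          mp= mount point inside the container
#                                            ro=0 = read-write

The same line can also be set from the CLI with pct set 100 -mp0 /mnt/pve/scratch/data,mp=/mnt/download (container ID 100 taken from the config above).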

mount
Code:
root@riliprox:~# mount |grep scratch
/dev/nvme0n1p1 on /mnt/pve/scratch type ext4 (rw,relatime)

NFS exports
Code:
root@riliprox:~# more /etc/exports
/mnt/pve/scratch/data 192.168.1.122/24(rw,all_squash,async,insecure,no_subtree_check)

storage.cfg
Code:
root@riliprox:/etc/pve# more storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

zfspool: containers
        pool storage/containers
        content images,rootdir
        sparse 0

dir: atastorecontback
        path /atastorage/backup/contback
        content vztmpl,rootdir,images,backup
        maxfiles 4
        shared 0

dir: scratch
        path /mnt/pve/scratch
        content images,backup,snippets,rootdir,vztmpl,iso
        is_mountpoint 1
        nodes riliprox
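The is_mountpoint 1 line matters here: Proxmox treats the dir storage as offline unless something is actually mounted at /mnt/pve/scratch, which guards against writing into the bare directory on the root disk. The same check can be done by hand with mountpoint(1); the helper name below is made up for illustration, and / is used in the call so the sketch runs anywhere:

```shell
# check_mounted is a hypothetical helper; mountpoint(1) does the real work.
check_mounted() {
    if mountpoint -q "$1"; then
        echo "$1 is a mountpoint"
    else
        echo "$1 is NOT mounted"
    fi
}

# On the node you would run: check_mounted /mnt/pve/scratch
check_mounted /
```

The root filesystem is always a mountpoint, so the call above prints "/ is a mountpoint".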

pveversion
Code:
root@riliprox:~# pveversion --verbose
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-6 (running version: 6.0-6/c71f879f)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-5.0.18-1-pve: 5.0.18-3
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve2
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-4
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-7
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-64
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-5
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
 
* Please post the complete output of mount from the node and from the running container.
* Is /mnt/pve/scratch a ZFS dataset or a different filesystem?
 
@Stoiko Ivanov Not sure what happened, but I rebooted the machine just after writing the post. It has worked perfectly ever since. This is a non-issue.
 
Glad it worked out!
Please mark the thread as 'SOLVED'.

Thanks!
 