NFS: mounting NFS on startup (lxc.mount.entry) not working

t_b

New Member
Nov 4, 2015
Hello,

I have multiple LXC containers sharing one directory on an NFS server that runs in another container.
While mounting manually works, mounting via lxc.mount.entry on the NFS client does not seem to work.
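
(For reference, "manual mounting" here means something along these lines inside the container; the exact command is an assumption, it is not quoted from the post.)

Code:
# run inside the container; options are assumed
mount -t nfs 10.0.2.20:/var/atlassian /var/atlassian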


NFS-Server: 10.0.2.20

NFS-Client config
Code:
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: atlassian-crowd
memory: 2048
nameserver: 10.0.2.2 
net0: bridge=vmbr3,gw=10.0.2.2,hwaddr=BA:1F:21:16:97:57,ip=10.0.2.22/24,name=eth0,type=veth
onboot: 1
ostype: debian
rootfs: local:222/vm-222-disk-1.raw,size=8G
searchdomain: somedomaint
swap: 1024
lxc.mount.entry: 10.0.2.20:/var/atlassian /var/atlassian nfs intr 0 0


Mounts of NFS-Client
Code:
root@atlassian-crowd:/opt/crowd# mount
/images/222/vm-222-disk-1.raw on / type ext4 (rw,relatime,data=ordered)
none on /dev type tmpfs (rw,relatime,size=100k,mode=755)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys/net type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys/devices/virtual/net type sysfs (rw,relatime)
sysfs on /sys/devices/virtual/net type sysfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,size=12k,mode=755)
tmpfs on /sys/fs/cgroup/cgmanager type tmpfs (rw,mode=755)
lxcfs on /proc/cpuinfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/diskstats type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/meminfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/stat type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/uptime type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/blkio type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/cpu,cpuacct type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/cpuset type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/devices type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/freezer type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/hugetlb type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/memory type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/systemd type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/net_cls,net_prio type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /sys/fs/cgroup/perf_event type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
devpts on /dev/console type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/tty1 type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/tty2 type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=2469012k,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5242860k)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
10.0.2.20:/var/atlassian on /var/atlassian type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.22,local_lock=none,addr=10.0.2.20)


Maybe I should also mention that the network 10.0.2.0/24 is a container-only network and the Proxmox host has no access to this network.
 
The network isn't up at the time the lxc mount entries are being mounted. You can try to work with bind mounts for which you need to prepare the nfs mountpoint on your host.
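
A minimal sketch of the host-side preparation, assuming the Proxmox host could reach the NFS server (the mount point path is just an example):

Code:
# on the Proxmox host: create a mount point for the share
mkdir -p /mnt/atlassian

# /etc/fstab entry on the host so the share is mounted at boot
10.0.2.20:/var/atlassian  /mnt/atlassian  nfs  defaults  0  0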
 
The network isn't up at the time the lxc mount entries are being mounted.

OK, that sounds logical.

You can try to work with bind mounts for which you need to prepare the nfs mountpoint on your host.

Hmm, can you explain this in more detail? What do I have to do on the NFS server and what on the client? And what are the to-dos on the host (Proxmox)?
 
Since I haven't seen a bind example with NFS, is this the correct syntax for the container:

Code:
lxc.aa_profile: unconfined
lxc.mount.entry: 10.0.2.20:/var/atlassian var/atlassian nfs  bind,create=dir,optional 0 0

Code:
Dec 13 09:57:22 proxmox kernel: [736791.256106] device veth222i0 entered promiscuous mode
Dec 13 09:57:22 proxmox kernel: [736791.305047] EXT4-fs (loop4): couldn't mount as ext2 due to feature incompatibilities
Dec 13 09:57:22 proxmox kernel: [736791.325523] audit: type=1400 audit(1449997042.685:255): apparmor="DENIED" operation="mount" profile="/usr/bin/lxc-start" name="/usr/lib/x86_64-linux-gnu/lxc/rootfs/var/atlassian/" pid=27187 comm="lxc-start" flags="rw, bind"
 
The recommended way to mount an NFS share in a container is to mount it on the host, and then pass it into the guest as a bind mount
(so you don't need to set
lxc.aa_profile: unconfined).

So add an fstab entry for the NFS share on your host, and pass the bind mount using the syntax

mp0: /home/nfs_mount_point_host,mp=/var/atlassian,backup=0
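
For example, with the share mounted on the host at /mnt/atlassian (path is an example), the bind mount can be applied to container 222 either by adding the mp0 line above to /etc/pve/lxc/222.conf or with pct:

Code:
# on the Proxmox host: bind-mount the host directory into container 222
pct set 222 -mp0 /mnt/atlassian,mp=/var/atlassian,backup=0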
 
Since the host and the NFS server/client aren't in the same network, this is not an option.
 
