Advice for file sharing between containers

alex3137

Member
Jan 19, 2015
43
3
8
Hello.

I am preparing my migration from Proxmox 3.4 to 4.1 and LXC.

I have ~10 OpenVZ containers that host multiple web apps, and some of them require access to a shared file system to read, copy or delete files. I set up an NFS gateway for that purpose, mounted on every container requiring access to the file share. It works great, especially with OpenVZ, as I can resize disks dynamically via the Proxmox web GUI.

Now moving to LXC, it seems I can create an NFS server in a container, but I cannot reduce the disk size (from the GUI) nor (apparently) mount NFS shares in another container. It looks like I need to mount the file share on the host and use bind mounts to access it from the containers (https://pve.proxmox.com/wiki/LXC_Bind_Mounts).
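For reference, the bind-mount approach from the wiki boils down to a single mount point entry in the container's config. The CTID (101) and paths below are made-up examples, not values from this thread:
Code:
```
# /etc/pve/lxc/101.conf -- 101 and both paths are example values
# The host directory /mnt/share (where the NFS export is mounted on the host)
# appears inside the container at /mnt/share.
mp0: /mnt/share,mp=/mnt/share
```
The same entry can be added from the host shell with `pct set 101 -mp0 /mnt/share,mp=/mnt/share`.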

I don't know if there is any other way to achieve file sharing between containers. I am looking for something simple to setup and maintain so feel free to share tips or suggest something :)
 

starnetwork

Active Member
Dec 8, 2009
375
4
38
My best suggestion for you is to stay on v3.4 for now.
 

wbumiller

Proxmox Staff Member
Staff member
Jun 23, 2015
647
88
48
You can explicitly allow NFS in containers by adding another AppArmor profile for them. (We are considering shipping one by default, but currently we do not.) Create the following file as /etc/apparmor.d/lxc/lxc-default-with-nfs:
Code:
# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # allow NFS (nfs/nfs4) mounts.
  mount fstype=nfs*,
}
Then reload the LXC profiles with:
Code:
# apparmor_parser -r /etc/apparmor.d/lxc-containers
Then use the following setting in the container's config:
Code:
lxc.aa_profile: lxc-container-default-with-nfs
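For anyone unsure where that line goes: on Proxmox, each container's config lives under /etc/pve/lxc/. With an example CTID of 101 (made up for illustration), the relevant part of the file would look like this:
Code:
```
# /etc/pve/lxc/101.conf -- 101 is an example CTID
# Select the custom AppArmor profile created above instead of the default.
lxc.aa_profile: lxc-container-default-with-nfs
```
The container must be restarted for the profile change to take effect.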
 

joebaires

New Member
May 20, 2015
6
1
3
I am trying to figure out why it is not allowed by default in the lxc-default-with-mounting profile.

Do you have any concerns about security?
 

wbumiller

Proxmox Staff Member
Staff member
Jun 23, 2015
647
88
48
Because it just allows ext*, xfs and btrfs mounts:
Code:
(...)
  mount fstype=ext*,
  mount fstype=xfs,
  mount fstype=btrfs,
}
 

joebaires

New Member
May 20, 2015
6
1
3
Yes, I understand that, and I have already added nfs to the default profile.

My question was whether you have any security concerns that made you decide not to allow it by default.
 

alex3137

Member
Jan 19, 2015
43
3
8
Thanks a lot wbumiller, that is exactly what I needed.

It would be nice to ship this profile by default and add an option in the GUI to enable NFS for a container.
 

wbumiller

Proxmox Staff Member
Staff member
Jun 23, 2015
647
88
48
NFS mounts can hang, and thereby cause sync()s to hang too. NFS doesn't time out by default, so any process that decides to be a nuisance and issues a global sync() will hang as well.

(Read: it can prevent your host from shutting down).

NFS is the only reason why `umount --force` exists. Come to think of it, when/if we add an option for this to the GUI it'll probably need more than just the VM.Config.Network permission... *sigh*
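As a side note (this is not from wbumiller's post): the hang risk described above can be reduced somewhat with soft-mount options, at the cost of possible I/O errors when the server is unreachable. The server address and paths below are made-up examples:
Code:
```
# Example fstab line: "soft" makes operations return an error after the
# retries are exhausted instead of hanging forever; timeo is in tenths of
# a second (150 = 15s per attempt), retrans is the retry count.
192.168.1.50:/export/share  /mnt/share  nfs  soft,timeo=150,retrans=3,_netdev  0  0
```
Whether soft mounts are acceptable depends on the workload; applications must tolerate I/O errors instead of indefinite blocking.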
 

alex3137

Member
Jan 19, 2015
43
3
8
Hmm, I see... I was going to suggest adding a similar option to the GUI to enable a tun device for an OpenVPN server in a container. But I understand this may be more complicated behind the scenes than just adding a few lines to a config file, and that there are implications I am not aware of.

Anyway, thanks for your support!
 

atc

New Member
Mar 16, 2017
4
0
1
23
This no longer appears to work on Proxmox 5, even after using 'lxc.apparmor.profile'.
 

Pest

New Member
Jun 3, 2017
4
0
1
34
Hi,
I'm trying to do something similar on Proxmox 5.1.
It showed "config error" with
Code:
lxc.aa_profile: lxc-container-default-with-nfs
It looks like the LXC key has changed, so I used
Code:
lxc.apparmor.profile: lxc-container-default-with-nfs
It no longer shows the error, but the container doesn't start either - the console never works and it doesn't respond to ping.
What am I missing?

My goal is to have an Ubuntu 16.04 container that automounts NFS at boot.
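[Editor's note: for the automount-at-boot goal, once NFS mounts are permitted by the AppArmor profile, the usual approach is a line in the container's own /etc/fstab. The server address and paths here are made-up examples, not from this thread.]
Code:
```
# Inside the container's /etc/fstab. "_netdev" tells the init system to
# wait for networking before attempting the mount. Replace the server
# address and paths with your own.
192.168.1.50:/export/share  /mnt/share  nfs4  defaults,_netdev  0  0
```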
 

finlaydag33k

Member
Apr 16, 2015
45
1
8
Yes, here is an example profile:
Code:
profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # allow NFS (nfs/nfs4) mounts.
  mount fstype=nfs*,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
}
 

Pedulla

New Member
Aug 1, 2017
15
1
3
Oregon, USA
Okay, not to leave the thread hanging: the 'mount fstype=cgroup' line above was never resolved and doesn't seem to make a difference.

As resolved in this thread, the answer is to add one of the following to your container's .conf file:
Code:
lxc.apparmor.profile: lxc-container-default-with-nfs
lxc.apparmor.profile: unconfined
I'm using PVE 5.2-5.
 
Jun 10, 2017
84
0
6
69
I'm also interested in this. Isn't there a better way of doing this than simply unconfining the container altogether? Isn't that a big security risk, and doesn't it mean giving up some of the important isolation between the container and the host?


[later edit]
Actually, for me it worked without the unconfined profile. Adding this line (mount fstype=cgroup -> /sys/fs/cgroup/**,) seems to have done the job. I'll report back after testing it on other containers as well.
 
