Advice for file sharing between containers

Discussion in 'Proxmox VE: Installation and configuration' started by alex3137, Jan 21, 2016.

  1. alex3137

    alex3137 Member

    Joined:
    Jan 19, 2015
    Messages:
    43
    Likes Received:
    3
    Hello.

    I am preparing my migration from Proxmox 3.4 to 4.1 and LXC.

    I have ~10 OpenVZ containers that host multiple web apps, and some of them require access to a shared file system to read, copy or delete files. For that purpose I configured an NFS gateway mounted on every container requiring access to the file share. It works great, especially with OpenVZ, as I can resize disks dynamically via the Proxmox web GUI.

    Now, moving to LXC, it seems I can create an NFS server in a container, but I cannot reduce the disk size (from the GUI), nor (apparently) mount NFS shares in another container. It looks like I need to mount the file share on the host and use bind mounts to access it from the containers (https://pve.proxmox.com/wiki/LXC_Bind_Mounts).

    I don't know if there is any other way to achieve file sharing between containers. I am looking for something simple to set up and maintain, so feel free to share tips or suggest something :)
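    For reference, the bind-mount approach from the wiki boils down to two pieces: a normal NFS mount on the host, plus a mount point entry in the container config. A minimal sketch (the server name, paths and container ID 101 below are placeholders, not from this thread):
    Code:
    # On the Proxmox host, mount the share once, e.g. via /etc/fstab:
    # nas.example.com:/export/shared  /mnt/shared  nfs4  defaults  0  0

    # Then expose it to a container by adding a mount point line
    # to /etc/pve/lxc/101.conf:
    mp0: /mnt/shared,mp=/mnt/shared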
     
  2. alex3137

    alex3137 Member

    Joined:
    Jan 19, 2015
    Messages:
    43
    Likes Received:
    3
    Thoughts, anyone?
     
  3. starnetwork

    starnetwork Member

    Joined:
    Dec 8, 2009
    Messages:
    363
    Likes Received:
    4
    My best suggestion for you is to stay on v3.4 for now.
     
  4. wbumiller

    wbumiller Proxmox Staff Member
    Staff Member

    Joined:
    Jun 23, 2015
    Messages:
    643
    Likes Received:
    82
    You can explicitly allow NFS in containers by adding another AppArmor profile for them. (We are considering shipping one by default, but currently this is not the case.) So create the following file as /etc/apparmor.d/lxc/lxc-default-with-nfs:
    Code:
    # Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
    # will source all profiles under /etc/apparmor.d/lxc
    
    profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
      #include <abstractions/lxc/container-base>
    
      # allow NFS (nfs/nfs4) mounts.
      mount fstype=nfs*,
    }
    Then reload the LXC profiles with:
    Code:
    # apparmor_parser -r /etc/apparmor.d/lxc-containers
    Then use the following setting in the container's config:
    Code:
    lxc.aa_profile: lxc-container-default-with-nfs
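    With that profile in place, the share can then be mounted from inside the container, e.g. via the container's own /etc/fstab. A sketch (the server name and paths are placeholders):
    Code:
    # /etc/fstab inside the container
    nas.example.com:/export/shared  /mnt/shared  nfs4  defaults  0  0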
     
    RobFantini and alex3137 like this.
  5. joebaires

    joebaires New Member

    Joined:
    May 20, 2015
    Messages:
    6
    Likes Received:
    1
    I am trying to figure out why it is not allowed by default in the lxc-default-with-mounting profile.

    Do you have any concerns about security?
     
  6. wbumiller

    wbumiller Proxmox Staff Member
    Staff Member

    Joined:
    Jun 23, 2015
    Messages:
    643
    Likes Received:
    82
    Because the default mounting profile only allows ext*, xfs and btrfs mounts:
    Code:
    (...)
      mount fstype=ext*,
      mount fstype=xfs,
      mount fstype=btrfs,
    }
     
  7. joebaires

    joebaires New Member

    Joined:
    May 20, 2015
    Messages:
    6
    Likes Received:
    1
    Yes, I understand that and already added nfs to the default profile.

    My question was whether you had any security concerns, and whether that is why you didn't allow it by default.
     
  8. alex3137

    alex3137 Member

    Joined:
    Jan 19, 2015
    Messages:
    43
    Likes Received:
    3
    Thanks a lot wbumiller, it is exactly what I needed.

    It would be nice to ship this profile by default and add an option in the GUI to enable NFS for a container.
     
  9. wbumiller

    wbumiller Proxmox Staff Member
    Staff Member

    Joined:
    Jun 23, 2015
    Messages:
    643
    Likes Received:
    82
    NFS mounts can hang and thereby cause sync()s to hang, too. It doesn't time out by default. So any process that decides to be a nuisance and does a global sync() will then hang, too.

    (Read: it can prevent your host from shutting down).

    NFS is the only reason why `umount --force` exists. Come to think of it, when/if we add an option for this to the GUI it'll probably need more than just the VM.Config.Network permission... *sigh*
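    One way to limit the hang risk described above is to use soft-mount options, so requests eventually fail instead of blocking forever. A sketch with illustrative values (note that soft mounts trade hangs for possible I/O errors on a flaky network, which can matter for writes):
    Code:
    # /etc/fstab: timeo is in tenths of a second, so timeo=100 = 10 s per retry;
    # after retrans=3 retries, the request fails instead of hanging.
    nas.example.com:/export/shared  /mnt/shared  nfs4  soft,timeo=100,retrans=3  0  0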
     
  10. alex3137

    alex3137 Member

    Joined:
    Jan 19, 2015
    Messages:
    43
    Likes Received:
    3
    Hmm, I see... I wanted to suggest adding a similar GUI option to enable a tun device for an OpenVPN server in a container. But I understand this can be more complicated behind the scenes than just adding some lines to a config file, and there may be implications I am not aware of.

    Anyway, thanks for your support!
     
  11. atc

    atc New Member

    Joined:
    Mar 16, 2017
    Messages:
    4
    Likes Received:
    0
    This no longer appears to work on Proxmox 5, even after using 'lxc.apparmor.profile'.
     
  12. joebaires

    joebaires New Member

    Joined:
    May 20, 2015
    Messages:
    6
    Likes Received:
    1
    I have it working (mounting a remote filesystem via NFS) using 5.0-23, with this in the .conf file:

    lxc.aa_profile: unconfined
     
    Ivan Gonzalez likes this.
  13. Ivan Gonzalez

    Ivan Gonzalez Member

    Joined:
    Jan 20, 2014
    Messages:
    76
    Likes Received:
    0

    Confirmed: lxc.aa_profile: unconfined worked for me on PVE 4.4.
     
  14. Pest

    Pest New Member

    Joined:
    Jun 3, 2017
    Messages:
    4
    Likes Received:
    0
    Hi,
    I'm trying to do something similar on Proxmox 5.1.
    It showed "config error" with
    Code:
    lxc.aa_profile: lxc-container-default-with-nfs
    Looks like the LXC key has been changed, so I've used
    Code:
    lxc.apparmor.profile: lxc-container-default-with-nfs
    It doesn't show the error, but the container doesn't start either: the console never works, and ping gets no response.
    What am I missing?

    My goal is to have Ubuntu 16.04 container with automounted NFS at boot.
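    For the automount-at-boot goal, the usual approach (once the profile allows NFS mounts) is an fstab entry inside the container. A sketch with placeholder server and paths:
    Code:
    # /etc/fstab in the Ubuntu 16.04 container:
    # _netdev waits for the network; nofail keeps boot going if the server is down.
    nas.example.com:/export/data  /mnt/data  nfs4  defaults,_netdev,nofail  0  0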
     
  15. Patrik Dufresne

    Patrik Dufresne New Member

    Joined:
    Jan 1, 2017
    Messages:
    10
    Likes Received:
    0
  16. finlaydag33k

    finlaydag33k Member

    Joined:
    Apr 16, 2015
    Messages:
    45
    Likes Received:
    1
    Add `mount fstype=cgroup -> /sys/fs/cgroup/**,` to your profile file and you should be good.
     
  17. Pedulla

    Pedulla New Member

    Joined:
    Aug 1, 2017
    Messages:
    15
    Likes Received:
    1
    Can someone throw me a bone and be a little more explicit here?

    Is "/sys/fs/cgroup/**," supposed to be a file?
     
  18. finlaydag33k

    finlaydag33k Member

    Joined:
    Apr 16, 2015
    Messages:
    45
    Likes Received:
    1
    Yes, here is an example profile:
    Code:
    profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
      #include <abstractions/lxc/container-base>
    
      # allow NFS (nfs/nfs4) mounts.
      mount fstype=nfs*,
      mount fstype=cgroup -> /sys/fs/cgroup/**,
    }
     
  19. Pedulla

    Pedulla New Member

    Joined:
    Aug 1, 2017
    Messages:
    15
    Likes Received:
    1
    Okay, not to leave the thread hanging: the 'mount fstype=cgroup' line above was never resolved and doesn't seem to make a difference.

    As resolved in this thread, the answer is to add one of the following lines to your container's .conf file:
    Code:
    lxc.apparmor.profile: lxc-container-default-with-nfs
    lxc.apparmor.profile: unconfined
    I'm using PVE 5.2-5.
     
  20. lethargos

    lethargos Member
    Proxmox Subscriber

    Joined:
    Jun 10, 2017
    Messages:
    84
    Likes Received:
    0
    I'm also interested in this. Isn't there a better way of doing this than simply unconfining the container altogether? Isn't that a big security risk, and doesn't it mean giving up some of the important isolation between the container and the host?


    [later edit]
    Actually, for me it worked without the unconfined profile. Adding this (mount fstype=cgroup -> /sys/fs/cgroup/**,) seems to have done the job. I'll get back to you after testing it on other containers as well.
     
    #20 lethargos, Nov 28, 2018
    Last edited: Nov 28, 2018