[TUTORIAL] Tutorial: Unprivileged LXCs - Mount CIFS shares

Hi, thanks for this tutorial. I followed it and was able to mount my OMV NAS within my Plex container. I can see all of the files mounted under /mnt/nas. However, within Plex the files don't show up. What could be the issue?

Having the same issue: it's mounted in my Sonarr and Radarr containers, but I can't get it to mount in Plex.
 
Can you mount the NAS with NFS directly inside the LXC container instead? We already had the problem that the NAS was not accessible, and after a reboot of the PVE host the server no longer started because it was looking for the NAS in fstab.
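
Something like nofail on the host's fstab entry might avoid the boot hang (a sketch, untested here; host, share, and credentials are placeholders):

Code:
# /etc/fstab on the PVE host -- nofail keeps boot going if the NAS is down,
# x-systemd.automount defers the mount to first access with a bounded timeout
//NAS/share /mnt/lxc_shares/nas_rwx cifs _netdev,nofail,x-systemd.automount,x-systemd.mount-timeout=30,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=smb_username,pass=smb_password 0 0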
 
Doesn't this method expose root access on the PVE host from the LXC?

I don't understand, why would it?
Please share your thoughts!

It only grants access to the files that have the correct permissions and are mounted into the LXC.
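
You can check the mapping from inside the container; on a default unprivileged LXC even container root is only uid 100000 on the host, so it can only touch files whose host-side owner and mode allow that id (output assumes the default Proxmox mapping):

Code:
# run inside the unprivileged LXC -- container uid 0 maps to host uid 100000
cat /proc/self/uid_map
#          0     100000      65536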
 

SOLUTION (I think) to the 'can't access media libraries via the Jellyfin web admin' problem. There are a few similar posts in this thread where people are stuck.

After going around in circles updating the files and permissions mentioned in the guide, I ticked the following in the PVE GUI for the LXC. From then on, the Jellyfin web UI was able to see past /mnt/nas into the media.

The key bit missing for me appears to have been setting Mount Options = nosuid (ignore set-user-ID bits)

Working config - lxc.conf

mp0: /mnt/cifs_shares/nas_ro/,mp=/mnt/nas,mountoptions=nosuid,ro=1
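
The same line can also be set from the PVE shell instead of editing the file by hand (this should be equivalent; LXC_ID is a placeholder):

Code:
pct set LXC_ID -mp0 /mnt/cifs_shares/nas_ro/,mp=/mnt/nas,mountoptions=nosuid,ro=1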


Do you by any chance run docker inside that LXC?
I think the only people with access problems are doing this.

For me there have never been any issues, no matter which application inside the LXC needs access to the share.
I am, however, not using docker; instead I install those applications (e.g. Plex) directly inside the LXC.
 

Firstly - thank you for writing up the tutorial. It is clearly written and easy to follow. I just got stuck when things did not go exactly as specified, as I'm not a super expert.

_______________
No docker in use.
Jellyfin LXC set up using the script from tteck.github.io/Proxmox/. SMB/CIFS served from a TrueNAS VM on the same host and network bridge.

The issue was that the LXC jellyfin user (via the Jellyfin web-admin portal) could not see past the /mnt/nas/ folder,
while the root user (in the terminal) could see the media files past /mnt/nas/. This likely means the UIDs/GIDs between the LXC and the host were not aligned.

Tried a few ways to change the permissions (chown, chmod, raising the permission levels in fstab up to 777, etc.) but no dice.
I was about to investigate the UID/GID mappings that you can set between the LXC and the host, though I managed to get it working before that.

Complicating factors:
* CIFS user ('user_video') is read-only in TrueNAS... maybe the lack of the execute bit for directory lookups was a sticking point.

Around and around I went... could not get the Jellyfin web-admin portal to see the media.

Then I stumbled across the mount point in the PVE LXC settings and ticked the following.
* Read Only = ticked
* Mount Options = nosuid (ignore set-user-ID bits)

This resulted in the following config line in LXC.conf
mp0: /mnt/cifs_shares/nas_ro/,mp=/mnt/nas,mountoptions=nosuid,ro=1

____________________________________________________

Config Notes File

1. In the LXC (run commands as root user)

  1. groupadd -g 10000 cifs_shares

  2. usermod -aG cifs_shares jellyfin

  3. Shutdown the LXC.
2. On the PVE host (run commands as root user)
  1. mkdir -p /mnt/cifs_shares/nas_ro

  2. { echo '' ; echo '# Mount CIFS share on demand with ro permissions for use in LXCs (manually added)' ; echo '//192.168.1.XX/Video/ /mnt/cifs_shares/nas_ro cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0664,file_mode=0664,user=XXXXXX,pass=XXXXXX 0 0' ; } | tee -a /etc/fstab

  3. mount /mnt/cifs_shares/nas_ro

    systemctl daemon-reload

  4. [1st try] { echo 'mp0: /mnt/cifs_shares/nas_ro/,mp=/mnt/nas,ro=1' ; } | tee -a /etc/pve/lxc/XXX.conf

    [Nth try]
    Add to /etc/pve/lxc/XXX.conf
    mp0: /mnt/cifs_shares/nas_ro/,mp=/mnt/nas,mountoptions=nosuid,ro=1

  5. Start the LXC
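
One note on the fstab line above: dir_mode=0664 omits the execute bit that directories need for traversal, so the tutorial's dir_mode=0770 is probably the safer choice. Either way, you can confirm which options actually took effect on both sides (paths as in the notes above):

Code:
# on the PVE host -- effective options of the CIFS mount
findmnt -no OPTIONS /mnt/cifs_shares/nas_ro

# inside the LXC -- the bind mount should list ro and nosuid
findmnt -no OPTIONS /mnt/nas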
 
I was playing around with some IP settings and changed my NAS IP. Due to the nature of this bind mount, the Jellyfin server would fail to start because of the missing mp0. Lol, you don't want to know how long it took me to figure out why this broke... =D

My question would be: can I change this behavior so that the LXC still boots regardless of whether it sees the bind mount? I know it would be kind of silly to have a Jellyfin server with nothing to serve, but in the name of science!
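
I don't know of a supported 'optional' flag for mp entries, but Proxmox does support per-container hookscripts, so a pre-start hook can at least surface the problem immediately instead of leaving you guessing. A sketch only; the script name and share path are placeholders:

Code:
#!/bin/bash
# /var/lib/vz/snippets/check-nas.sh -- illustrative pre-start hook
# register it with: pct set LXC_ID --hookscript local:snippets/check-nas.sh
vmid="$1"; phase="$2"
if [ "$phase" = "pre-start" ]; then
    # bounded check so a dead NAS cannot hang container startup
    if ! timeout 5 mountpoint -q /mnt/lxc_shares/nas_rwx; then
        echo "CT $vmid: NAS share is not mounted; the mp0 bind will fail or be empty" >&2
    fi
fi
exit 0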
 
I used the same script, but for Plex. Unfortunately this didn't work for me. I still cannot see any files in the folder on the Plex LXC.
Thanks a lot for this awesome tutorial. I managed to do all the steps and can see my CIFS share with all of its contents in the mounted folder from the LXC container's command line. But somehow the app in the LXC itself (in my case Plex) does not see subfolders or files inside the mounted folder. I suspect something with the permissions is not correct, but I am still very much a beginner with Linux and Proxmox. I hope someone can help me:
my fstab file on the PVE looks like this:
Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=E88B-1FA2 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

# Mount CIFS share on demand with rwx permissions for use in LXCs (manually added)
//192.168.1.37/media /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=XXX,pass=XXX 0 0

# Mount CIFS share on demand with rwx permissions for use in LXCs (manually added)
//192.168.1.37/torrents /mnt/lxc_shares/torrents_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=XXX,pass=XXX 0 0

the conf file of the container:
Code:
## Plex LXC
arch: amd64
cores: 2
features: nesting=1
hostname: plex
memory: 2048
mp0: /mnt/lxc_shares/nas_rwx/,mp=/mnt/nas
net0: name=eth0,bridge=vmbr0,hwaddr=5A:F2:89:C9:C3:CE,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-101-disk-0,size=8G
swap: 512
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
The following I see in the LXC container:
Code:
root@plex:/# groups plex
plex : plex video syslog lxc_shares
root@plex:/# cd mnt/nas
root@plex:/mnt/nas# ls -la
total 4
drwxrwx--- 2 100000 110000    0 May  9 09:35 .
drwxr-xr-x 3 root   root   4096 May 11 09:39 ..
drwxrwx--- 2 100000 110000    0 May  8 15:19 books
drwxrwx--- 2 100000 110000    0 May  8 05:23 movies
drwxrwx--- 2 100000 110000    0 May  7 14:26 tvshows

But if I try to access these folders in the app, the nas folder is empty.
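
A quick way to see what the application user can actually read (run inside the LXC; this assumes the service account is called plex and su is available):

Code:
# list the share as the plex service user instead of root
su -s /bin/bash plex -c 'ls -la /mnt/nas/movies'
# "Permission denied" here, while root can list the same path,
# points at the uid/gid mapping or dir_mode/file_mode rather than at Plex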


I have this exact problem on the Plex LXC from the LXC helper scripts, and I cannot seem to fix it no matter what I do. Did you manage to find the solution?
 
Better not to use an LXC mount point. Just mount the volume under the container's disk on the host. It seems to be a lot more reliable and avoids the extra processing and configuration overhead.
 
I am having the EXACT same problem as Dragons. I did have it working at one point and I don't know what changed. I have virtually all permissions turned off in Turnkey File Server, so anyone can walk right in and start watching any of the content... except, apparently, literally any media server I install. I have a strong grasp of Linux and I feel like I understand what we are trying to do here with the permissions, but I can't get it to work.

I'm going to blow away my file server, I guess. I don't know what else to do besides throwing grenades at my server and hoping for the best.
 
SOLVED THIS. SIMPLE CONFIG ERRORS ON MY PART
Related to permissions on renderD128 and the "VLAN Aware" setting on the network bridge in Proxmox.


Thank you @TheHellSite for getting this tutorial out there for us all to learn from.

I followed it and had it working great on Proxmox VE 7.4-16 for Frigate NVR, with a Coral TPU passed through over USB as well.

Time to move to a new machine, so I took a fresh backup of the LXC and restored it to a new Proxmox VE 8.1.10 install (without starting the LXC yet). Then I followed the steps below on the PVE host.
2. On the PVE host (run commands as root user)
  1. Create the mount point on the PVE host.
    mkdir -p /mnt/lxc_shares/nas_rwx
  2. Add NAS CIFS share to /etc/fstab.
    _netdev Forces systemd to consider the mount unit a network mount.
    x-systemd.automount Automatically remounts the CIFS share in case the NAS went offline for some time.
    noatime Access timestamps are not updated when a file/folder is read.
    uid=100000,gid=110000 See part "How does it work?" paragraph two for explanation.
    dir_mode=0770,file_mode=0770 Only that uid/gid will have rwx access to the share. (PVE root user always has rwx to everything.)
    !!! Adjust //NAS/nas/ in the middle of the command to match your CIFS hostname (or IP) //NAS/ and the share name /nas/. !!!
    !!! Adjust user=smb_username,pass=smb_password at the end of the command. !!!

    Code:
    { echo '' ; echo '# Mount CIFS share on demand with rwx permissions for use in LXCs (manually added)' ; echo '//NAS/nas/ /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=smb_username,pass=smb_password 0 0' ; } | tee -a /etc/fstab
  3. Mount the share on the PVE host.
    mount /mnt/lxc_shares/nas_rwx
  4. Add a bind mount of the share to the LXC config.
    !!! Adjust the LXC_ID at the end of the command. !!!
    Code:
    You can mount it in the LXC with read+write+execute (rwx) permissions.
    { echo 'mp0: /mnt/lxc_shares/nas_rwx/,mp=/mnt/nas' ; } | tee -a /etc/pve/lxc/LXC_ID.conf
    
    You can also mount it in the LXC with read-only (ro) permissions.
    { echo 'mp0: /mnt/lxc_shares/nas_rwx/,mp=/mnt/nas,ro=1' ; } | tee -a /etc/pve/lxc/LXC_ID.conf
  5. Start the LXC. <----LXC WON'T START
I assume the Section 1 steps from your tutorial would already be done inside the restored LXC.

Since the LXC wouldn't start, I rebooted Proxmox VE. After the reboot the mount works, and I can view all the files from the Proxmox shell just like I could on the old machine. When I try to start the LXC, it fails with this error in the system log.

Code:
Apr 11 19:26:09 pve3 pvedaemon[13478]: starting CT 506: UPID:pve3:000034A6:0004FCF6:66189BC1:vzstart:506:root@pam:
Apr 11 19:26:09 pve3 pvedaemon[1190]: <root@pam> starting task UPID:pve3:000034A6:0004FCF6:66189BC1:vzstart:506:root@pam:
Apr 11 19:26:09 pve3 pvedaemon[13478]: explicitly configured lxc.apparmor.profile overrides the following settings: features:nesting, features:mount
Apr 11 19:26:09 pve3 systemd[1]: Started pve-container@506.service - PVE LXC Container: 506.
Apr 11 19:26:10 pve3 pvedaemon[13478]: startup for container '506' failed
Apr 11 19:26:10 pve3 pvedaemon[1190]: <root@pam> end task UPID:pve3:000034A6:0004FCF6:66189BC1:vzstart:506:root@pam: startup for container '506' failed
Apr 11 19:26:10 pve3 pvedaemon[1191]: unable to get PID for CT 506 (not running?)
Apr 11 19:26:11 pve3 systemd[1]: pve-container@506.service: Main process exited, code=exited, status=1/FAILURE
Apr 11 19:26:11 pve3 systemd[1]: pve-container@506.service: Failed with result 'exit-code'.

From that error, what did I miss?

Some things which are different:
  • I am using ZFS for the PVE boot drive and CT Volume now instead of ext4
  • CPU is now i7-1185G7 instead of i5-12400, but I don't think that is the issue


UPDATE:
I found my problem. A stupid oversight on my part, really. I forgot that I had to give permissions to /dev/dri/renderD128 in order to share the iGPU with Frigate, and on top of that, an even sillier error: I didn't make the Linux bridge VLAN aware, and this LXC runs on a tagged VLAN.
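
For anyone hitting the same pair of issues, the relevant pieces look roughly like this. Device numbers, the container ID, addresses, and the NIC name are from my setup, so adjust them to yours:

Code:
# /etc/pve/lxc/506.conf -- pass the iGPU render node into the LXC
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

# /etc/network/interfaces -- make the bridge VLAN aware so a tagged CT can talk
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094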
 
Hello guys! Thanks for this tutorial, I hope it helps me!

According to the step number 2:

Add the user(s) that need access to the CIFS share to the group "lxc_shares".
f.e.: jellyfin, plex, ... (the username depends on the application)
usermod -aG lxc_shares USERNAME

You have to execute usermod -aG lxc_shares USERNAME with the username for Sonarr, Radarr, etc. In my case the Sonarr user is "abc", and it does not exist in the LXC but in the Sonarr docker container.

Should I create a user in the LXC called 'abc' and then add it to the group using usermod -aG lxc_shares abc? Is that correct?

EDIT: I did what @bnhf said here and it worked for me. In summary:

PVE Host /etc/fstab:
Code:
//192.168.0.50/video /mnt/video cifs credentials=/root/.credentials,_netdev,x-systemd.automount,noatime,uid=101000,gid=101000,dir_mode=0770,file_mode=0770  0 0

/etc/pve/lxc/LXC_ID.conf:
Code:
mp0: /mnt/video/,mp=/mnt/video

And in the docker compose for Sonarr something like this (pay attention to PUID and PGID):

Code:
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=1000
      - PGID=1100
      - TZ=Etc/UTC
    volumes:
      - /data/sonarr/config:/config
      - /mnt/video:/mnt/video
    ports:
      - 8989:8989
    restart: unless-stopped
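
The arithmetic behind those numbers, assuming the default unprivileged idmap with its 100000 offset:

Code:
# host_id = 100000 + container_id      (default unprivileged LXC offset)
# PUID=1000 in the docker container -> uid 1000 inside the LXC
#                                   -> uid 101000 on the PVE host,
# which is why the host fstab entry above uses uid=101000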
 
