Hi, thanks for this tutorial. I followed it and was able to mount my OMV NAS within my Plex container. I can see all of the files mounted under /mnt/nas. However, within Plex the files don't show up. What could be the issue?
Can you mount the NAS with NFS directly inside the LXC container? We already ran into the problem that the NAS was not accessible, and after a reboot of the PVE host the server no longer started because it was looking for the NAS entry in fstab.
SOLUTION (I think) to the 'can't access media libraries via the Jellyfin web admin' issue. There are a few similar posts on this thread where people are stuck.
After going around in circles updating the files and permissions mentioned in the guide, I ticked the following in the PVE GUI for the LXC. From then on, the Jellyfin web UI was able to see past /mnt/nas into the media.
The key bit missing for me appears to have been setting Mount Options = nosuid.
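In config terms, that tick corresponds to a line like this in the LXC config (full details in my longer post below):
Code:
mp0: /mnt/cifs_shares/nas_ro/,mp=/mnt/nas,mountoptions=nosuid,ro=1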
Do you by any chance run Docker inside that LXC?
I think the only people with access problems are doing this.
For me there have never been any issues, no matter which application inside the LXC needs access to the share.
I am, however, not using Docker; instead I install those applications (e.g. Plex) directly inside the LXC.
Firstly, thank you for writing up the tutorial. It is clearly written and easy to follow. I just got stuck when things did not go exactly as specified, as I'm not a super expert.
_______________
No docker in use.
The Jellyfin LXC was set up with the script from tteck.github.io/Proxmox/. SMB/CIFS is served from a TrueNAS VM on the same host and network bridge.
The issue was that the LXC jellyfin user (via the Jellyfin web-admin portal) could not see past the /mnt/nas/ folder,
while the root user (in a terminal) could see the media files past /mnt/nas/. This likely means the UIDs/GIDs between the LXC and host were not aligned.
I tried a few ways to change the permissions (chown, chmod, raising the permission levels in fstab up to 777) etc., but no dice.
I was about to investigate the UID/GID mappings that you can set between the LXC and host, but managed to get it working before that.
Complicating factors:
* The CIFS user ('user_video') is read-only in TrueNAS... maybe the lack of execute permission for directory lookups was a sticking point.
Around and around I went... I could not get the Jellyfin web-admin portal to see the media.
Then I stumbled across the mount point in the PVE LXC settings and ticked the following:
* Read Only = ticked
* Mount Options = nosuid
This resulted in the following config line in the LXC config:
Code:
mp0: /mnt/cifs_shares/nas_ro/,mp=/mnt/nas,mountoptions=nosuid,ro=1
{ echo '' ; echo '# Mount CIFS share on demand with ro permissions for use in LXCs (manually added)' ; echo '//192.168.1.XX/Video/ /mnt/cifs_shares/nas_ro cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0664,file_mode=0664,user=XXXXXX,pass=XXXXXX 0 0' ; } | tee -a /etc/fstab
mount /mnt/cifs_shares/nas_ro
systemctl daemon-reload
[1st try] { echo 'mp0: /mnt/cifs_shares/nas_ro/,mp=/mnt/nas,ro=1' ; } | tee -a /etc/pve/lxc/XXX.conf
[Nth try]
Add to /etc/pve/lxc/XXX.conf
mp0: /mnt/cifs_shares/nas_ro/,mp=/mnt/nas,mountoptions=nosuid,ro=1
I was playing around with some IP settings and changed my NAS IP. Due to the nature of this bind mount, the Jellyfin server would fail to start because of the missing mp0. Lol, you don't want to know how long it took me to figure out why this broke... =D
My question would be: can I change this behavior so that the LXC still boots regardless of whether it sees the bind mount? I know it would be kind of silly to have a Jellyfin server with nothing to serve, but in the name of science!
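Not something I have tested, but plain LXC mount entries support an "optional" flag that makes a failed mount non-fatal, so replacing the managed mp0 line with a raw entry might let the container boot without the share. A sketch for /etc/pve/lxc/XXX.conf, using the paths from the posts above; note that raw lxc.mount.entry lines bypass the PVE tooling (GUI, backup, replication), so treat this as an experiment:
Code:
# hypothetical: bind mount the host path, but do not fail container start if it is missing
lxc.mount.entry: /mnt/cifs_shares/nas_ro mnt/nas none bind,ro,optional 0 0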
Thanks a lot for this awesome tutorial. I managed to do all the steps and can see my CIFS shared folder with all its contents in the mounted folder from the LXC container command line. But somehow the app in the LXC itself (in my case Plex) does not see subfolders or files inside the mounted folder. I suspect something with the permissions is not correct, but I am still very much a beginner with Linux and Proxmox. I hope someone can help me.
my fstab file on the PVE looks like this:
Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=E88B-1FA2 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
# Mount CIFS share on demand with rwx permissions for use in LXCs (manually added)
//192.168.1.37/media /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=XXX,pass=XXX 0 0
# Mount CIFS share on demand with rwx permissions for use in LXCs (manually added)
//192.168.1.37/torrents /mnt/lxc_shares/torrents_rwx cifs _netdev,x-systemd.automount,noatime,uid==100000,gid=110000,dir_mode=0770,file_mode=0770,user=XXX,pass=XXX 0 0
I have this exact problem with the Plex LXC from the LXC helper scripts; I cannot seem to fix it no matter what I do. Did you manage to find the solution?
Better not to use an LXC mount point. Just mount the volume under the container's disk on the host. It seems to be a lot more reliable and avoids the extra processing and configuration overhead.
I am having the EXACT same problem as Dragons. I did have it working at one point and I don't know what changed. I have virtually all permissions turned off in TurnKey File Server, to the point that anyone can walk right in and start watching any of the content, except apparently for literally any media server I install. I have a strong grasp of Linux and I feel like I understand what we are trying to do here with the permissions, but I can't get it to work.
I'm going to blow away my file server, I guess. I don't know what else to do besides throwing grenades at my server and hoping for the best.
SOLVED THIS. SIMPLE CONFIG ERRORS ON MY PART.
Related to permissions on renderD128 and the "VLAN Aware" setting on the network bridge in Proxmox.
Thank you @TheHellSite for getting this tutorial out there for us all to learn from.
I followed, and had it working great on Proxmox VE 7.4-16 for Frigate NVR, with Coral TPU USB passed through as well.
Time to move to a new machine, so I ran a fresh backup of the LXC, restored it to a new Proxmox VE 8.1.10 (without starting the LXC yet). Then I followed the steps below on the PVE Host.
Create the mount point on the PVE host. mkdir -p /mnt/lxc_shares/nas_rwx
Add NAS CIFS share to /etc/fstab.
* _netdev: forces systemd to consider the mount unit a network mount.
* x-systemd.automount: automatically remounts the CIFS share in case the NAS went offline for some time.
* noatime: access timestamps are not updated when a file/folder is read.
* uid=100000,gid=110000: see the "How does it work?" section, paragraph two, for an explanation.
* dir_mode=0770,file_mode=0770: only that uid/gid will have rwx access to the share. (The PVE root user always has rwx to everything.)
!!! Adjust //NAS/nas/ in the middle of the command to match your CIFS hostname (or IP) //NAS/ and the share name /nas/. !!!
!!! Adjust user=smb_username,pass=smb_password at the end of the command. !!!
Code:
{ echo '' ; echo '# Mount CIFS share on demand with rwx permissions for use in LXCs (manually added)' ; echo '//NAS/nas/ /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=smb_username,pass=smb_password 0 0' ; } | tee -a /etc/fstab
Mount the share on the PVE host. mount /mnt/lxc_shares/nas_rwx
Add a bind mount of the share to the LXC config. !!! Adjust the LXC_ID at the end of the command. !!!
Code:
You can mount it in the LXC with read+write+execute (rwx) permissions.
{ echo 'mp0: /mnt/lxc_shares/nas_rwx/,mp=/mnt/nas' ; } | tee -a /etc/pve/lxc/LXC_ID.conf
You can also mount it in the LXC with read-only (ro) permissions.
{ echo 'mp0: /mnt/lxc_shares/nas_rwx/,mp=/mnt/nas,ro=1' ; } | tee -a /etc/pve/lxc/LXC_ID.conf
I assume the Section 1 steps from your tutorial would already be done inside the restored LXC.
Since the LXC wouldn't start, I rebooted Proxmox VE. After the reboot the mount works, and I can view all the files from the Proxmox shell just like I could on the old machine. When I try to start the LXC, it fails with this error in the system log.
Code:
Apr 11 19:26:09 pve3 pvedaemon[13478]: starting CT 506: UPID:pve3:000034A6:0004FCF6:66189BC1:vzstart:506:root@pam:
Apr 11 19:26:09 pve3 pvedaemon[1190]: <root@pam> starting task UPID:pve3:000034A6:0004FCF6:66189BC1:vzstart:506:root@pam:
Apr 11 19:26:09 pve3 pvedaemon[13478]: explicitly configured lxc.apparmor.profile overrides the following settings: features:nesting, features:mount
Apr 11 19:26:09 pve3 systemd[1]: Started pve-container@506.service - PVE LXC Container: 506.
Apr 11 19:26:10 pve3 pvedaemon[13478]: startup for container '506' failed
Apr 11 19:26:10 pve3 pvedaemon[1190]: <root@pam> end task UPID:pve3:000034A6:0004FCF6:66189BC1:vzstart:506:root@pam: startup for container '506' failed
Apr 11 19:26:10 pve3 pvedaemon[1191]: unable to get PID for CT 506 (not running?)
Apr 11 19:26:11 pve3 systemd[1]: pve-container@506.service: Main process exited, code=exited, status=1/FAILURE
Apr 11 19:26:11 pve3 systemd[1]: pve-container@506.service: Failed with result 'exit-code'.
I am using ZFS for the PVE boot drive and CT volume now instead of ext4.
The CPU is now an i7-1185G7 instead of an i5-12400, but I don't think that is the issue.
UPDATE:
I found my problem. A stupid oversight on my part, really. I forgot that I had to give permissions to /dev/dri/renderD128 in order to share the iGPU with Frigate, and on top of that, an even sillier error: I didn't make the Linux bridge VLAN aware, and this LXC runs on a tagged VLAN.
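For anyone hitting the same renderD128 issue, the fix usually comes down to two lines in the LXC config; a sketch (confirm the 226:128 major/minor numbers on your host with ls -l /dev/dri):
Code:
# allow the container access to the render node (char device, major 226, minor 128)
lxc.cgroup2.devices.allow: c 226:128 rwm
# bind mount the device into the container
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file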
Hello guys! Thanks for this tutorial, I hope it helps me!
According to the step number 2:
Add the user(s) that need access to the CIFS share to the group "lxc_shares".
e.g.: jellyfin, plex, ... (the username depends on the application)
usermod -aG lxc_shares USERNAME
You have to execute usermod -aG lxc_shares USERNAME with the username for Sonarr, Radarr, etc. In my case the Sonarr user is "abc", and it does not exist in the LXC but in the Sonarr Docker container.
Should I create a user in the LXC called 'abc' and then add it to the group using usermod -aG lxc_shares abc? Is that correct?
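One pattern I have seen suggested for this situation, sketched under the assumption that the image follows the linuxserver.io PUID/PGID convention (so this is not necessarily the exact approach from the post referenced below):
Code:
# inside the LXC: create the user the container will map to and add it to lxc_shares
useradd -u 1000 abc
usermod -aG lxc_shares abc
# then hand the matching IDs to the docker container, e.g. in docker-compose.yml:
#   environment:
#     - PUID=1000    # uid of 'abc' in the LXC
#     - PGID=10000   # gid of lxc_shares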
EDIT: I did what @bnhf said here and it worked for me. In summary:
Hi, sorry this is late, but your guide has worked very well for me. However, I have a problem which I would love to solve. Yesterday when I set it up everything was going fine, but if I shut down the Proxmox host and restart it, it does not mount my
Bash:
/mnt/jellyfin_share
share, so the LXC (which I have configured to start during boot) does not start and gives this error:
Code:
run_buffer: 571 Script exited with status 19
lxc_init: 845 Failed to run lxc.hook.pre-start for container "104"
__lxc_start: 2034 Failed to initialize container "104"
TASK ERROR: startup for container '104' failed
and I think the error is because of that. If I then run on the host:
Bash:
umount /mnt/jellyfin_share
mount /mnt/jellyfin_share
it mounts without problems, and if I run df -h I can see the mounted share, and if I then turn on the LXC it gives no error. How can I fix this?
I modified it following this post, which said it helps the share mount when accessed and unmount after 30 seconds of inactivity, but it doesn't work. Right after starting the Proxmox host, hoping it would start the LXC and everything, if I do ls -l /mnt/jellyfin_share I can already see that it is not mounted. Hopefully you can help me.
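For reference, the behaviour that post was most likely describing (mount on first access, unmount after 30 idle seconds) comes from systemd's automount options in fstab; a sketch with placeholder share and credentials:
Code:
//NAS/share/ /mnt/jellyfin_share cifs _netdev,noauto,x-systemd.automount,x-systemd.idle-timeout=30,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=smb_username,pass=smb_password 0 0
After editing fstab, run systemctl daemon-reload so the automount unit gets regenerated.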
Hoping someone can give me some guidance. I have been able to map my share on the PVE host just fine and create the bind mount for my LXC container, and I can see the mounted share, all good. I can read all the files inside, no worries. However, it seems I cannot write to the folder despite mounting it using the rwx version of the echo command.
The share in question is hosted on TrueNAS. I have set the owner to root and applied 777 permissions just to test access and still can't get through. I even tried mapping to another NAS, a Synology, and I'm getting the same thing. So I'm wondering if it's more something to do with the host rather than the share itself. Not sure what to check next in the logical chain here.
Just to add, in case someone comes here with a similar problem:
My LXC container was running from a file-based disk and the mounts worked fine; however, once I moved the LXC container to a zvol on ZFS storage, for some reason the mounts stopped working.
Converting it back to a file-based disk fixed this, for some reason.
Since unprivileged LXCs are not allowed to mount CIFS shares, and privileged LXCs are considered unsafe (for a reason), I was scratching my head over how to still have my NAS shares available in my LXCs, e.g. for Jellyfin, Plex, ...
How does it work?
By default, CIFS shares are mounted as user root (uid=0) and group root (gid=0) on the PVE host, which makes them inaccessible to other users, groups, and LXCs.
This is because UIDs/GIDs on the PVE host and LXC guests both start at 0, but a UID/GID=0 in an unprivileged LXC is actually UID/GID=100000 on the PVE host. See the above Proxmox Wiki link for more information on this. @Jason Bayton's solution was to mount the share on the PVE host with the UID/GID of the LXC user that is going to access the share. While this works great for a single user, it would not work for different LXCs with different users having different UIDs and GIDs. I mean, it would work, but then you would have to create a separate mount entry for your CIFS share for each UID/GID.
My solution does this slightly differently and, I think, more effectively.
You simply mount the CIFS share with the UID that belongs to the unprivileged LXC root user, which by default is always uid=100000.
But instead of also mounting it with the GID of the LXC root user, you are going to create a group in your LXC called "lxc_shares" with gid=10000, which maps to gid=110000 on the PVE host: PVE host (UID=100000/GID=110000) <--> unprivileged LXC (UID=0/GID=10000)
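A quick way to see that mapping in action once everything below is set up (first command on the PVE host, second inside the LXC):
Code:
# on the PVE host: numeric owner/group of the mounted share
ls -ldn /mnt/lxc_shares/nas_rwx    # expect uid=100000 gid=110000
# inside the unprivileged LXC: the same directory through the bind mount
ls -ldn /mnt/nas                   # expect uid=0 (root) gid=10000 (lxc_shares)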
How to configure it
1. In the LXC (run commands as root user)
Create the group "lxc_shares" with GID=10000 in the LXC which will match the GID=110000 on the PVE host. groupadd -g 10000 lxc_shares
Add the user(s) that need access to the CIFS share to the group "lxc_shares".
e.g.: jellyfin, plex, ... (the username depends on the application) usermod -aG lxc_shares USERNAME
Shutdown the LXC.
2. On the PVE host (run commands as root user)
Create the mount point on the PVE host. mkdir -p /mnt/lxc_shares/nas_rwx
Add NAS CIFS share to /etc/fstab.
* _netdev: forces systemd to consider the mount unit a network mount.
* x-systemd.automount: automatically remounts the CIFS share in case the NAS went offline for some time.
* noatime: access timestamps are not updated when a file/folder is read.
* uid=100000,gid=110000: see the "How does it work?" section, paragraph two, for an explanation.
* dir_mode=0770,file_mode=0770: only that uid/gid will have rwx access to the share. (The PVE root user always has rwx to everything.)
!!! Adjust //NAS/nas/ in the middle of the command to match your CIFS hostname (or IP) //NAS/ and the share name /nas/. !!!
!!! Adjust user=smb_username,pass=smb_password at the end of the command. !!!
Code:
{ echo '' ; echo '# Mount CIFS share on demand with rwx permissions for use in LXCs (manually added)' ; echo '//NAS/nas/ /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,user=smb_username,pass=smb_password 0 0' ; } | tee -a /etc/fstab
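As a side note, if you would rather not keep the password in /etc/fstab, mount.cifs also accepts a credentials file; a sketch of that variant (same options otherwise):
Code:
# /root/.smbcredentials (chmod 600), containing:
#   username=smb_username
#   password=smb_password
//NAS/nas/ /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0770,file_mode=0770,credentials=/root/.smbcredentials 0 0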
Mount the share on the PVE host. mount /mnt/lxc_shares/nas_rwx
Add a bind mount of the share to the LXC config. !!! Adjust the LXC_ID at the end of the command. !!!
Code:
You can mount it in the LXC with read+write+execute (rwx) permissions.
{ echo 'mp0: /mnt/lxc_shares/nas_rwx/,mp=/mnt/nas' ; } | tee -a /etc/pve/lxc/LXC_ID.conf
You can also mount it in the LXC with read-only (ro) permissions.
{ echo 'mp0: /mnt/lxc_shares/nas_rwx/,mp=/mnt/nas,ro=1' ; } | tee -a /etc/pve/lxc/LXC_ID.conf
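Once the LXC is back up, a quick sanity check from inside it confirms the permissions took; a sketch assuming the rwx variant and a "jellyfin" service user:
Code:
# run a write test as the application user (here: jellyfin)
su -s /bin/bash -c 'touch /mnt/nas/.write_test && rm /mnt/nas/.write_test' jellyfin
# if this fails with "Permission denied", check the group membership:
id jellyfin    # should list 10000(lxc_shares)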
I have followed this to the letter, even wiping the LXC and re-creating and re-doing it by the letter, but I still get "permission denied" when trying to write to the NAS share from within the LXC.
On the PVE itself I can read and write to the share; however, in the LXC I can only read.
What might the issue be?
I assume it must be usermod -aG lxc_shares USERNAME (I used root as the username) causing the issue, as that seems to be the username for the LXC (it's sabnzbd, created using the tteck script).
Hi, I'm a little confused here. I ran the directory creation exactly as written and the folder doesn't get created. I've used ls -a /mnt to check whether it was created as a hidden folder, but nada. Am I supposed to structure it exactly like that, or can I rename the share so the command could be more like mkdir -p /mnt/lxc_shares/mynas, or something like that?
I also run my PVE host in a cluster, and I'm going to need the share to be accessible to all nodes. I think this should be simple enough to retrospectively declare via the web UI, or to apply the config to each node once the first one works?
I found it rough setting these up with read/write access. I ended up finding a script someone else wrote that makes it easy: https://gist.github.com/NorkzYT/14449b247dae9ac81ba4664564669299
Make sure the container is running when you run the script. Works great! Kudos to the author.
Too bad it's not part of the GUI. Lots of hoops for something I needed in order to move a lot of VMs to containers.