Quote: "I am making some progress on this: able to get everything working, except for Plex being able to see the contents of the directory. It's weird, as I can go to the console of Plex, see the mount, and see the files, but Plex says the directory is empty. Any ideas?"

It seems that I have a similar issue with Jellyfin. I've added the user "jellyfin" to the lxc_shares group (10000), the web page sees the mounted folder, and as root in the console I have full read/write access, but the library scan doesn't work.
I haven't. Pretty much given up. I found it a lot easier to spin up a VM, mount the shares there, and then everything just works. I'll try again when I have some time off over the holidays. It really shouldn't be this difficult.
Did you find a solution? Maybe I'll also add the LXC root user to the shared group, but at that point I guess it's almost the same as running a privileged container...
I've got the same issue as many others here. I can see the files/folders from the Plex LXC as root (via shell), but Plex cannot (via the GUI). My SMB LXC was created with Cockpit using this guide; that LXC and GUI setup is nice to work with.
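A likely culprit when root sees the files but the Plex scanner doesn't is the unprivileged-container UID/GID shift. A minimal sketch of the arithmetic, assuming the default map where container IDs 0-65535 are shifted up by 100000 on the host:

```shell
# Default idmap for unprivileged LXCs: container ID N -> host ID N + 100000.
# So the lxc_shares group (GID 10000 inside the container) must own the files
# as GID 110000 on the host - which is why the guide's fstab line uses gid=110000.
CT_GID=10000
OFFSET=100000
echo "host GID for lxc_shares: $((CT_GID + OFFSET))"
```

If the host-side gid doesn't line up with that mapping, the files appear as nobody:nogroup inside the container, and the scanner (which runs as the plex service user, not root) gets nothing.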
root@homelab-n01:~# mount /mnt/lxc_shares/FS01
mount error(111): could not connect to 192.168.1.165
Unable to find suitable address.
root@homelab-n01:~# smbclient //192.168.1.165/FS01
Password for [WORKGROUP\root]:
do_connect: Connection to 192.168.1.165 failed (Error NT_STATUS_CONNECTION_REFUSED)
root@homelab-n01:~#
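For what it's worth, error 111 in that mount output is ECONNREFUSED: the TCP connection to port 445 was actively rejected, which points at the Samba service on 192.168.1.165 not running (or being firewalled) rather than at credentials or share permissions. The errno meaning can be confirmed locally:

```shell
# errno 111 on Linux is ECONNREFUSED ("Connection refused") - the server end
# rejected the TCP connection, i.e. nothing was listening on the SMB port.
python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'
# Environment-specific checks worth running next (not executed here):
#   nc -zv 192.168.1.165 445    # is port 445 reachable from the PVE host?
#   systemctl status smbd       # on the NAS / share LXC: is Samba running?
```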
Quote: "Instead of creating the mount point on each PVE host under (2.), use the datacenter: under Datacenter -> Storage, do Add -> SMB/CIFS. You'll find the mounts at /mnt/pve/[ID you have chosen in the dialog] on each Proxmox node. By adding ",shared=1" after the bind mount in LXC_ID.conf, you can migrate the container."

Any more details on this approach, please?
Well: Step 2 (see the beginning of this thread) is replaced as follows:
Sounds MUCH easier.
Made an account to correct this error, in case anyone else stumbles across this thread from a search engine. Adding the share under the datacenter mounts it as SMB/CIFS storage for specific storage content types only, such as ISOs, backups, etc. It doesn't accomplish what we're all trying to do in this thread, which is to access CIFS shares in unprivileged containers.
View attachment 81937
is replaced by adding the share under Datacenter -> "Storage" with an ID of your choosing:
View attachment 81939
and
View attachment 81938
The indicated mountpoint is also available when migrating, after you manually edit /etc/pve/lxc/[ID of container].conf in the shell belonging to the node where the container resides and add ",shared=1":
View attachment 81941
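To make that manual edit concrete, here is a sketch of the resulting conf line (container ID 101, storage ID FS01, and the mount target /mnt/nas are assumptions, not from this thread; adjust to your setup):

```
# /etc/pve/lxc/101.conf
# Bind-mount the datacenter CIFS storage into the container; shared=1 tells
# Proxmox the source is available on every node, so migration is allowed.
mp0: /mnt/pve/FS01,mp=/mnt/nas,shared=1
```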
Quote: "I ran into this same issue. It's a permissions issue, and I fixed it by changing dir_mode and file_mode:

Code:
{ echo '' ; echo '# Mount CIFS share on demand with rwx permissions for use in LXCs (manually added)' ; echo '//NAS/nas/ /mnt/lxc_shares/nas_rwx cifs _netdev,x-systemd.automount,noatime,uid=100000,gid=110000,dir_mode=0777,file_mode=0774,user=smb_username,pass=smb_password 0 0' ; } | tee -a /etc/fstab

Note that I'm not saying the way I fixed it is the best or most secure way, just that Plex can see the subfolders now."

Working solution if your Plex won't see the files! Thanks!
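If you want to see why the dir_mode/file_mode change matters without touching the NAS, here's a throwaway demonstration of the permission bits (pure local shell, no CIFS involved):

```shell
# dir_mode/file_mode set the permission bits CIFS presents for dirs and files.
# 0770 = rwx for owner and group only; 0777 adds rwx for "other" - which is
# what lets a service account outside the group traverse the directories.
d=$(mktemp -d)
chmod 0770 "$d"; stat -c '%a' "$d"   # prints 770
chmod 0777 "$d"; stat -c '%a' "$d"   # prints 777
rmdir "$d"
```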
It is indeed true that there will be an empty directory (snippets, in my case) in the filesystem on the NAS. Also, a filesystem shared that way is in principle mountable in all containers running in the datacenter. For me these disadvantages are not blocking, as I privately manage my containers, and the advantage is that my NAS is now very easily accessible: just one mount from the cluster CIFS directory of choice into a local directory (step 4), plus adding ",shared=1" as described in /etc/pve/lxc/[ID].conf on the node on which the container resides.
In case it helps anyone else, here is how I fixed the nobody:nogroup permissions on my Debian-based Proxmox host. My /etc/fstab entries are below (use the ones from the OP if you are mounting CIFS; mine are local NTFS and exFAT drives):

1. ls -lah /mnt (mnt being the parent directory of my mount location) showed the mounts owned by nobody:nogroup.
2. sudo usermod -a -G lxc_shares usernamehere, then groups usernamehere to see if your user has been added to the group.
3. umount /mount/location/here for each drive. (umount -l /mount/second/drive ended up deferring the unmounting, so maybe avoid the -l flag.)
4. Edit the /etc/fstab file directly via a separate terminal rather than the Proxmox web UI: nano /etc/fstab, then add entries like:

UUID=XXXXXXXXX /mount/location/here ntfs-3g rw,uid=100000,gid=110000,dmask=0000,fmask=0000,noatime,nofail 0 0
UUID=XXXX-XXXX /mount/second/drive exfat rw,exec,uid=100000,gid=110000,dmask=0000,fmask=0000,noatime,nofail 0 0

Note the ntfs-3g type: that took me hours to figure out was needed for my Debian host setup, so try to install it from your package manager first; in my case it was apt install -y ntfs-3g.
5. After saving /etc/fstab, update your system with systemctl daemon-reload, then mount -a to apply all /etc/fstab entries.
6. Check ls -lah /mount/ again (on the parent directory, in case you mounted multiple drives); the mounts should now show root:lxc_shares instead of nobody:nogroup.

Two further notes: mount by UUID= (or PARTUUID=) to ensure the entries persist, since mounting /dev/sdxX can fail sometimes if there's ever a disconnect/reconnect. And this is the /etc/fstab entry that took me hours to put together, so I didn't update dmask and fmask to the dir_mode and file_mode key=value format, respectively.
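One small habit that helps before running mount -a: sanity-check that a new line really has the six fstab fields (device, mountpoint, type, options, dump, pass). The entry below is the exFAT example from above:

```shell
# An fstab entry is six whitespace-separated fields; a stray space inside the
# options string is an easy way to break mount -a.
line='UUID=XXXX-XXXX /mount/second/drive exfat rw,exec,uid=100000,gid=110000,dmask=0000,fmask=0000,noatime,nofail 0 0'
echo "$line" | awk '{print NF " fields, type=" $3}'   # prints: 6 fields, type=exfat
```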
If anyone else is having the nobody:nogroup perms issue: I set the PUID and PGID of the docker images both to 0 (which is the output of `id root` on the unprivileged docker host). I doubt this is best practice; better would be to add another user to the host with fewer permissions (also added to the lxc_shares group) and use that user's PUID/PGID.
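The 0/0 comes straight from root's IDs, which are 0 on any Linux system; the less-privileged variant suggested above would look something like this (the user name media and UID 1000 are made-up examples):

```shell
# root is always UID 0 / GID 0 - hence PUID=0/PGID=0 maps container processes
# to root on the docker host. It works, but it is overly broad.
id -u root   # prints 0
# A dedicated account instead (sketch, adjust names/IDs to your setup):
#   useradd -u 1000 media
#   usermod -a -G lxc_shares media
#   id -u media; id -g media   # use these as PUID/PGID in the container env
```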
Quote: "Are there disadvantages to this technique that I am not aware of?"

Personally, I'm running into permissions issues with this method. I added my user to lxc_shares/10000, but the mounts are coming through as nobody:nogroup (65534:65534) instead, meaning I only have r-x permissions. I imagine adding the mount to fstab instead of using the GUI would give me more control over the perms. Any thoughts on this?