Using Turnkey Fileserver for storage share - major issues

skybolt_1 · New Member · Aug 2, 2024
I have a PVE Community server running 8.3.1 with a total of 24 TB across 6 drives configured in RAIDZ2, which I am presenting to a bunch of different VMs, Docker containers, and LXCs over SMB using TurnKey Fileserver. While I understand that SMB isn't really optimal for this use case, and that most people would probably want to present storage using NFS, I was hesitant to use a privileged container because of the security concerns associated with that. This might be a bit silly because this is a home hosting setup + work sandbox, but I do have a few services (Nextcloud, Jellyfin) exposed publicly behind HAProxy and want to follow best practices as a general rule.

I run TurnKey as an LXC and present a 20 TB mount point to it (that was a typo, I intended to use 10 TB), of which I have around 4 TB consumed. I have set up the SMB shares within TurnKey based on local user groups inside the LXC. Everything works... for a while. Periodically, whether after a few days or a week, I discover that my Shinobi DVR VM has suddenly started throwing endless CIFS: VFS: No writable handle in writepages rc=-9 errors, or that the account I use to back up Windows boxes no longer has write privileges. A reboot of the LXC fixes everything. I've made no changes to the Samba configuration, which is as follows (slightly truncated to reduce the number of shares; all settings are identical other than the groups):

Code:
[global]
    obey pam restrictions = yes
    server string = TurnKey FileServer
    debug level = 3
    min receivefile size = 16384
    os level = 20
    add user script = /usr/sbin/useradd -m '%u' -g users -G users
    recycle:exclude_dir = tmp quarantine
    delete group script = /usr/sbin/groupdel '%g'
    recycle:versions = yes
    socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536
    recycle:keeptree = yes
    add group script = /usr/sbin/groupadd '%g'
    workgroup = WORKGROUP
    dns proxy = no
    panic action = /usr/share/samba/panic-action %d
    admin users = root
    log file = /var/log/samba/samba.log
    max log size = 1000
    recycle:touch = yes
    guest account = nobody
    wins support = yes
    map to guest = bad user
    read raw = no
    pam password change = yes
    encrypt passwords = yes
    write raw = no
    delete user script = /usr/sbin/userdel -r '%u'
    security = user
    netbios name = FILESERVER
    vfs object = recycle
    passdb backend = tdbsam
    getwd cache = yes
    add user to group script = /usr/sbin/usermod -G '%g' '%u'
    passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
    
[home-videos]
    write list = @home-videos-users
    create mode = 644
    path = /rust/home-videos
    read list = jellyfin
    force create mode = 0644
    writeable = yes
    directory mode = 775
    force directory mode = 2775

[network-drive]
    write list = @network-drive-users
    directory mode = 775
    writeable = yes
    force directory mode = 2775
    force create mode = 0644
    create mode = 664
    path = /rust/network-drive

This has happened about 10-15 times so far, and it is preventing me from fully jumping off my legacy server, which runs ESXi + drive passthrough to TrueNAS for storage. That setup had its own issues, but I have to say I never had anything like this happen with shares in TrueNAS.

The TurnKey LXCs seem to be relatively popular here, so I would expect issues like this to be fairly rare, yet I seem to have gotten myself into a bad place that I'm not sure how to get out of. I'm sort of tempted to sidestep into a "plain" Debian LXC and configure Samba manually, but I'm hoping to avoid that headache.

Anyone encounter these sorts of issues before? Also, if you think that what I'm trying to do here is dumb and bad, I'm willing to hear those arguments too!
 
I would try a VM instead, maybe openmediavault or similar? You can do some easy admin just by installing Webmin on port 10000, or maybe go with SUSE + YaST for text-console convenience.
 
So I did start out using a VM, Debian with Samba deployed natively. But that required me to create a very large disk image instead of using a mount point, which appears to carry a potentially significant performance hit (?) according to things I've read here and on the Proxmox subreddit. That's one of the reasons I pulled back from that model and went with the LXC.
 
Yeah, that's basically back to my prior model w/ TrueNAS + passthrough... I suppose I could. I was hoping that someone here had similar experiences with the TurnKey image and could point me in the right direction, but maybe I just have a weird setup.
 
Did you ever get a solution? I can't change or add any folders/files in my file share from the TurnKey File Server. My situation: I have a Proxmox server. I'm ripping discs from my desktop to TKFS, then using those folders/files for Jellyfin. I'm having the worst luck trying to figure things out. I cannot add myself to the root group to access these folders. I tried 'chown -R user:group /path/folder', but it wouldn't allow the operation.
 
Ultimately, I ended up creating individual drives for things like Jellyfin and Shinobi and just saving the files to those directly. I'm sure that if I had spent more time I could have gotten things working with ZFS, but I decided that for what I was trying to do, keeping things on individual drive images was fine. Sorry!
 
Ultimately, I ended up creating individual drives for things like Jellyfin and Shinobi and just saving the files to those directly.
That will work, but then you lose the drive redundancy; one failure and your data is gone.

I have 2x TrueNAS running, but when I just wanted to pass some USB drives through for basic storage, I found that OMV in a VM was the most solid solution, after testing various LXC file servers and ZimaOS.
 
You misunderstand my statement; I created virtual disks associated with the VMs that sit on my RAIDZ2 array of 6 physical drives. These disks are backed up nightly to an offsite PBS backup server. I can sustain two physical drive failures with no data loss, and if that were to happen I could fall back to my offsite backup. Truly irreplaceable content like family photos and home movies has additional copies on my local hard disk, a local backup hard disk, and Amazon Glacier.
 
You misunderstand my statement
Ah, sorry, I completely misunderstood the thread. I have a similar setup for non-critical data storage (even USB drives); OMV is still the best option I found after testing multiple LXC file servers.
 
I have multiple Proxmox nodes (since version 7) running with TurnKey file servers and have never had a problem like yours.
But I don't create a storage volume for the LXC; I manually mount the folder.

For example, if your TurnKey container is LXC 100 (file-server),
open the config:

nano /etc/pve/lxc/100.conf

arch: amd64
cores: 4
features: mount=nfs;cifs,nesting=1
hostname: file-server
memory: 4096

add the lines for the mount points

mp0: /zfs/fileshare,mp=/mnt/zfs
mp1: /mnt/20tb_ext4_or_whatever,mp=/mnt/20tb
...

Cons of this method:
you can't back up your mount point with Proxmox Backup Server!
And the user IDs have to match the users in your other LXCs,
otherwise your Jellyfin can't access your home movies,
or Immich your pictures,
and so on.
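
A minimal sketch of how that can look (assuming the default unprivileged idmap and a container user with UID/GID 1000; adjust the IDs and paths to your setup):

Code:
# On the Proxmox host: add the bind mount via pct instead of editing the .conf by hand
pct set 100 -mp0 /zfs/fileshare,mp=/mnt/zfs

# In an unprivileged container the default idmap offset is 100000, so container
# UID/GID 1000 appears on the host as 101000. Owning the host-side data accordingly
# lets every container that bind-mounts it use the same in-container UID.
chown -R 101000:101000 /zfs/fileshare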
 
I think I have a similar configuration running for a long time without serious issues.
However, I am still on TKFS 16 (LXC no. 105). I am backing up the data with rsync to an external USB drive.
I am now experiencing long waiting times whenever macOS computers try to access a Samba share; even displaying the directory takes ages.
Do you have any experience with how to upgrade to the current TKFS without TKLBAM?
My idea is to create a new LXC for the new TKFS, then copy all settings (please advise which folders/files to copy). Finally I would mount the subvol-105 to the new LXC.
 
Just two weeks ago I turned my back on TurnKey File Server
and backed up all settings within Webmin.

I installed a new Debian LXC:
https://community-scripts.github.io/ProxmoxVE/scripts?id=debian

installed Webmin (run the script inside your LXC !!! not on your Proxmox node !!!):
https://community-scripts.github.io/ProxmoxVE/scripts?id=webmin

added my mount points:
nano /etc/pve/lxc/100.conf

imported all my settings
and restarted.

Now you can enter "update" in the Debian CLI
and get updates for Webmin from within Webmin itself.
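
If you do the "imported all my settings" step by hand rather than through Webmin, it boils down to something like this inside the new Debian LXC (the backup path is just a placeholder):

Code:
# Install Samba, restore the exported smb.conf, then validate and restart
apt update && apt install -y samba
cp /root/smb.conf.backup /etc/samba/smb.conf   # placeholder path for your exported config
testparm -s                                    # sanity-check the restored config
systemctl restart smbd nmbd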
 
Thanks for this. Unfortunately, in TKFS 16 Webmin does not provide any backup options, only TKLBAM. So I am afraid I have to back up all settings (Samba, NFS, rsync) manually, and I don't know where these settings are stored...
 
Samba
nano /etc/samba/smb.conf

NFS
nano /etc/exports

Also look at your crontab:
crontab -e

maybe there are scripts listed there that get called for your rsync
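
For example, the entries typically look like this (the path, subnet, and schedule below are only illustrative):

Code:
# /etc/exports -- one line per NFS export
/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)

# crontab -e -- a nightly rsync to the external USB drive
0 2 * * * rsync -a --delete /srv/share/ /mnt/usb-backup/share/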
 
@lxr : Thank you very much, that helped a lot.

I don't see the point in TurnKey Fileserver anymore, especially as TurnKey's recommendation is to install a new LXC for each upgrade. I tried a distro upgrade the regular Debian way (not recommended, but instructions are provided by the TurnKey team) and that failed completely.

So since I had to install a new LXC anyhow, your advice to go with Debian and have an upgradeable system was definitely the right call. After installing Webmin there is not much of a difference compared to TKFS. You could even go for an alternative frontend such as Cockpit.

A) Install a new LXC with Webmin as suggested in #13 (features: nfs, cifs, nesting). Install the Samba and NFS servers.
B) Edit /etc/samba/smb.conf (by the way, as a macOS user I included all the vfs_fruit adaptations; see the sketch below)
C) Install vfs_fruit
D) NFS: /etc/exports as in the old system
E) Copy crontab settings (for rsync)
F) In Webmin I had to add the users that accessed the shares before
G) In Proxmox I had to "move" the old LXC data disk to the new container: Proxmox GUI, old LXC, "Resources", data mount point, "Reassign Owner", and that attached the data volume to the new LXC

I hope I didn't miss anything relevant, but for now it seems to be ok.
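
For reference, the vfs_fruit additions mentioned in step B usually amount to something like this in the [global] section (a commonly used baseline rather than a definitive config; see the vfs_fruit manpage for details):

Code:
[global]
    # macOS interoperability via the fruit VFS module (needs streams_xattr)
    vfs objects = catia fruit streams_xattr
    fruit:metadata = stream
    fruit:model = MacSamba
    fruit:posix_rename = yes
    fruit:veto_appledouble = no
    fruit:nfs_aces = no
    fruit:delete_empty_adfiles = yes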
 
I used Cockpit for a long time, but they removed features, so I am back with Webmin.
 