Default UID and GID when created via SMB

promoxer

Code:
# /etc/samba/smb.conf
[workspace]
comment = Workspace
path = /vpool/workspace/
browsable = yes
# masks cap new files at rw-rw-rw- and new dirs at rwxrwxr-x
create mask = 0666
directory mask = 0775
writable = yes
# allow unauthenticated guest access
guest ok = yes
# created files/folders will be owned by root
force user = root

1. Above is my current SMB config; some files are shared with my Windows VM via SMB
2. When I add files from Windows, they end up owned by PVE's root
3. How do I make these folders and files owned by uid 101234 / gid 101234? The filesystem is ZFS
4. If I change the config to `force user = 101234` and `force group = 101234`, Windows can no longer access the shared folder (see the sketch below)
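(A note on point 4: per the smb.conf man page, `force user` and `force group` take a UNIX user/group name, not a raw numeric ID, so `force user = 101234` can only work if an account literally named "101234" exists. A minimal sketch of one possible fix - untested, with `wsuser`/`wsgroup` as placeholder names:)

Code:
# create a host group/user with the desired numeric IDs
groupadd --gid 101234 wsgroup
useradd --uid 101234 --gid 101234 --no-create-home --shell /usr/sbin/nologin wsuser
chown -R 101234:101234 /vpool/workspace

# /etc/samba/smb.conf - reference them by name, not by number
# (rest of the [workspace] share stays as above)
force user = wsuser
force group = wsgroup

Then `systemctl restart smbd` to pick up the change.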
 
Are you sure you want SMB on the PVE Host?

Running SMB directly on a PVE Host generally isn't recommended: it's usually easier to use a purpose-built turnkey appliance in a VM, and, more importantly, PVE Hosts just aren't intended to be customized much beyond what the GUI offers - so much so that there's not even a backup process for them. The assumption is that your VMs contain your configured "solutions" and the PVE Host is just config for disks and RAM and such to support those.

In general, it's recommended to run an SMB server in a VM on the same network as the VMs you want to share with - and even the Host (or with multiple virtual NICs if you want to share to different networks - host, guest, lan, iot, etc).

As such this is probably more of a question to pose to a forum for Debian (Proxmox 8 is Debian 12 Bookworm), or an SMB "appliance" (TrueNAS / FreeNAS / Unraid).

What's the use-case or goal?

I know it might be annoying for me to NOT give a direct answer to your question and suggest something different that you didn't ask.

However, I also don't actually have enough information about how you installed SMB or what you're trying to accomplish to give advice yet - even if you were doing this on a Debian VM. So...

What's the use-case or high-level goal you're trying to achieve? And how is that related to changing the IDs? Do you know anything about the SMB systemd unit file config?

Personal Note

If you do look into TrueNAS or FreeNAS, you'd install that with UFS (since you're already using ZFS on host), or with ZFS on drive passthrough (which is how I was taught to do it).

I've struggled with what I thought should be pretty simple SMB configs in the past, and personally, the overhead of TrueNAS is worth it to me - especially since there's so much documentation and community support. I still have to re-run ACLs for reasons that I don't yet understand (but not frequently enough to fix the root cause - probably something with incorrect inherited permissions), but once set up, it works.
 
1. SMB is installed on PVE using `apt` because it is closest to the hardware; I want to reduce performance penalties
2. + Access everywhere, it does not make sense to mount a PVE VM's folder in PVE
3. What do you need from the systemd unit file? The service is definitely running
4. 101234 so that my PVE LXCs can also read/write to them (see the mapping sketch below)
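(For context: Proxmox's default unprivileged LXC mapping shifts container IDs by 100000, so host uid/gid 101234 corresponds to 1234 inside the container. Expressed in lxc.idmap form - this is the implicit default for a stock unprivileged container, not something you normally write out:)

Code:
# default unprivileged mapping: container 0-65535 -> host 100000-165535
# so container uid/gid 1234 shows up on the host (and on the ZFS dataset) as 101234
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536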
 
1. LXC is really good for that. Regardless, understood.
2. I don't quite understand.

What do you mean by "+ Access everywhere"? This might sound stupid, but I'm not sure if "+ Access" is an idiom I don't understand or if you meant the same as "and access everywhere". Maybe it doesn't matter.

As for the second part, I don't think it's uncommon to share from a guest to the host. I do that for NFS. I've heard of people running PBS virtualized in production (that's probably more niche). I think SMB shares are fairly common.

I think that's mostly for ease of backing up and restoring config, or for HA within a cluster using ZFS replication.

3. This is a can of worms...

I believe that the default for the SMB package is to run as a specific SMB user, not root. Even when it runs as root, however, systemd can act as a containerization system of sorts - similar to LXC, but with many more permissions enabled by default. The `root` you get under systemd is the same UID 0 as the Host - not mapped like LXC - but it can be jailed much like LXC, depending on the unit's sandboxing directives.

You could make either one behave like the other with enough changes to the config file - except that systemd doesn't run processes on their own block device... though I wouldn't be surprised at all if it could.

4. I may not be familiar enough with low-level LXC configuration, or I'm getting my wires crossed. Is that a custom UID mapping you chose for the SMB user to map to the LXC root, or something similar to that? Or is that already working, so it's a non-issue?

With the information I have right now, my guess is that Windows can't access the folder because systemd's permission system is keeping the smbd process away from it. I could be off on that.
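If you want to test that guess, something like this should show whether the unit is actually sandboxed (just a sketch - I'm assuming Debian's stock `smbd.service`, and the drop-in only helps if a sandboxing directive is actually set):

Code:
# inspect what sandboxing the unit applies, if any
systemd-analyze security smbd.service
systemctl show smbd -p User -p ProtectSystem -p ProtectHome -p ReadWritePaths

# if a directive is blocking the share path, a drop-in can whitelist it:
systemctl edit smbd
#   [Service]
#   ReadWritePaths=/vpool/workspace
systemctl restart smbd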
 
2. Access everywhere means from PVE, Windows/Linux VMs, LXC, docker in LXC, Android, and macOS devices
3. I don't really want to change the systemd file unless it is the only way; it makes documentation and backups (yes, I do have an automated backup system) very difficult. smbd is running as root, checked with `ps uaxf`
4. 101234 is a docker UID and GID
 
2. I honestly believe you will have a simpler and better experience running out of a VM and granting access to those other places. I do not believe that performance would be an issue for you. I obviously don't know your use case or performance requirements, but... we're talking 1-3 percentage points.

3. I agree. When I need to go off the beaten path, I use Alpine / OpenRC.

4. Proxmox is an LXC frontend. Docker started out as an LXC frontend too (it runs on its own runtime these days). It's definitely more typical to use the LXC management that's built into Proxmox rather than Docker. You can use Docker to build a tarball and then run it with Proxmox (just put it in /var/lib/vz/template/cache), as sketched below. But the typical way to run Docker on Proxmox would be the same way Docker runs itself on Mac and Windows: in a VM, where it has control of the kernel and all that.
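Roughly like this, for example (image name, VMID, and storage are placeholders, and a plain `docker export` tarball needs a working init inside it to actually boot as an LXC):

Code:
# build/export a rootfs with Docker...
docker create --name export-src debian:12
docker export export-src | gzip > /var/lib/vz/template/cache/debian12-custom.tar.gz

# ...then run it under Proxmox's own LXC tooling
pct create 200 local:vztmpl/debian12-custom.tar.gz --hostname custom --rootfs local-lvm:8 --unprivileged 1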

I'm a deep learner. I've been using Linux for 20 years. I've been using Proxmox for the last few years, and I've gone through the Proxmox Advanced training. I can say you're doing some creative and interesting stuff.

Now, there could be a very simple fix that someone familiar with SMB on Debian would know; however, in my estimation you're breaking the guardrails of how Proxmox is designed and what its use cases are.

Proxmox is, in many ways, analogous to Docker. Imagine talking to some Docker folks and saying "Hey, I'm on macOS, how do I install SMB on Docker's host Linux VM so that I can give access to my Mac and all of the containers?" They might say something similar, right? The host Linux system is for running the containers; it's not intended to have SMB installed on it - you run a container that runs SMB, share that with the other containers, and allow macOS access to it.

Linux is Linux. Proxmox can do everything Linux can. Docker can do everything Linux can (some nuance there, but you get the point). What you want to do can be done.

I certainly don't begrudge you for trying to figure out novel and interesting ways to do what you're doing - we need that creative spirit, or society doesn't progress... but... well, it'll be interesting to see if someone here is able to help you with that and if you figure it out in a Debian forum, I'd love to know the solution. It's probably something simple, just outside of my wheelhouse.
 