[SOLVED] SMB/CIFS mount point stale file handle

Polytox
Mar 13, 2023
Hi all,

I have an SMB/CIFS mount point (MP) bound into Proxmox and used in my VMs/LXC containers.
In the past week the mount seems to have gone bad: it returns "stale file handle" errors when used in the LXC containers, and even on the Proxmox host itself when I try to cp a file, for example.

I unmounted it with umount --lazy, removed the directory from /mnt/pve/<share>, and set it up again. The error still exists. So I removed the mount point from all VMs and LXC containers, removed it from Proxmox again, and set it up once more (using the CLI pvesm add cifs ... and the web interface).

If I check with df, I see it successfully mounted on the path.

Any ideas will be appreciated.
Thanks.
 
Hi, what kind of CIFS storage is this?

You might have some luck with the noserverino option of mount.cifs [1] (also [2], unfortunately in German). Currently it's not possible to set the option for Proxmox-managed SMB/CIFS storages (though a patch is already on the mailing list [3]), so currently you'll need to add an /etc/fstab entry (add the noserverino option there) to mount the SMB/CIFS share e.g. to /mnt/cifs. Then you can add /mnt/cifs as a directory (dir) storage to Proxmox -- but then you should also set is_mountpoint 1 for that storage in /etc/pve/storage.cfg, to tell Proxmox this is an externally-managed mount point. If you need the SMB/CIFS share only inside a container, you can use a bind mount point [4] to make /mnt/cifs available to the container.

[1] https://linux.die.net/man/8/mount.cifs
[2] https://forum.proxmox.com/threads/b...vzdump-quemu-ordner.115271/page-2#post-500453
[3] https://lists.proxmox.com/pipermail/pve-devel/2023-March/055964.html
[4] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_bind_mount_points
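
To make this concrete, here is a minimal sketch of that workaround. The share //nas/share, the credentials file /etc/cifs-credentials, the storage ID cifs-dir and the content types are placeholders for illustration only:

Code:
# /etc/fstab -- mount the share outside of Proxmox, with noserverino added
//nas/share  /mnt/cifs  cifs  credentials=/etc/cifs-credentials,noserverino,_netdev  0  0

# /etc/pve/storage.cfg -- add /mnt/cifs as a dir storage, marked as an externally-managed mount point
dir: cifs-dir
        path /mnt/cifs
        content backup,iso,vztmpl
        is_mountpoint 1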

Edit: clarified SMB/CIFS ambiguity
 
Hi fweber,

Proxmox lists SMB/CIFS as a single option. I was actually talking about SMB3, so my mistake; I will edit the heading. Does the fact that we are talking about SMB change your hint about the fstab? It looks like quite an ugly workaround to me, but as long as it works for all LXC containers in one central place, that would be fine. I still don't understand why it went bad after months of running, though.

Using bind mounts is what I did. I saw the error on one of my LXC containers, and after a while I figured out that the mount was already broken on the Proxmox host itself.

mount | grep RAID
already shows noserverino when the share is mounted from the interface, so I guess [1] and [2] did not work out.

/mnt/pve/RAID type cifs (rw,relatime,vers=3.1.1,cache=strict,username=MYUSER,uid=0,noforceuid,gid=0,noforcegid,addr=NAS.IP,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,bsize=1048576,echo_interval=60,actimeo=1)
 
No problem, I don't think the SMB/CIFS distinction matters for this problem. However, it might be interesting to know the SMB server software, e.g. is it a NAS, or some Windows server?

The problem discussed in the linked thread was that the server started sending bogus inode values. Since you mentioned "stale file" errors, you might be experiencing a similar problem. The noserverino option tells the client to ignore the server-provided inode values, which might help.

/mnt/pve/RAID type cifs (rw,relatime,vers=3.1.1,cache=strict,username=MYUSER,uid=0,noforceuid,gid=0,noforcegid,addr=NAS.IP,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,bsize=1048576,echo_interval=60,actimeo=1)
Is this the mounted SMB share? If yes, note that it says serverino (the default) instead of noserverino (the option you could try to set).
 
Yes, it's the mounted SMB share. The share is coming from a FRITZ!Box.

Looks like I misread it and saw what I was expecting in the mount output... I will mount the CIFS share via fstab as mentioned and try to add it as a dir storage like you recommended.
 
For future readers: since Proxmox VE 8, it's possible to specify the mount options for a cifs type storage with pvesm set <storage ID> --options <mount options>.
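
For example, for the storage from this thread (assuming its storage ID is RAID), that would be:

Bash:
pvesm set RAID --options noserverino

The share may need to be unmounted and remounted (e.g. by disabling and re-enabling the storage) before the new options take effect.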
 
For future readers: since Proxmox VE 8, it's possible to specify the mount options for a cifs type storage with pvesm set <storage ID> --options <mount options>.

I have a TrueNAS VM under Proxmox with the SATA controller passed through to TrueNAS. Proxmox boots off an NVMe drive, and TrueNAS then provides a ZFS pool. This is in turn shared via CIFS/SMB, and the idea is to mount it as necessary on VMs/CTs. Due to the well-known problems with shares in CTs, for now I mount the shares on the host and bind-mount them into the CT.

It seems to me there should be a better way of doing this, possibly using pvesm, but I can't wrap my head around how. Could you perhaps throw in a couple of examples?
 
It seems to me there should be a better way of doing this, possibly using pvesm, but I can't wrap my head around how. Could you perhaps throw in a couple of examples?
pvesm is for the storage configuration on the Proxmox VE host. It can't help you with mounting shares inside guests. If you really need it, bind mounts are the recommended way. And in VMs, you mount it like you usually do for the OS in question.
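
As a minimal sketch of such a bind mount (container ID 100, the host path /mnt/cifs and the target path /mnt/share are placeholders):

Bash:
# make the host directory /mnt/cifs available inside container 100 at /mnt/share
pct set 100 -mp0 /mnt/cifs,mp=/mnt/share

This corresponds to the bind mount points described in the admin guide linked earlier [4].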

Do you really need the TrueNAS VM? Why not have the Proxmox host provide the export itself?

There is an upcoming feature for virtiofs for VMs, which is more similar to bind mounts: https://lists.proxmox.com/pipermail/pve-devel/2023-November/059865.html

What is the use case for needing the same shared mount in all guests? Most people using virtualization want to isolate things ;)
 
Hi Fiona,

All good questions, but they're all interlinked, let me try to explain.

This setup is for my homelab. I use it for a couple of standard home-labbing things, such as:
  • SMB file share for the PCs in the household
  • Docker host running a number of apps: Emby, *arr, Transmission, Pihole, Vaultwarden, Linkwarden, Joplin server, Borg backup server, and more.
The containers use different shares from the SMB file share, but all run on the same CT. I know CTs are not "recommended" for separation, but separation is not my primary concern here.

I don't actually need a lot of virtual machines for this. Previously I was running all of this off an old laptop with a couple of large USB-connected drives. The exercise was to convert this to a setup with RAID and perhaps get a nice GUI. For this I thought I'd use something like Unraid or TrueNAS Scale. I tried Unraid, but it really wasn't for me, so I tried TrueNAS. This mostly covered my requirements, but TrueNAS Scale has the quirk that it requires full access to the entire boot drive. My boot drive is a 2 TB NVMe, so throwing 99.9% of the space away seemed pointless, which is where Proxmox comes in. With Proxmox I create a VM with TrueNAS Scale, pass the spinning rust through to it, and let it handle the ZFS RAID array and the SMB shares. I have a CT with my Docker containers, with shares mounted from the ZFS array. Eventually I would work towards running all my Docker containers under TrueNAS control, but I will make that transition down the road, as there are a lot of small, fiddly changes necessary.

Here we get to the problem: the shares can't be mounted directly in a CT, so I mount them back on the Proxmox server and then bind-mount them into the CT. Yes, I know it's kludgy, and I guess that's why I'm trying to find a better solution.

If anything, I could install TrueNAS directly on the iron. I have an old 500 GB drive lying around that I could use as a boot drive for TrueNAS; it just seems such a waste to have a drive spinning only for that. In the old days, with TrueNAS Core (and Unraid), you could boot off USB, but this isn't supported by TrueNAS Scale.

I realize this all has nothing to do with Proxmox, but you asked about my use case. :) So I guess the question here is: what is the best way to mount the shares from TrueNAS into the CT? Of course, if you or anyone else have good ideas on how to set this up in a better way, I'm ready to learn!

PS. Virtiofs sounds interesting, but it's not clear to me whether it exports the raw ZFS dataset, or whether the client needs to create a new local filesystem layer on top, as with virtual machines now.
 
Here we get to the problem: the shares can't be mounted directly in a CT, so I mount them back on the Proxmox server and then bind-mount them into the CT. Yes, I know it's kludgy, and I guess that's why I'm trying to find a better solution.
I'm not sure there is a better solution for making the shares available. AFAIK, either you need a privileged container with the SMB/CIFS feature enabled, or do it via bind mounts.
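
As a sketch of the first option (container ID 100 is a placeholder, and the container must be privileged):

Bash:
# allow kernel CIFS mounts inside the (privileged) container
pct set 100 --features mount=cifs

After that, the share can be mounted inside the container itself, e.g. via mount.cifs or an fstab entry there.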
 
For future readers: since Proxmox VE 8, it's possible to specify the mount options for a cifs type storage with pvesm set <storage ID> --options <mount options>.
Great service-post, thank you!

I noticed: if I add this, I can see the new line 'options noserverino' in /etc/pve/storage.cfg - looking good.
When I disable, unmount, and re-enable the SMB storage and then use:
Bash:
cat /proc/mounts | grep '//'
(...,serverino,...) is now gone from the options, but 'noserverino' didn't show up instead. Is it set and active, even without showing up in the 'mount' command's output? :oops: ... I'm asking because 'man mount.cifs' says 'serverino' is the default behavior when no option is set. (?)
 
Hi,
Is it set and active, even without showing up in the 'mount' command's output? ... I'm asking because 'man mount.cifs' says 'serverino' is the default behavior when no option is set.
Well, good question. From a quick search, cifs_show_options will only display the positive serverino option if it's set: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/smb/client/cifsfs.c#n634

So when using the default, serverino will show up. If it's disabled, nothing will show up.
 
So when using the default, serverino will show up. If it's disabled, nothing will show up.
Thank you for clearing that up for me in such detail. I learned something in general from it. Fingers crossed, I'll be able to answer such questions myself in the not too distant future. :) Very helpful - again.
 
With Proxmox I create a VM with TrueNAS Scale, pass the spinning rust through to it, and let it handle the ZFS RAID array and the SMB shares. [...] Here we get to the problem: the shares can't be mounted directly in a CT, so I mount them back on the Proxmox server and then bind-mount them into the CT.

I have a very similar setup: Proxmox --> VM with the disk controller passed through; the VM hosts an NFS server --> the Proxmox NFS client mounts the shares --> the shares are bind-mounted into containers.

My issue is that if the NFS server VM needs to reboot, everything breaks. Even if the Proxmox host still recognizes the share after the reboot (no stale file handle), the containers lose access.

The only solution I have now is to either reboot PVE, or remount the NFS share in PVE (as needed) and then reboot all of the containers with bind mounts.
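
As a sketch of that manual recovery (the storage ID nfs-share and container ID 101 are placeholders):

Bash:
# lazily force-unmount the stale NFS mount on the host; Proxmox will typically remount an enabled storage on its own
umount -f -l /mnt/pve/nfs-share
# then restart each container that bind-mounts from it
pct reboot 101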

Surely there is a better way.
 
Hi,
Surely there is a better way.
I'm not sure. At least not with your current design. Why not set up the NFS export on the host directly?
 
Because I want my file server to be separated from my VM/CT host, which seems to be the recommended way of doing things.
 
Because I want my file server to be separated from my VM/CT host, which seems to be the recommended way of doing things.
Okay, yes. Better isolation is a good argument. But I'm not aware of any mechanism to avoid the unmount+remount of NFS when the server needs to be restarted.
 
