Truly unmount stale CIFS or NFS filesystems

migs

I have encountered this problem several times with several remote filesystems, and I really need help.
I had to change the password on a CIFS server which I only use for backups. The problem is that the filesystem went away, and now the logs and dmesg are getting flooded with:

[133862.953522] CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE
[133862.953530] CIFS: VFS: \\10.10.10.10 Send error in SessSetup = -13
[133863.602243] CIFS: VFS: No writable handle in writepages rc=-9
[133863.603042] CIFS: VFS: No writable handle in writepages rc=-9

Even though I re-created the mount point, I am still getting flooded.

How do I fix this problem without a full server restart?
I have encountered these issues with other network filesystems too, like NFS and especially S3 FUSE mounts...
When something is not right, I am forced to restart the whole server, which is really awful.

I did try to umount and forcefully umount the CIFS storage, and nothing seems to help get rid of the problem unless I restart the machine...
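Concretely, what I tried looked roughly like this (a sketch; /mnt/cifs-backup stands in for my real mount point):

umount /mnt/cifs-backup        # plain unmount: hangs or fails while the share is stale
umount -f /mnt/cifs-backup     # force: abort outstanding requests to the dead server

# check whether the kernel still lists the mount afterwards:
findmnt -t cifs
grep cifs /proc/mounts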
 
https://stackoverflow.com/questions/74626/how-do-you-force-a-cifs-connection-to-unmount

I ran into the same issue a while ago; 'umount -l' (that's a lowercase L) should do the trick.
That's what caused the biggest issue.
I did the lazy umount, but the flood didn't stop, and I had no way to get the mount point back again, even though I recreated the mount point.
I also tried to kill the processes, but it was no use...
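The kill attempts were along these lines (again a sketch, with a placeholder mount point):

fuser -vm /mnt/cifs-backup     # list the processes using the mount
fuser -km /mnt/cifs-backup     # send SIGKILL to all of them
lsof +f -- /mnt/cifs-backup    # same information with full paths

As far as I understand, processes stuck in uninterruptible sleep (D state) waiting on the dead server ignore even SIGKILL, which would explain why killing them had no effect.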

Having the Proxmox server go unstable, with dmesg getting flooded because a network storage drive went away, is no fun...
As I mentioned, this has happened to me on different occasions while setting up storage networks,
over something trivial like changing a password, a temporary disconnect, or just initial testing while setting up new storage...
I am starting to come to the conclusion that I should not touch anything network-storage related on the Proxmox server...
 
I think this is why most people recommend mounting the shared storage in a container and sharing it out from there; that way a hung share doesn't impact you at the host level.

But yeah, if it works, don't mess with it.
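For the record, the container approach would look roughly like this (a sketch; VMID 101, the share name, and the credentials file are placeholders, and the mount= feature has security caveats, see the Proxmox docs):

pct set 101 -features mount=cifs    # allow the container to mount CIFS itself
pct exec 101 -- mount -t cifs //10.10.10.10/backup /mnt/backup -o credentials=/root/.cifs-creds
# (needs cifs-utils installed inside the container)

If the share then goes stale, it is the container that hangs, and you can restart just that container instead of the whole host.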
 
I had stale mounts earlier today while tinkering with Ceph. Doing a rolling reboot of all cluster nodes got things OK again. I know you said you didn't want to restart, but sometimes that's just the cleanest way to handle it.
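If anyone wants to do the same, the rolling part is roughly this per node (a sketch; VMID 100 and target node pve2 are placeholders, and it assumes shared storage so guests can live-migrate):

qm migrate 100 pve2 --online    # move each running VM off the node first
reboot                          # then reboot the drained node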
 
