NFS share umount -f problem, device is busy

brucexx
Mar 19, 2015
I removed the share from the GUI and thought it was gone, but later tried to mount another share with the same name and a different IP address. The GUI accepted the new share, but it was not accessible (timing out). This is a two-node cluster.

Any ideas how to remove it? I already rebooted both nodes with no success. I can still see it mounted with the old share path and the old IP address.
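To double-check what is still attached, something like this should list the lingering NFS mount entries:
Code:
findmnt -t nfs,nfs4
# or
mount | grep nfs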
 
Yes, I know ;) I have suffered from this too. I don't know why, but none of the storages get unmounted when they are deactivated or deleted. I think this should be a feature request. But for now, simply unmount it:
Code:
fusermount -uz /path/to/mountpoint
For the shutdown scenario I've written a little systemd service, because if you have a UPS and the backup server shuts down before the PVE host does, the PVE host will hang forever on unmounting the NFS share during its shutdown process.
Code:
cat /etc/rc.local.shutdown

#!/bin/sh -e
echo "NFS Laufwerke werden ausgehängt"
fusermount -uz /mnt/pve/*
exit 0

chmod +x /etc/rc.local.shutdown

cat /etc/systemd/system/rc.local.shutdown.service

[Unit]
Description=/etc/rc.local.shutdown Compatibility
Before=shutdown.target

[Service]
ExecStart=/bin/true
ExecStop=/etc/rc.local.shutdown
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

systemctl enable rc.local.shutdown.service
 
Great share - thank you.

I was actually able to umount it. I used umount -f -vvv <mount dir> and it got unmounted; I'm not sure whether it was my persistence in forcing the umount or whether the -vvv really did something.
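For reference, the command was roughly this (the mount directory is whatever stale path shows up under /mnt/pve):
Code:
umount -f -vvv /mnt/pve/<old_share_name>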

Thank you
 
@fireon: Thanks for your systemd script, it's exactly what I was looking for, because I haven't found another way to fix this issue in Proxmox yet. I think a bug should be filed. In any case, an NFS share that can no longer be reached should never be able to prevent the server from rebooting. In my case it's virtualized storage, as well as the firewall, that can cause this.


Interestingly enough, even after that timeout and after sending the KILL signal to all processes, the share still prevented a reboot; unfortunately I forgot to save the log.

Maybe some proxmox dev can comment on this?
 
@Phlogi: This is how NFS works. The default is the hard option, which waits endlessly; it is not an issue in Proxmox. But yes, I think it would be a good idea to build in something that handles this for us. One way is to use such a systemd script; another way is to use the built-in NFS mount options. For example:
Code:
cat storage.cfg

nfs: backup
        server backupserver.local
        path /mnt/pve/backup
        export /sicherung/vmbackup
        content backup
        options rw,sec=sys,noatime,vers=3,soft,timeo=10,retrans=5,actimeo=10,retry=5
        maxfiles 4
But the best results I achieved with the systemd script. I have it on all our servers, and it works fine. I also tested the NFS options in storage.cfg. At first it looks like it is working, but there are problems: if the NFS server is busy for too long, because of high I/O or because you use a QNAP ;), then a backup can fail.

BTW: when you disable an NFS storage it should also be unmounted.
Ok... a little bit off-topic...
 
Thanks, I knew about the options but forgot to set the soft option. I will keep the script enabled and test it.
 
@fireon:
I have the exact same problem with an NFS share on my QNAP, which I use for VM and LXC backups. So I tried your systemd service script, but I must admit I'm new to Linux scripting, so help would be much appreciated :)
I created /etc/rc.local.shutdown, made it executable, created /etc/systemd/system/rc.local.shutdown.service, and enabled the service.

Is there something I'm missing or something else I have to do in order to get this script working? I can post the exact CLI lines if that would help.
 
@phoenixzerodown: No, that's exactly how it works! :)
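For anyone following along, the whole setup boils down to roughly this sequence of commands (the file contents are the ones posted earlier in the thread):
Code:
# create the shutdown script and make it executable
nano /etc/rc.local.shutdown
chmod +x /etc/rc.local.shutdown

# create the systemd unit and enable it
nano /etc/systemd/system/rc.local.shutdown.service
systemctl enable rc.local.shutdown.service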
 
Last week I did some benchmarks and a lot of tests with NFS/SMB and their options. The fact is that NFS is unusable with the soft and timeout options for the backup/storage use case. With timeouts it works for a single backup, but not in the real world with a lot of VMs and more cluster members; then you lose 50% of the backups run from the vzdump cron job. I have really done a lot of tests.

1. Using NFS: make sure your backup device runs perfectly; if it is always reachable, you will never have a problem. Don't use a cheap QNAP. Better to buy a little server, set up PVE on it, and use it as backup storage. That is the way we do it.

2. If it is very important that the PVE host never crashes (not even once a year) because of backups running against a crashed backup device, then the only way is to use CIFS/Samba. But be careful with viruses over the network, so it is very important to use a separate dedicated user and IP restrictions, and/or VLAN ACLs on layer 3.
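As a rough sketch only (server name, share, mount point, and user are placeholders, not taken from this thread), mounting the backup share over CIFS with a dedicated user could look like this; CIFS mounts default to soft-style behaviour, so an unreachable backup box is less likely to hang the host:
Code:
# example CIFS mount with a dedicated backup user (all names are placeholders)
mount -t cifs //backupserver.local/vmbackup /mnt/backup -o username=backupuser,vers=3.0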
 
@fireon
Ok, thanks. I will try it today after work and may post my results.
You are right, I wouldn't use a QNAP in a production environment either. This is just for my home lab/servers, so it is not that critical, and NFS should only be used for always-on storage anyway. The QNAP has a power on/off schedule and runs approx. 2 hours a day.
 
Shouldn't the command to use be umount -lf to force-unmount the NFS share? According to the man page, fusermount is for FUSE mount points.
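For the record, the lazy/force unmount variant being suggested here would be (the mount point is a placeholder):
Code:
umount -l -f /mnt/pve/<yourmount>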
 
Has anything changed in version 5.1? fireon's script doesn't work anymore, and a manual umount -lf /mnt/pve/* has no effect either, as I am still able to access the NFS shares. I recently stumbled over this because I attached a UPS to the server and configured NUT to shut the server down when the UPS reaches a low battery level, but it hangs when trying to detach NFS shares that are no longer online (either a VM on the host or remote NFS shares).
Does anyone have any ideas?
 
Thanks, fireon, for taking a look. I just want to add that it doesn't work either if I enter the full path via SSH, e.g. /mnt/pve/Templates. I can still browse all my ISOs and LXC templates on my NFS server. Is there something that locks the mount entries?
 
Tested on 3 of our servers; this is working fine.
If the share stays mounted, you must disable the pending storage entry first. This is what PVE does on shutdown.
Example (see the command-line sketch below):
  1. Disable the NFS mount on the storage tab
  2. fusermount -u -z /mnt/pve/yourmount
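In command form, the same two steps might look roughly like this (the storage ID is a placeholder for whatever name you used in the web GUI):
Code:
pvesm set <yourstorageid> --disable 1
fusermount -u -z /mnt/pve/<yourstorageid>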
 
Ok, this explains why it keeps the NFS share mounted. I will try this later after work. Is there a way to disable the NFS mount via the CLI? If so, I could build a temporary workaround into my UPS script: first disable the NFS mount, then unmount the NFS share, and finally shut down the server.
 
If you do a shutdown of the server, disabling the NFS share is not necessary; that is only for testing at normal server runtime. But if you need it:
Code:
pvesm set yournfsharename_inwebgui --disable 1
Have a look at your local server documentation; this is explained very well there.
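If you still want the UPS workaround described above, a minimal sketch of the shutdown sequence could look like this (the storage name is a placeholder; disabling first is what keeps PVE from holding on to the share):
Code:
#!/bin/sh
# disable the storage entry so PVE stops using it, lazily detach it, then power off
pvesm set <yournfsstorage> --disable 1
fusermount -u -z /mnt/pve/<yournfsstorage>
shutdown -h now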
 
