Remove all traces of old NFS connections

ferret

Hi,

In the syslog I am seeing continuous entries, on the main node of the cluster only, attempting to connect to NFS servers that no longer exist; I removed these NFS shares months ago. Below is a sample of the syslog.

Can anyone advise where to look to remove all old traces of the NFS client connections to these NFS servers?

Any assistance would be very much appreciated.

Cheers

Mar 28 00:01:30 pve1 kernel: nfs: server 172.16.20.245 not responding, timed out
Mar 28 00:01:32 pve1 kernel: nfs: server 172.16.10.25 not responding, timed out
Mar 28 00:01:36 pve1 kernel: nfs: server 172.16.20.240 not responding, timed out
Mar 28 00:01:42 pve1 kernel: nfs: server 172.16.10.25 not responding, timed out
Mar 28 00:01:48 pve1 kernel: nfs: server 172.16.20.240 not responding, timed out
Mar 28 00:01:53 pve1 kernel: nfs: server 172.16.10.25 not responding, timed out
Mar 28 00:01:53 pve1 kernel: nfs: server 172.16.20.240 not responding, timed out
Mar 28 00:01:55 pve1 kernel: nfs: server 172.16.20.245 not responding, timed out
Mar 28 00:01:58 pve1 kernel: nfs: server 172.16.20.245 not responding, timed out
Mar 28 00:01:59 pve1 kernel: nfs: server 172.16.10.25 not responding, timed out
 
Hi,

Have you checked your storage, or the /etc/pve/storage.cfg file, to see whether the NFS storage still exists?
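
For example, something along these lines should show whether any NFS storage definitions are left over (assuming a standard Proxmox VE setup):

[CODE]
# Show the status of all configured storages
pvesm status

# Show the full storage configuration; any leftover NFS storage
# would appear as an "nfs:" section
cat /etc/pve/storage.cfg
[/CODE]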
 
Hi Moayad,

Yes, please see the output below:

root@pve1:~# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content vztmpl,backup,iso
shared 0

lvmthin: local-lvm
thinpool data
vgname pve
content images

lvm: san1
vgname san-lvm
content rootdir,images
shared 1

pbs: sr-pbs
datastore store1
server 172.16.20.150
content backup
fingerprint ee:5e:7e:66:0e:ee:aa:de:.................f8:60:aa:d1:ea:64:69:4b:b5
prune-backups keep-all=1
username root@pam
 
Thank you for the output.

Do you see the storage name in the output of the mount or df -h commands? Can you try the umount -f command? (See man umount for more information on the -f flag.)
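
For example, a rough sketch of that check and cleanup (the mount point below is only an illustration; Proxmox normally mounts NFS storage under /mnt/pve/<storage-name>):

[CODE]
# List currently mounted NFS file systems
mount -t nfs,nfs4

# Limit df to NFS types as well, since df can hang on a stale NFS mount
df -h -t nfs -t nfs4

# Force-unmount a stale share (example path, adjust to your mount output)
umount -f /mnt/pve/old-nfs-share

# If it still hangs, add a lazy unmount to detach it immediately
umount -f -l /mnt/pve/old-nfs-share
[/CODE]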
 
Hi Moayad,

Many thanks. Your "mount" suggestion identified the issue and "umount" resolved it.

That in turn enabled me to resolve another issue, as the live syslogs are now more stable.

Cheers
 
Hi,

Glad you have solved your issue!


You can mark your thread as [SOLVED] to help other people who have the same issue. Thanks!

have a nice day :)
 
Hi Moayad,

Unfortunately, solving that issue has now created a new major issue with my cluster.

I can now only create and/or clone new VMs on the node that I applied your suggested fix to, and I can only live migrate existing VMs that were created before applying the fix.

Even if I create a VM on the node that has the fix applied, I can migrate the newly created VM to another node but cannot start it there.

Below are some of the error messages:

Mar 31 08:51:40 pve3 pvedaemon[6028]: <root@pam> starting task UPID:pve3:00006036:01E2825C:60639D6C:qmstart:128:root@pam:
Mar 31 08:51:40 pve3 pvedaemon[24630]: start VM 128: UPID:pve3:00006036:01E2825C:60639D6C:qmstart:128:root@pam:
Mar 31 08:51:40 pve3 pvedaemon[24630]: can't activate LV '/dev/san-lvm/vm-128-disk-0': device-mapper: create ioctl on san--lvm-vm--128--disk--0 LVM-dpbLoe9wtM5JwWbVwhNhMfoHZH1qcTI8NlKW3j6c7keJz3wGD0lf5dSV2zo8klQq failed: Device or resource busy
Mar 31 08:51:40 pve3 pvedaemon[6028]: <root@pam> end task UPID:pve3:00006036:01E2825C:60639D6C:qmstart:128:root@pam: can't activate LV '/dev/san-lvm/vm-128-disk-0': device-mapper: create ioctl on san--lvm-vm--128--disk--0 LVM-dpbLoe9wtM5JwWbVwhNhMfoHZH1qcTI8NlKW3j6c7keJz3wGD0lf5dSV2zo8klQq failed: Device or resource busy

create full clone of drive efidisk0 (san1:vm-147-disk-1)
Rounding up size to full physical extent 4.00 MiB
device-mapper: create ioctl on san--lvm-vm--148--disk--0 LVM-dpbLoe9wtM5JwWbVwhNhMfoHZH1qcTI8HcTzCY0bLodexG6tlyImomnx8HXUOtoA failed: Device or resource busy
TASK ERROR: clone failed: error during cfs-locked 'storage-san1' operation: lvcreate 'san-lvm/vm-148-disk-0' error: Failed to activate new LV san-lvm/vm-148-disk-0.

I would appreciate any assistance or suggestions on how to resolve this new issue.

Cheers
 
Further testing: we have created a new VM and are unable to clone it, although migrating the newly created VM is not an issue.
 
Do you have Multipath in your cluster/node?

Have you rebooted the node after unmounting?

Also, please post the task log for cloning the VM between [CODE][/CODE] tags.
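
For example, output from the following would help narrow it down (the volume group name san-lvm and the device-mapper name are taken from your error messages):

[CODE]
# Check which LVs exist in the VG and whether they are active on this node
lvs -o lv_name,vg_name,lv_attr san-lvm

# Look for a leftover device-mapper map for the failing disk; a stale map
# would explain the "Device or resource busy" error
dmsetup ls | grep 'san--lvm'

# Show the multipath topology, in case multipath has claimed a device it should not
multipath -ll
[/CODE]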
 
Hi Moayad,

Yes, multipath is enabled, and I am able to migrate existing VMs, just not ones newly cloned from templates anymore.

No, I have not rebooted since unmounting the NFS shares. Should I? If so, do I need to reboot all 5 nodes?

Attached are screenshots of the task log.

Cheers
 

Attachments

  • Screen Shot 2021-03-31 at 6.40.48 pm.png
  • Screen Shot 2021-03-31 at 6.39.29 pm.png
  • Screen Shot 2021-03-31 at 6.39.21 pm.png
