Mount no longer works in Proxmox 6 - NFS - Synology

I tried it on my end and the server:port was written to my test file. Try to umount the nfs storage and run pvesm list (possibly multiple times) to get the storage mounted again.
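The umount-and-relist step can be sketched as below; the storage id and mount path are assumptions (PVE mounts NFS storages under /mnt/pve/&lt;storage-id&gt;):

```shell
# 'nas-nfs' is a hypothetical storage id; substitute your own.
umount /mnt/pve/nas-nfs    # detach the stale NFS mount
pvesm list nas-nfs         # querying the storage makes PVE activate (re-mount) it
pvesm status               # verify the storage now shows as active
```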
 
Ok, so the server restart helped for some reason.
Here's a new riddle for you:
  • when mounting the local NAS, there is output to the text file
  • when mounting the offsite NAS, there is no output at all (seems that the function is not reached during execution).
Where along the way could it be getting blocked?
 
Try printing some extra output into the file to verify. The function should be reached.
 
Hm. Odd, as the nfs_mount method is the one that is called for the mount.

The plugins are loaded through Storage.pm, which calls Plugin.pm, which in turn executes the method in the plugin.
Code:
/usr/share/perl5/PVE/Storage/Plugin.pm
/usr/share/perl5/PVE/Storage.pm
This will leave you with more debugging. :/ A workaround would be to create a directory storage and set the --is_mountpoint option. The mount can then be done through fstab, and PVE will only check whether it is mounted.
https://pve.proxmox.com/pve-docs/pvesm.1.html
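A minimal sketch of that workaround; the NAS address, export path, and storage id are placeholders, not values from this thread:

```shell
# 1) Let the OS handle the mount via /etc/fstab (placeholder values):
#      192.168.1.50:/volume1/backup  /mnt/nas-backup  nfs  defaults  0  0
mkdir -p /mnt/nas-backup
mount /mnt/nas-backup

# 2) Register a directory storage on top; --is_mountpoint tells PVE to
#    only check that the path is mounted instead of probing it itself:
pvesm add dir nas-backup --path /mnt/nas-backup --is_mountpoint yes --content backup
```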
 
I also tried to connect NFS with Proxmox 6.1.7, with no luck; with Proxmox 5.4.13 everything is okay... any ideas?
 
Same as for the others. Read through the thread and try the suggestions. If nothing helps, then explain your setup in detail and start debugging. ;)
 
@Alwin
After some long interactions with the ISP and debugging on my end, here are the findings:
  1. NFS fails when the check_connection function runs
  2. for some reason it generates the following error: command '/sbin/showmount --no-headers --exports XX.XX.XX.XX' failed: got timeout
  3. it works perfectly fine on Proxmox 5
  4. I've tried to run /sbin/showmount --no-headers --exports XX.XX.XX.XX directly on the command line, and it times out on Proxmox 6 but works fine on Proxmox 5 (same behaviour as when adding storage via pvesm add nfs)
  5. the showmount version in both cases is 1.3.3
  6. I've cheated and removed the failure condition from check_connection. That allowed me to add the storage to Proxmox 6. It shows with a question mark (probably the status function fails as well), but backups work fine.
Question of the day is - why does showmount fail?

Could there be additional networking settings in Proxmox 6 compared to Proxmox 5? Again, this is a fresh installation without any configs, settings, or apps installed on top.
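As an aside on the error wording: PVE runs external commands under a time limit and kills them when they hang. The same mechanism can be demonstrated with coreutils timeout (this is only an analogy, not PVE's actual code):

```shell
# A command that exceeds its time limit is killed; coreutils 'timeout'
# signals this with exit code 124, much like a hanging showmount is
# surfaced as "failed: got timeout".
timeout 1 sleep 3
echo "exit code: $?"   # prints "exit code: 124"
```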

Update 1: Forgot to mention that rpcinfo -p XX.XX.XX.XX fails the same way as showmount.

Update 2: I can telnet to both ports 111 and 2049, and I can SSH into the NAS from Proxmox 6.
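One thing worth noting about Update 2: telnet only exercises TCP, while showmount and rpcinfo talk to the portmapper on port 111, typically over UDP first, so a dropped UDP path can time out even when TCP connects fine. To compare the two transports explicitly (XX.XX.XX.XX remains the NAS placeholder):

```shell
# Query the portmapper explicitly over each transport and compare:
rpcinfo -T udp XX.XX.XX.XX portmapper   # UDP query
rpcinfo -T tcp XX.XX.XX.XX portmapper   # TCP query
```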
 
Hey!

Any ideas?
Would somebody be willing to help, with temporary access to Proxmox 6, to rule out the networking issues?
 
@Vladimir Bulgaru, I didn't get around to looking at it yet. But showmount uses RPC as well, and since rpcinfo fails (as with the other NFS issues in this thread), it may well be an issue with the NFS implementation in the kernel.
 
Now I am really confused, which is a good and a bad sign at once :D

Since you've mentioned that it may be a kernel issue, I was wondering what the container behaviour would be. I created an Ubuntu 18.04 container and first deployed it on Proxmox 5: sudo rpcinfo -p xx.xx.xx.xx worked fine. I exported this very same container to Proxmox 6 and, to my huge surprise, sudo rpcinfo -p xx.xx.xx.xx worked fine there too.

I don't claim to be an expert on virtualisation, but AFAIK containers use the same kernel as the hypervisor. Hence the kernel works just fine. Given that the container and the hypervisor have exactly the same environment when it comes to:
  1. firewall and networking
  2. kernel
  3. rpcbind version
I really struggle to understand what else can fail :)
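For reproducibility, the host-versus-container comparison above can be run from the PVE host itself (the container ID 101 is a placeholder, and rpcinfo must be installed inside the container):

```shell
# From the PVE 6 host: this was observed to time out.
rpcinfo -p XX.XX.XX.XX

# Same query, executed inside the running container via pct:
pct exec 101 -- rpcinfo -p XX.XX.XX.XX
```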
 
Hm... good question. Containers run in their own cgroups and namespaces. And it matters whether they are privileged or unprivileged containers. But what version of showmount / rpcinfo does the Ubuntu container have?
 
The container is unprivileged.
As for the rpcbind version (the main package that provides rpcinfo), this is for the container:
Code:
rpcbind:
  Installed: 0.2.3-0.6
  Candidate: 0.2.3-0.6
  Version table:
 *** 0.2.3-0.6 500
        500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
        100 /var/lib/dpkg/status

and this is for Proxmox 6:
Code:
rpcbind:
  Installed: 1.2.5-0.3+deb10u1
  Candidate: 1.2.5-0.3+deb10u1
  Version table:
 *** 1.2.5-0.3+deb10u1 500
        500 http://ftp.debian.org/debian buster/main amd64 Packages
        100 /var/lib/dpkg/status
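For reference, version tables like the ones above come from apt's policy view; presumably they were gathered with the same command on both systems:

```shell
# Show installed/candidate versions of rpcbind and their sources:
apt-cache policy rpcbind
```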
 
I have the same setup and the exact same issue. NFS does not work, but iSCSI, for example, does. Synology NAS and Proxmox 6.1-8.

The NFS share worked prior to the upgrade from version 5. The network has not been changed, so it was the upgrade that broke something. And telnetting to port 111 from Proxmox to the NAS works on version 6 as well.

"pvesm list NFS2" for example results in "storage 'NFS2' is not online", and when diagnosing it with various commands from this thread it errors with "no route to host".
 
It would appear this interferes with the destruction of machines as well. I put a machine on iSCSI storage, and whenever I destroy it, the task fails with "storage 'NFS2' is not online".
 
Quick question: have you tried using the mount command directly in the CLI? In my case it works.
The whole Proxmox 6 attachment routine fails because rpcinfo fails.
Is it identical in your case?
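The direct-mount check mentioned above can be sketched like this (the export path and mount point are placeholders): if the manual mount succeeds while pvesm add nfs fails, the problem is in the RPC pre-check (check_connection), not in NFS itself.

```shell
mkdir -p /mnt/nfs-test
# Mount the export by hand, bypassing PVE's check_connection:
mount -t nfs XX.XX.XX.XX:/volume1/backup /mnt/nfs-test
ls /mnt/nfs-test      # confirm the export contents are visible
umount /mnt/nfs-test
```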
 
