Mount no longer works in Proxmox 6 - NFS - Synology

@Alwin
This is the command for trying to add the NAS via Proxmox:
Code:
pvesm add nfs storage --path /mnt/pve/storage --server [ADDRESS] --export [PATH] --content snippets,backup,rootdir,vztmpl,iso,images --maxfiles 4
This is the command for trying to add the NAS via mount:
Code:
mount -t nfs [ADDRESS]:[PATH] /mnt/pve/storage
 
What does mount show? There you should see all mounted filesystems and they should have all the options in brackets.
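For example, to list only the NFS mounts:
Code:
mount -t nfs,nfs4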
 
Code:
rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=[IP_MASKED],mountvers=3,mountport=46186,mountproto=udp,local_lock=none,addr=[IP_MASKED]
 
Are the server & client IPs IPv4 or IPv6?
 
It's IPv4 - both of them.

IMO the best vector to tackle the issue is to actually see how Proxmox 6 differs from Proxmox 5 in this regard. It works perfectly on Proxmox 5.
 
IMO the best vector to tackle the issue is to actually see how Proxmox 6 differs from Proxmox 5 in this regard. It works perfectly on Proxmox 5.
The biggest factor is the kernel jump from 4.15 to 5.3 (or 5.0). As we use an Ubuntu-based kernel with our patches on top, it would be a great help if someone would test the different Ubuntu kernel versions, to get the smallest possible delta between kernel versions. Then it may be possible to narrow it down to a small set of changes. This should bring us closer to a possible solution.
https://kernel.ubuntu.com/~kernel-ppa/mainline/
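Roughly like this (just a sketch; the version directory and the .deb file names in brackets are placeholders, pick the actual builds for your architecture from the index above):
Code:
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.0/[LINUX_MODULES_DEB]
wget https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.0/[LINUX_IMAGE_DEB]
dpkg -i [LINUX_MODULES_DEB] [LINUX_IMAGE_DEB]
reboot
# afterwards retry both the manual mount and pvesm add nfs, and note the first kernel where it breaks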
 
@Alwin
Thing is - mount works on Ubuntu 19.10.
If you need other kernels tested - let me know and I'll gladly help.
How does pvesm add nfs differ?
It seems that the base functionality works well, but the Proxmox implementation stumbles for some reason.
 
Perl:
sub nfs_mount {
    my ($server, $export, $mountpoint, $options) = @_;

    $server = "[$server]" if Net::IP::ip_is_ipv6($server);
    my $source = "$server:$export";

    my $cmd = ['/bin/mount', '-t', 'nfs', $source, $mountpoint];
    if ($options) {
        push @$cmd, '-o', $options;
    }

    run_command($cmd, errmsg => "mount error");
}
The actual mount command is put together in this method. There doesn't seem to be any difference to the command you provided. But you can try to add options to the storage.cfg file (e.g. options vers=3).
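Only as a sketch, corresponding to the pvesm add command from above (placeholders kept as they are):
Code:
# /etc/pve/storage.cfg
nfs: storage
        export [PATH]
        path /mnt/pve/storage
        server [ADDRESS]
        content snippets,backup,rootdir,vztmpl,iso,images
        maxfiles 4
        options vers=3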

EDIT: the method hasn't changed since 2015.
 
Happy New Year 2020 ;-)
We have a Synology RS818RP+ with DSM 6.2.2-24922 Update 4
and one shared folder is exported to the Proxmox nodes
with the following NFS permissions (in the Synology web GUI):
privilege: R/W
squash: Map root to admin
Asynchronous: yes
Non-privileged port: Allowed
Cross-mount: Allowed
It has been mounted on the Proxmox nodes via a standard /etc/fstab entry for several months without problems.

root@mox11:~# grep nas3 /etc/fstab
nas3.verdnatura.es:/volume1/backup4mox /mnt/nas3 nfs _netdev 0 2

root@mox11:~# df -hT |grep nas3
nas3.verdnatura.es:/volume1/backup4mox nfs4 15T 8.2T 6.2T 57% /mnt/nas3

Just out of interest, I've now tested the Proxmox web GUI for mounting the exported filesystem via NFS here:
1. umounted /mnt/nas3 on BOTH nodes
2. checked rpcinfo -p on BOTH nodes
3. mounted via the web GUI and it works, listing files OK
4. checked pvesm status, OK
5. it just gets another mount point, at /mnt/pve/nas3
6. we monitor the mounted filesystems on the servers via Nagios and don't want to reconfigure it now, so I deleted the NFS storage via the Proxmox web GUI
BUT an interesting event occurred:
the storage nas3 was deleted via the Proxmox web GUI and pvesm status no longer listed it, BUT the filesystem was still mounted at /mnt/pve/nas3,
so I had to umount it manually on BOTH nodes.
After that I mounted /mnt/nas3 again and Nagios is happy now ;)
hope it helps
Nada

root@mox11:~# df -hT |grep nas3
nas3.verdnatura.es:/volume1/backup4mox nfs4 15T 8.2T 6.2T 57% /mnt/pve/nas3

root@mox11:~# pvesm status
Name Type Status Total Used Available %
backup dir active 17156896 10210360 6051972 59.51%
local dir active 17156896 10210360 6051972 59.51%
local-lvm lvmthin active 10469376 0 10469376 0.00%
nas3 nfs active 15372268800 8736210816 6635939200 56.83%
san2020janpool lvmthin active 94371840 2019557 92352282 2.14%
zfs zfspool active 30220000 4187292 26032708 13.86%

root@mox11:~# pveversion -V
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-11-pve: 4.15.18-34
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-15
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-4
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
 
2. checked rpcinfo -p on BOTH nodes
If this reported the info from the NFS server, then it seems to be more a combination of NFS client + server configuration issues than a bug, for the installations where it shows no output.

6. we monitor the mounted filesystems on the servers via Nagios and don't want to reconfigure it now, so I deleted the NFS storage via the Proxmox web GUI
BUT an interesting event occurred:
the storage nas3 was deleted via the Proxmox web GUI and pvesm status no longer listed it, BUT the filesystem was still mounted at /mnt/pve/nas3,
so I had to umount it manually on BOTH nodes.
The storage should be unmounted. Maybe there is an entry in the syslog that tells why it wasn't.
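For example, either of these (just a sketch, using the storage name from this thread; adjust the date to around when the storage was removed):
Code:
grep -i nas3 /var/log/syslog
journalctl --since "2020-01-01" | grep -i nas3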
 
@Alwin
So, just to spice things up, I went a bit further :D
I have 2 identical NASes - one offsite and one onsite.
  1. Both mount perfectly well via pvesm add nfs on Proxmox 5
  2. Both mount perfectly well via mount -t on Proxmox 6
  3. The offsite one doesn't mount via pvesm add nfs on Proxmox 6
  4. The onsite one mounts perfectly well via pvesm add nfs on Proxmox 6
Best guess - there is a change in the tracing algorithm between Proxmox 5 and Proxmox 6.
Hope this helps in moving the conversation forward.

Edit: Another interesting observation (probably it's obvious, but still): since the 2 NASes are identical, I've manually edited the storage.cfg so that the settings on Proxmox 6 match those of the offsite NAS rather than the onsite one. No luck - the storage status was unknown. I went and manually mounted it via mount -t, but the status remained unknown, even though I could easily browse the NAS content via the CLI.
 
By offsite, do you mean it needs to connect through firewalls?
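If it is an NFSv3 mount through port forwarding, it may also be worth checking which ports the NAS announces (the placeholder is the NAS address); besides 2049, the portmapper and mountd ports listed there would need to be reachable as well:
Code:
rpcinfo -p [ADDRESS]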
The firewall is disabled on both ends. The offsite NAS is exposed to the web via port forwarding on the router. This is why my assumption is that it's a tracing-related issue, and it's very strange that it works on Proxmox 5, even though the algorithm hasn't changed. The only possibility is that the tools used by the algorithm have changed.
 
  1. Both mount perfectly well via mount -t on Proxmox 6
  2. The offsite one doesn't mount via pvesm add nfs on Proxmox 6
This combination is odd, as the storage plugin also runs mount -t.
 
This combination is odd, as the storage plugin also runs mount -t.
Exactly. This is what's so weird about this whole thing. For some reason the thing that's supposed to work does not.
And I'd be interested in debugging the script by understanding which vars get computed - the address, the export.
To be honest, I expect it to be stuck at the address tracing, since the PVE nfsscan fails for both the domain and the IP.
Can you tell me where I can find that algorithm and how to patch it so that it outputs the vars?
 
It's under /usr/share/perl5/PVE/Storage/NFSPlugin.pm.

Perl:
use Data::Dumper;
print Dumper($var);
Place that after the variable has been assigned a value. You might need to restart pvedaemon.service.
https://perldoc.perl.org/Data/Dumper.html
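For example, placed right inside nfs_mount (just a sketch; I'd use warn() instead of print, so the output should end up in the daemon's journal rather than on a stdout you never see):
Perl:
use Data::Dumper;

sub nfs_mount {
    my ($server, $export, $mountpoint, $options) = @_;

    $server = "[$server]" if Net::IP::ip_is_ipv6($server);
    my $source = "$server:$export";

    # debug only: dump everything the mount command will be built from
    warn "nfs_mount debug: " . Dumper({
        server     => $server,
        export     => $export,
        mountpoint => $mountpoint,
        options    => $options,
        source     => $source,
    });

    # ... rest of the function unchanged ...
}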
 
This is the function I've updated for debugging:
Perl:
sub nfs_mount {
    my ($server, $export, $mountpoint, $options) = @_;

    $server = "[$server]" if Net::IP::ip_is_ipv6($server);
    my $source = "$server:$export";

    my $filename = '/home/temp/output.txt';
    open(FH, '>', $filename) or die $!;
    print FH $source;
    close(FH);

    my $cmd = ['/bin/mount', '-t', 'nfs', $source, $mountpoint];
    if ($options) {
        push @$cmd, '-o', $options;
    }

    run_command($cmd, errmsg => "mount error");
}

No output, whether the mount is successful or unsuccessful.
 
