Mount no longer works in Proxmox 6 - NFS - Synology

What network card are you using?
The nodes all use different ones, from onboard (motherboard) NICs to external cards.

And could you try another Kernel?
Not 100% sure how to do this (never done it before, never needed to)
Download the source -> compile -> install?

You linked to the Ubuntu repo; however, those packages can't be installed as they expect an Ubuntu kernel
 
The nodes all use different ones, from onboard (motherboard) NICs to external cards.
Yes, I thought so. I just wanted to rule out any coincidence, such as all nodes using the same NIC model. One difference is that PVE 6.x uses the in-tree Intel modules, contrary to PVE 5.4.

I see, I wasn't clear with my thought. ;)
Not 100% sure how to do this (never done it before, never needed to)
Download the source -> compile -> install?

You linked to the Ubuntu repo; however, those packages can't be installed as they expect an Ubuntu kernel
In general, for those kernels from Ubuntu, each folder already contains pre-built .deb packages; download the kernel and the modules package and install them via 'dpkg -i package0 package1'. If you use ZFS as the root pool (rpool), then you would need to compile them in.

But as we already have 5.x kernels for Buster, you can download and install those:
http://download.proxmox.com/debian/pve/dists/buster/pve-no-subscription/binary-amd64/
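For example, something like this (a rough sketch; the exact .deb file name is just a placeholder, pick the version you want to test from the directory listing):

Code:
# download one of the pve-kernel packages from the listing above (placeholder file name)
wget http://download.proxmox.com/debian/pve/dists/buster/pve-no-subscription/binary-amd64/pve-kernel-5.0.15-1-pve_5.0.15-1_amd64.deb
# install it and reboot into the new kernel (pick it in the GRUB menu if it is not the default)
dpkg -i pve-kernel-5.0.15-1-pve_5.0.15-1_amd64.deb
reboot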

EDIT: could you please also post your network config (ip addr / ip route / ip link)?
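That is, the output of:

Code:
ip addr
ip route
ip link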
 
Everything works fine on Proxmox 5.4-3: NFS is still up and running, and adding a new NFS storage also works.

Code:
root@proxmox:~# pveversion --verbose
proxmox-ve: 5.4-1 (running kernel: 5.0.8-2-pve)
pve-manager: 5.4-3 (running version: 5.4-3/0a6eaa62)
pve-kernel-4.15: 5.3-3
pve-kernel-5.0.8-2-pve: 5.0.8-2
pve-kernel-5.0.8-1-pve: 5.0.8-1
pve-kernel-4.15.18-12-pve: 4.15.18-35
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-50
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-41
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-25
pve-cluster: 5.0-36
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-19
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-3
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-50
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2

EDIT: could you please also post your network config (ip addr / ip route / ip link)?
Already posted that, look here: https://forum.proxmox.com/threads/mount-no-longer-works-in-proxmox-6-nfs-synology.56503/#post-260584
 
Then we are getting closer. Did you try the 5.0.12 & 5.0.15 kernels already?

I did now ;) it was getting late yesterday.
Everything works fine.

Node 3 (less spam, as it only has 1 network card); I also changed the MAC address.
Code:
root@proxmox3:~# ifconfig
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:00:00:00:00:00  txqueuelen 1000  (Ethernet)
        RX packets 94109421  bytes 36458469675 (33.9 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 84984009  bytes 19792057636 (18.4 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 30592  bytes 5350262 (5.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 30592  bytes 5350262 (5.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth210i0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether fe:85:ff:72:95:6c  txqueuelen 1000  (Ethernet)
        RX packets 10838  bytes 948418 (926.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 56701  bytes 25582464 (24.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.6  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::f64d:30ff:fe67:7482  prefixlen 64  scopeid 0x20<link>
        ether f4:4d:30:67:74:82  txqueuelen 1000  (Ethernet)
        RX packets 84852579  bytes 34638699207 (32.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 83844682  bytes 19716619734 (18.3 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 
I did now ;) it was getting late yesterday.
Everything works fine.
So you did test them and it worked? If so, could you please test whether the 5.0.18-1 & 5.0.18-2 kernels show the issue?

enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 ether 00:00:00:00:00:00 txqueuelen 1000 (Ethernet)
This looks strange, there should be a MAC address, as with the other interfaces.
 
I did say in the post that I had changed the MAC address :)
Ah, I didn't realize that was a manual change. ;)

To recap: PVE 5.4 works with all 5.x kernels, but PVE 6.x doesn't. Then we can rule the kernel out. Sadly, this doesn't make it any easier. :confused:

Could you please check if the 'rpcbind.service' is running on the PVE 6.x node? That package has been updated too.
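A quick way to check this (just standard systemd/rpc commands):

Code:
# on the PVE 6.x node: is rpcbind up and answering locally?
systemctl status rpcbind.service
rpcinfo -p localhost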
 
It is running on the nodes and on the freshly installed virtual one.
For one, could you disable the firewall on the VM's virtual network interface (it defaults to on), even if the firewall is disabled at the datacenter level? Then restart the 'rpcbind.service' and try to run rpcinfo again.
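Roughly like this (a sketch only; the VMID, NIC model/bridge and the NAS address are placeholders, and the firewall flag can also be unticked on the VM's network device in the GUI):

Code:
# on the host running the test VM: turn off the firewall flag on its first NIC
# (placeholder VMID/model/bridge; re-specify the existing MAC via macaddr=... or a new one gets generated)
qm set 100 --net0 virtio,bridge=vmbr0,firewall=0
# inside the PVE 6 guest: restart rpcbind and query the NAS again
systemctl restart rpcbind.service
rpcinfo -p <nas-ip>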
 
For one, could you disable the firewall on the VM's virtual network interface (it defaults to on), even if the firewall is disabled at the datacenter level? Then restart the 'rpcbind.service' and try to run rpcinfo again.

So: shut down the virtual Proxmox 6, disable the firewall, boot, test --> fails with "no route to host".
 

Attachment: Screenshot 2019-08-08 at 13.05.47.png
So: shut down the virtual Proxmox 6, disable the firewall, boot, test --> fails with "no route to host".
Ok, this is really strange. Does the reverse have the same effect?
 
Ok, this is really strange. Does the reverse have the same effect?
You mean rpcinfo from the NAS to the VM? No, that works fine.

Code:
rpcinfo -p 192.168.10.241
   program vers proto   port  service
    100000    4   tcp    111
    100000    3   tcp    111
    100000    2   tcp    111
    100000    4   udp    111
    100000    3   udp    111
    100000    2   udp    111
 
Hi guys,
I'm following this thread because I have a similar issue.

With Proxmox 5.4, all nodes were able to connect to the PVE storage NFS store *NFS02* in another (routed) network.
During the upgrade to Proxmox 6, every upgraded node lost the connection to the NFS store *NFS02*, while the connection to the NFS store *NFS01* within the same network is still available.

There is no firewall issue.
A manual mount via fstab (*NFS02*:/export/Datastore4 /mnt/pve/BackupStorage nfs4 defaults,user,exec 0 0) still works.
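For reference, the fstab entry and a quick manual re-test (paths as above; *NFS02* stands for the NFS server's hostname):

Code:
# /etc/fstab entry used for the manual mount
*NFS02*:/export/Datastore4  /mnt/pve/BackupStorage  nfs4  defaults,user,exec  0  0

# re-mount and verify by hand
mount /mnt/pve/BackupStorage
df -h /mnt/pve/BackupStorage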

To me it seems to be a Proxmox issue.

Some details:

Proxmox 6 latest
4x Nodes HP Proliant DL360 G7
NodeNet x.x.29.1-255
ClusterNet x.x.39.1-255
CephNet x.x.49.1-255

*NFS01*
OpenmediaVault 4.1.11
1x Node HP Proliant DL380 G6
IP: x.x.29.y (within NodeNet)
NFS Store

*NFS02*
OpenmediaVault 2.2.10
1x Node HP Proliant DL380 G6
IP: a.a.a.a (routed network)
NFS Store

Regards
Daniel
 
Hm - maybe it's related to the seemingly different versions of OpenmediaVault (2.2.10 not working vs. 4.1.11 working)?
Else please post the journal from the node and the respective storage:
* when you try to mount it via the GUI and it doesn't work
* when you mount it via the shell and it works
Also keep an eye open for other problems in the logs.
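Something like the following in a second shell while reproducing both cases should be enough (a sketch; the unit names are just the usual suspects):

Code:
# follow the node's journal while adding/mounting the storage
journalctl -f
# or narrow it down, e.g. to the PVE daemons
journalctl -f -u pvedaemon -u pveproxy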

hope this helps!
 
@Yvan Watchman: No issue with OpenmediaVault 2.2.10, I think, since it works on the OS level. It's neither an issue of routing nor of firewall rules.
@Stoiko: There is not much logging. Mostly snmpd, or:

Aug 09 10:34:00 proxmox04 systemd[1]: Starting Proxmox VE replication runner...
Aug 09 10:34:01 proxmox04 systemd[1]: pvesr.service: Succeeded.
Aug 09 10:34:01 proxmox04 systemd[1]: Started Proxmox VE replication runner.

I ran journalctl -f while adding the NFS storage, entering the ID and IP. Proxmox tries to scan the exports... failed. No log entry.
But Proxmox states: create storage failed: error with cfs lock 'file-storage_cfg': storage 'Backupst' is not online (500)

Manually mounted using fstab:

Aug 09 10:41:35 proxmox04 nfsidmap[3727273]: nss_getpwnam: name 'root@xmedia.loc' does not map into domain 's24-test.loc'
Aug 09 10:41:35 proxmox04 audit[1924]: AVC apparmor="ALLOWED" operation="connect" profile="/usr/sbin/sssd" name="/run/dbus/system_bus_socket" pid=1924 comm="sssd_nss" requested_mask="wr" denied_mask="wr" fsuid=0 ouid=0
Aug 09 10:41:35 proxmox04 audit[1513]: USER_AVC pid=1513 uid=107 auid=4294967295 ses=4294967295 msg='apparmor="ALLOWED" operation="dbus_method_call" bus="system" path="/org/freedesktop/DBus" interface="org.freedesktop.DBus" member="Hello" mask="send" name="org.freedesktop.DBus" pid=1924 label="/usr/sbin/sssd" peer_label="unconfined"
exe="/usr/bin/dbus-daemon" sauid=107 hostname=? addr=? terminal=?'
Aug 09 10:41:35 proxmox04 audit[1513]: USER_AVC pid=1513 uid=107 auid=4294967295 ses=4294967295 msg='apparmor="ALLOWED" operation="dbus_method_call" bus="system" path="/org/freedesktop/systemd1" interface="org.freedesktop.systemd1.Manager" member="LookupDynamicUserByName" mask="send" name="org.freedesktop.systemd1" pid=1924 label="/usr/sbin/sssd" peer_pid=1 peer_label="unconfined"
exe="/usr/bin/dbus-daemon" sauid=107 hostname=? addr=? terminal=?'
Aug 09 10:41:35 proxmox04 kernel: audit: type=1400 audit(1565340095.662:75): apparmor="ALLOWED" operation="connect" profile="/usr/sbin/sssd" name="/run/dbus/system_bus_socket" pid=1924 comm="sssd_nss" requested_mask="wr" denied_mask="wr" fsuid=0 ouid=0
Aug 09 10:41:35 proxmox04 kernel: audit: type=1107 audit(1565340095.662:76): pid=1513 uid=107 auid=4294967295 ses=4294967295 msg='apparmor="ALLOWED" operation="dbus_method_call" bus="system" path="/org/freedesktop/DBus" interface="org.freedesktop.DBus" member="Hello" mask="send" name="org.freedesktop.DBus" pid=1924 label="/usr/sbin/sssd" peer_label="unconfined"
exe="/usr/bin/dbus-daemon" sauid=107 hostname=? addr=? terminal=?'
Aug 09 10:41:35 proxmox04 kernel: audit: type=1107 audit(1565340095.662:77): pid=1513 uid=107 auid=4294967295 ses=4294967295 msg='apparmor="ALLOWED" operation="dbus_method_call" bus="system" path="/org/freedesktop/systemd1" interface="org.freedesktop.systemd1.Manager" member="LookupDynamicUserByName" mask="send" name="org.freedesktop.systemd1" pid=1924 label="/usr/sbin/sssd" peer_pid=1 peer_label="unconfined"
exe="/usr/bin/dbus-daemon" sauid=107 hostname=? addr=? terminal=?'




Daniel
 
nss_getpwnam: name 'root@xmedia.loc' does not map into domain 's24-test.loc'
Maybe it's related to the error above?
* you could try whether it works with NFS version 3 (see the sketch below)
* please also take a look at the logs of the OpenmediaVault while trying to add/mount it
(unless I misunderstood and proxmox04 is your OpenmediaVault?)
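For the NFS version 3 test, a manual mount is probably the quickest check (a sketch; server name, export and mount point are placeholders based on your post):

Code:
# force NFSv3 for a manual test mount
mkdir -p /mnt/nfstest
mount -t nfs -o vers=3 <NFS02>:/export/Datastore4 /mnt/nfstest
# for the PVE storage itself, the version can be pinned via the storage's 'options'
# field in /etc/pve/storage.cfg, e.g. 'options vers=3'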

why do you need sssd on your server?
 
OpenmediaVault is the NFS storage OS - sorry for not being clear about that.
https://www.openmediavault.org/

I have to figure out where the root@xmedia.loc originates. The NFS server is running in the domain "xmedia.loc", so this is probably part of the host's answer.
I already tried with NFS 3, 4, and the default.

sssd is used for logins managed by Microsoft AD without joining Linux to the Microsoft AD.


"nss_getpwnam: name 'root@xmedia.loc' does not map into domain 's24-test.loc'" Does not reproduce.
 
