NFS v4.2 multipathing

baudneo

Member
Nov 7, 2020
10
0
6
Mars 2032
www.zoneminder.com
Hi, I am trying to set up NFS 4.2 multipathing without success.

I have an R540 with a test ZFS pool that is being served over NFS. When I test the pool locally, I can get 2.6 GB/s. The R540 has 2x 10Gb ports with IPs 10.0.1.92 and 10.0.1.93. As far as I understand, the server needs no special configuration, but I did adjust some system tunables and increased the NFSD count from 8 to 10.

Bash:
╰─❯ dd if=/dev/zero of=/nvr/test.img bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 1.98123 s, 2.6 GB/s

Bash:
# /etc/sysctl.d/nfs-server.conf
#- See https://docs.google.com/document/u/0/d/11mAKbb6BRi0lv6mawOTzcG1GprzB4UubaHRM8XixjrU/mobilebasic?pli=1

#- 1Gb with 32MB socket buffers
#net.core.rmem_max = 33554432
#net.core.wmem_max = 33554432
#net.ipv4.tcp_rmem = 4096 87380 33554432
#net.ipv4.tcp_wmem = 4096 65536 33554432

#- 10Gb with 320MB socket buffers (2GB max)
net.core.rmem_max = 335544320
net.core.wmem_max = 335544320
net.ipv4.tcp_rmem = 4096 87380 335544320
net.ipv4.tcp_wmem = 4096 65536 335544320

net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 1
#- configure as needed
net.core.netdev_max_backlog = 30000

net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
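
For reference, the drop-in can be reloaded and spot-checked without a reboot, something like this (assuming the file sits at /etc/sysctl.d/nfs-server.conf as above):

Bash:
# reload every sysctl drop-in, then spot-check a couple of the values set above
sudo sysctl --system
sysctl net.core.rmem_max net.ipv4.tcp_congestion_control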


Bash:
# /etc/default/nfs-kernel-server
# Number of servers to start up
RPCNFSDCOUNT=10

# Runtime priority of server (see nice(1))
RPCNFSDPRIORITY=0

# Options for rpc.mountd.
# If you have a port-based firewall, you might want to set up
# a fixed port here using the --port option. For more information,
# see rpc.mountd(8) or http://wiki.debian.org/SecuringNFS
# To disable NFSv4 on the server, specify '--no-nfs-version 4' here
RPCMOUNTDOPTS="--manage-gids"

# Do you want to start the svcgssd daemon? It is only required for Kerberos
# exports. Valid alternatives are "yes" and "no"; the default is "no".
NEED_SVCGSSD=""

# Options for rpc.svcgssd.
RPCSVCGSSDOPTS=""

# Listen on certain interfaces
#RPCNFSDOPTS="-H 10.0.20.2 -H 10.0.30.1"
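
For what it's worth, the service needs a restart for RPCNFSDCOUNT to take effect; a quick way to sanity-check the running thread count afterwards (assuming a Debian-style nfs-kernel-server install as above):

Bash:
# restart nfsd so RPCNFSDCOUNT is picked up, then read the live thread count
sudo systemctl restart nfs-kernel-server
cat /proc/fs/nfsd/threads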


On the client side, I can only saturate one 10Gb link. The client also has 2x 10Gb ports, with IPs 10.0.1.9 and 10.0.1.10.

I manually mounted the NFS share twice, once from each of the R540's IPs, binding each mount to a different client IP with the clientaddr option:

Code:
❯ sudo mount -t nfs4 -o async,_netdev,nconnect=8,clientaddr=10.0.1.9 10.0.1.92:/nvr /nvr && sudo mount -t nfs4 -o async,_netdev,nconnect=8,clientaddr=10.0.1.10 10.0.1.93:/nvr /nvr
❯ dd if=/dev/zero of=/nvr/test.img bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 5.5765 s, 940 MB/s
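
To double-check which link is actually carrying the traffic during the dd, something like this can be run on the client (sar comes from the sysstat package; enp1s0f0/enp1s0f1 are placeholders for the real 10Gb interface names):

Bash:
# per-interface throughput while dd runs; enp1s0f0/enp1s0f1 stand in for the two 10Gb ports
sar -n DEV 1 | grep -E 'enp1s0f[01]'
# list the TCP connections the NFS client has open to port 2049
ss -tn '( dport = :2049 )'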

I hope I am missing something obvious. Any insight would be appreciated.

TIA!
 
I also tried the openEuler docs, but those obviously only apply to the openEuler kernel, which ships an nfs_multipath.ko module:

Bash:
❯ sudo mount -t nfs -o localaddrs=10.0.1.9~10.0.1.10,remoteaddrs=10.0.1.92~10.0.1.93 10.0.1.92:/nvr /nvr
Created symlink /run/systemd/system/remote-fs.target.wants/rpc-statd.service → /usr/lib/systemd/system/rpc-statd.service.
mount.nfs: an incorrect mount option was specified for /nvr
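
Presumably the localaddrs/remoteaddrs options are only understood when openEuler's nfs_multipath module is present, which is easy to confirm on a stock kernel (a quick check, assuming a mainline/distro kernel):

Bash:
# nfs_multipath only ships with the openEuler kernel, so this should fail here
modinfo nfs_multipath || echo "nfs_multipath module not available on this kernel"
lsmod | grep nfs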
 
