Issue with NFS

W1T3C

New Member
Apr 6, 2023
Hi!

Sorry if this is the wrong place to ask.
I'm new to Linux administration, so I'm asking for your understanding.

One of my clusters has some issues with the NFS share used for VZDump backups.

It looks like the NFS server's response time jumps around.
Code:
time ls  /mnt/pve/backup2_storage1
dump

real      7m46.929s
user      0m0.003s
sys      0m0.001s

time ls  /mnt/pve/backup2_storage1
dump

real    0m1.186s
user    0m0.000s
sys     0m0.002s

time ls  /mnt/pve/backup2_storage1
dump

real    0m0.459s
user    0m0.000s
sys     0m0.002s

time ls  /mnt/pve/backup2_storage1
dump

real    0m0.459s
user    0m0.000s
sys     0m0.002s

time ls  /mnt/pve/backup2_storage1
dump

real    0m31.230s
user    0m0.002s
sys     0m0.000s


But pvesm nfsscan is quite consistent:
Code:
time pvesm nfsscan xxx.xxx.xxx.xxx
/mnt/storage1/rsnapshot/firma5 yyy.yyy.yyy.yyy

real    0m0.485s
user    0m0.430s
sys     0m0.051s

time pvesm nfsscan xxx.xxx.xxx.xxx
/mnt/storage1/rsnapshot/firma5 yyy.yyy.yyy.yyy

real    0m0.489s
user    0m0.438s
sys     0m0.048s

time pvesm nfsscan xxx.xxx.xxx.xxx
/mnt/storage1/rsnapshot/firma5 yyy.yyy.yyy.yyy

real    0m0.489s
user    0m0.446s
sys     0m0.040s

time pvesm nfsscan xxx.xxx.xxx.xxx
/mnt/storage1/rsnapshot/firma5 yyy.yyy.yyy.yyy

real    0m0.499s
user    0m0.443s
sys     0m0.053s

time pvesm nfsscan xxx.xxx.xxx.xxx
/mnt/storage1/rsnapshot/firma5 yyy.yyy.yyy.yyy

real    0m0.523s
user    0m0.457s
sys     0m0.064s
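
As far as I understand, pvesm nfsscan only asks the server for its export list (via showmount/rpc.mountd) and never touches the exported filesystem, while ls has to issue real READDIR/GETATTR calls against it, so the jumpy times above may point at the server side rather than the network. If it helps, I can sample per-operation latency on the mount with nfsiostat (shipped in nfs-common); for example:
Code:
# report per-operation RTT and execution time for the backup mount every 5 seconds
nfsiostat 5 /mnt/pve/backup2_storage1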

Code:
mount | grep bac
yyy.yyy.yyy.yyy:/mnt/storage1/rsnapshot/firma5 on /mnt/pve/backup2_storage1 type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=xxx.xxx.xxx.xxx,local_lock=none,addr=yyy.yyy.yyy.yyy)

On the backup server, the firewall accepts all NFS connections from the Proxmox server.
Code:
iptables -L -n -v | grep xxx.xxx.xxx.xxx
 7792  464K ACCEPT     all  --  *      *       xxx.xxx.xxx.xxx        0.0.0.0/0            /* firma5 for nfs */

Code:
nfsstat -c
Client rpc stats:
calls      retrans    authrefrsh
12793333   138002     12793761

Client nfs v3:
null             getattr          setattr          lookup           access           
4         0%     262871    2%     112649    0%     119398    0%     18501     0%     
readlink         read             write            create           mkdir           
0         0%     982       0%     9738069  76%     108938    0%     655       0%     
symlink          mknod            remove           rmdir            rename           
0         0%     0         0%     108658    0%     647       0%     1292      0%     
link             readdir          readdirplus      fsstat           fsinfo           
0         0%     0         0%     1297      0%     943109    7%     8         0%     
pathconf         commit           
4         0%     1322661  10%     

Client nfs v4:
null             read             write            commit           open             
46        0%     96        0%     38330    70%     9433     17%     72        0%     
open_conf        open_noat        open_dgrd        close            setattr         
0         0%     428       0%     0         0%     532       0%     101       0%     
fsinfo           renew            setclntid        confirm          lock             
56        0%     0         0%     0         0%     0         0%     0         0%     
lockt            locku            access           getattr          lookup           
0         0%     0         0%     96        0%     1954      3%     367       0%     
lookup_root      remove           rename           link             symlink         
12        0%     67        0%     12        0%     0         0%     0         0%     
create           pathconf         statfs           readlink         readdir         
7         0%     44        0%     1264      2%     0         0%     23        0%     
server_caps      delegreturn      getacl           setacl           fs_locations     
100       0%     328       0%     0         0%     0         0%     0         0%     
rel_lkowner      secinfo          fsid_present     exchange_id      create_session   
0         0%     0         0%     0         0%     20        0%     10        0%     
destroy_session  sequence         get_lease_time   reclaim_comp     layoutget       
9         0%     727       1%     0         0%     10        0%     0         0%     
getdevinfo       layoutcommit     layoutreturn     secinfo_no       test_stateid     
0         0%     0         0%     0         0%     12        0%     0         0%     
free_stateid     getdevicelist    bind_conn_to_ses destroy_clientid seek             
0         0%     0         0%     91        0%     9         0%     0         0%     
allocate         deallocate       layoutstats      clone           
0         0%     0         0%     0         0%     0         0%
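
The rpc line above shows 138002 retransmissions out of 12793333 calls (about 1%). I'm not sure whether that is significant, but I could check whether the counter keeps climbing while one of the slow listings is running; a simple loop using the same nfsstat tool:
Code:
# refresh client RPC stats every 2 seconds; a retrans count that rises
# during a slow ls would hint at network trouble rather than slow disks
watch -n 2 'nfsstat -c -o rpc'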

On the backup server the load is quite consistent, and I/O-demanding tasks run with ionice -c 3 nice -n20.
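
During the next slow listing I could also watch the pool on the backup server itself; a minimal check with the standard zpool tools:
Code:
# per-vdev bandwidth and IOPS for the backup pool, refreshed every 5 seconds
zpool iostat -v zstoragepool 5
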
Any hint on where to look for the source of the problem would be useful.
Thanks!
 
Please post some information about the NFS server: storage setup, OS, filesystem.

Code:
root@backup2:/home/user# exportfs 
/mnt/storage1/rsnapshot/firma5
yyy.yyy.yyy.yyy
Code:
root@backup2:/home/user# cat /etc/os-release 
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
Code:
root@backup2:/home/user# df -hT
Filesystem                   Type   Size  Used Avail Use% Mounted on
zstoragepool                 zfs     84T   83T  800G 100% /mnt/storage1
Code:
root@backup2:/home/user# zpool status
  pool: zstoragepool
 state: ONLINE
  scan: scrub repaired 0B in 20 days 07:35:22 with 0 errors on Sat Apr  1 08:59:59 2023
config:

        NAME          STATE     READ WRITE CKSUM
        zstoragepool  ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            sda       ONLINE       0     0     0
            sdb       ONLINE       0     0     0
            sdc       ONLINE       0     0     0
            sdd       ONLINE       0     0     0
            sde       ONLINE       0     0     0
            sdf       ONLINE       0     0     0
            sdg       ONLINE       0     0     0
            sdh       ONLINE       0     0     0
            sdi       ONLINE       0     0     0
            sdj       ONLINE       0     0     0
            sdk       ONLINE       0     0     0
            sdl       ONLINE       0     0     0

errors: No known data errors
Code:
root@backup2:/home/user# systemctl list-units --type=service | grep nfs
  nfs-blkmap.service                                                                        loaded active running pNFS block layout mapping daemon
  nfs-idmapd.service                                                                        loaded active running NFSv4 ID-name mapping service
  nfs-mountd.service                                                                        loaded active running NFS Mount Daemon
  nfs-server.service                                                                        loaded active exited  NFS server and services
  nfsdcld.service                                                                           loaded active running NFSv4 Client Tracking Daemon
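
In case it matters, this is how I would check the number of kernel nfsd threads on the server; as far as I know Ubuntu defaults to 8, which can be raised via RPCNFSDCOUNT in /etc/default/nfs-kernel-server:
Code:
# current number of nfsd threads serving requests
cat /proc/fs/nfsd/threads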
 
You should ask yourself why one cluster behaves differently from the others. What's the difference between them?
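
A quick way to compare is to dump the effective mount options and per-mount statistics on a node from each cluster and diff the output; nfsstat -m should be available wherever nfs-common is installed:
Code:
# list all NFS mounts on this node together with their effective options
nfsstat -m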
 
