Hi!
Sorry if this is the wrong place to ask.
I'm new to Linux administration, so please bear with me.
One of my clusters has issues with the NFS share used for VZDump backups.
It looks like the NFS server's response time is jumping all over the place.
Code:
time ls /mnt/pve/backup2_storage1
dump
real 7m46.929s
user 0m0.003s
sys 0m0.001s
time ls /mnt/pve/backup2_storage1
dump
real 0m1.186s
user 0m0.000s
sys 0m0.002s
time ls /mnt/pve/backup2_storage1
dump
real 0m0.459s
user 0m0.000s
sys 0m0.002s
time ls /mnt/pve/backup2_storage1
dump
real 0m0.459s
user 0m0.000s
sys 0m0.002s
time ls /mnt/pve/backup2_storage1
dump
real 0m31.230s
user 0m0.002s
sys 0m0.000s
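To get a better picture of where the time goes, I was thinking of sampling per-operation stats on the mount with nfsiostat (it should be part of nfs-common on Debian/Proxmox, if I'm not mistaken). A minimal sketch of what I'd run:
Code:
# sample NFS stats for this mount every 5 seconds
nfsiostat 5 /mnt/pve/backup2_storage1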
However, pvesm nfsscan is quite consistent.
Code:
time pvesm nfsscan xxx.xxx.xxx.xxx
/mnt/storage1/rsnapshot/firma5 yyy.yyy.yyy.yyy
real 0m0.485s
user 0m0.430s
sys 0m0.051s
time pvesm nfsscan xxx.xxx.xxx.xxx
/mnt/storage1/rsnapshot/firma5 yyy.yyy.yyy.yyy
real 0m0.489s
user 0m0.438s
sys 0m0.048s
time pvesm nfsscan xxx.xxx.xxx.xxx
/mnt/storage1/rsnapshot/firma5 yyy.yyy.yyy.yyy
real 0m0.489s
user 0m0.446s
sys 0m0.040s
time pvesm nfsscan xxx.xxx.xxx.xxx
/mnt/storage1/rsnapshot/firma5 yyy.yyy.yyy.yyy
real 0m0.499s
user 0m0.443s
sys 0m0.053s
time pvesm nfsscan xxx.xxx.xxx.xxx
/mnt/storage1/rsnapshot/firma5 yyy.yyy.yyy.yyy
real 0m0.523s
user 0m0.457s
sys 0m0.064s
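As far as I understand, pvesm nfsscan only asks the server for its export list over RPC (basically what showmount -e does), so it never touches the mounted filesystem itself; that would explain why it stays fast even when ls on the mount hangs. For comparison:
Code:
# query the export list directly from the backup server
showmount -e yyy.yyy.yyy.yyy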
Code:
mount | grep bac
yyy.yyy.yyy.yyy:/mnt/storage1/rsnapshot/firma5 on /mnt/pve/backup2_storage1 type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=xxx.xxx.xxx.xxx,local_lock=none,addr=yyy.yyy.yyy.yyy)
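The mount itself looks like a plain hard NFSv4.2 mount with default timeo/retrans. When a hard mount stalls, I think the kernel usually logs something like "nfs: server ... not responding", so I plan to watch for that during the next slow ls:
Code:
# on the Proxmox node: look for NFS timeout messages in the kernel log
dmesg -T | grep -i 'not responding'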
On the backup server, the firewall accepts all NFS connections from the Proxmox server.
Code:
iptables -L -n -v | grep xxx.xxx.xxx.xxx
7792 464K ACCEPT all -- * * xxx.xxx.xxx.xxx 0.0.0.0/0 /* firma5 for nfs */
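Since the firewall looks fine, I also wondered about the TCP path itself. Checking retransmissions on the connection from the Proxmox node might show whether packets get lost on the way (a rough sketch, the filter just matches the backup server's address):
Code:
# show TCP details (incl. retransmits) for connections to the NFS server
ss -ti dst yyy.yyy.yyy.yyy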
Code:
nfsstat -c
Client rpc stats:
calls retrans authrefrsh
12793333 138002 12793761
Client nfs v3:
null getattr setattr lookup access
4 0% 262871 2% 112649 0% 119398 0% 18501 0%
readlink read write create mkdir
0 0% 982 0% 9738069 76% 108938 0% 655 0%
symlink mknod remove rmdir rename
0 0% 0 0% 108658 0% 647 0% 1292 0%
link readdir readdirplus fsstat fsinfo
0 0% 0 0% 1297 0% 943109 7% 8 0%
pathconf commit
4 0% 1322661 10%
Client nfs v4:
null read write commit open
46 0% 96 0% 38330 70% 9433 17% 72 0%
open_conf open_noat open_dgrd close setattr
0 0% 428 0% 0 0% 532 0% 101 0%
fsinfo renew setclntid confirm lock
56 0% 0 0% 0 0% 0 0% 0 0%
lockt locku access getattr lookup
0 0% 0 0% 96 0% 1954 3% 367 0%
lookup_root remove rename link symlink
12 0% 67 0% 12 0% 0 0% 0 0%
create pathconf statfs readlink readdir
7 0% 44 0% 1264 2% 0 0% 23 0%
server_caps delegreturn getacl setacl fs_locations
100 0% 328 0% 0 0% 0 0% 0 0%
rel_lkowner secinfo fsid_present exchange_id create_session
0 0% 0 0% 0 0% 20 0% 10 0%
destroy_session sequence get_lease_time reclaim_comp layoutget
9 0% 727 1% 0 0% 10 0% 0 0%
getdevinfo layoutcommit layoutreturn secinfo_no test_stateid
0 0% 0 0% 0 0% 12 0% 0 0%
free_stateid getdevicelist bind_conn_to_ses destroy_clientid seek
0 0% 0 0% 91 0% 9 0% 0 0%
allocate deallocate layoutstats clone
0 0% 0 0% 0 0% 0 0%
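What strikes me in the rpc stats is retrans: 138002 retransmissions for roughly 12.8M calls. If that counter keeps climbing while an ls is hanging, I would suspect the network or the server not answering in time rather than the client. I was going to watch it together with per-operation RTT, something like:
Code:
# per-operation RTT / exec times for the backup mount (mountstats is in nfs-common)
mountstats /mnt/pve/backup2_storage1
# watch the client rpc retrans counter while reproducing the slow ls
watch -n 5 'nfsstat -rc'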
On the backup server, the load is quite consistent, and I/O-demanding tasks run under ionice -c 3 nice -n 20.
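One thing I'm not 100% sure about: as far as I know, ionice -c 3 (idle class) is only honored by the CFQ/BFQ I/O schedulers, so if the backup server's disks use mq-deadline or none, those jobs could still starve nfsd despite the nice/ionice. The active scheduler can be checked per disk (sda here is just an example, not my actual device):
Code:
# on the backup server: show which I/O scheduler the backing disk uses
cat /sys/block/sda/queue/scheduler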
Any hint on where to look for the source of the problem would be appreciated...
Thx