Poor performance on NFS storage

sgargel

I'm running 3 Proxmox 3.4 nodes using NFS shared storage over a dedicated Gigabit network switch.

Code:
root@lnxvt10:~# pveversion
pve-manager/3.4-11/6502936f (running kernel: 2.6.32-43-pve)

root@lnxvt10:~# mount | grep 192.168.100.200
192.168.100.200:/mnt/volume0-zr2/proxmox1/ on /mnt/pve/freenas2-proxmox1 type nfs4 (rw,noatime,vers=4,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.100.30,minorversion=0,local_lock=none,addr=192.168.100.200)

My VMs are qcow2-based.

I'm experiencing very slow performance.
VMs (both Windows and Linux) are very slow and usually hang on iowait, but when I monitor the NAS side there is no corresponding load: Ethernet usage is only about 20-30 Mbit/s.

I don't think the problem is only network-related, because iperf gets a reasonable speed:

Code:
Client connecting to 192.168.100.200, TCP port 5001
TCP window size: 19.6 KByte (default)
------------------------------------------------------------
[  3] local 192.168.100.30 port 56835 connected with 192.168.100.200 port 5001
[ ID] Interval  Transfer  Bandwidth
[  3]  0.0-30.0 sec  3.26 GBytes  933 Mbits/sec
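
For reference, the exact iperf command isn't shown in the capture above; a run along these lines (server on the NAS, 30-second TCP test from the node) would produce that kind of output – the command lines are reconstructed, not taken from the original session:

Code:
# on the NAS (192.168.100.200)
iperf -s

# on the Proxmox node
iperf -c 192.168.100.200 -t 30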

Also, dd on the NAS filesystem itself gets a much better result:
Code:
[root@freenas2] /mnt/volume0-zr2/proxmox1# dd if=/dev/zero of=file.dd bs=320M count=10
10+0 records in
10+0 records out
3355443200 bytes transferred in 16.386541 secs (204768244 bytes/sec)
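
Note that dd from /dev/zero without any sync flag mostly measures the cache on the NAS, so it is an optimistic number. A fairer comparison would be to write through the NFS mount from the Proxmox host with fdatasync – the target path below is just an example:

Code:
root@lnxvt10:~# dd if=/dev/zero of=/mnt/pve/freenas2-proxmox1/test.dd bs=1M count=1000 conv=fdatasync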

OK, the bottleneck could be the NFS/qcow2 combination, but can it really explain results this poor?
 
And what does a direct benchmark of the filesystem on the Proxmox host tell you?
For example: pveperf /mnt/pve/freenas2-proxmox1
If transfer speed or operations/s are slow, is processor usage high at that moment? You can measure it with top, htop, atop, or iotop for disk operations.

I don't use a setup with network storage myself, for server independence and decentralization reasons, so I can't say anything about expected performance. But the Linux tools should help you identify the cause.
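
For example, something like this while pveperf (or a benchmark inside a VM) is running – iotop and iostat are not installed by default, package names assumed:

Code:
apt-get install iotop sysstat
iostat -x 2     # watch %iowait, await and %util on the host
iotop -o        # show only processes that are actually doing I/O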
 
We always test with a 4 GB ISO image: copy the ISO from A to B. That should run at 100-120 MB/s over a Gigabit switch. But for NFS or other shared storage, use 10 Gbit technology; everything else simply doesn't have enough throughput.
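
A rough version of that test, assuming an ISO is already in the local template directory (the file name is a placeholder; the trailing sync makes sure the whole file has really been flushed before time stops):

Code:
time sh -c 'cp /var/lib/vz/template/iso/debian-8.iso /mnt/pve/freenas2-proxmox1/ && sync'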
 
Code:
root@lnxvt10:~# pveperf /mnt/pve/freenas2-proxmox1/
CPU BOGOMIPS:  105594.48
REGEX/SECOND:  1231068
HD SIZE:  6290.13 GB (192.168.100.200:/mnt/volume0-zr2/proxmox1/)
FSYNCS/SECOND:  14.31
DNS EXT:  52.26 ms
DNS INT:  2002.66 ms (xxx)

Code:
root@lnxvt10:~# pveperf
CPU BOGOMIPS:  105594.48
REGEX/SECOND:  1292481
HD SIZE:  94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:  167.68 MB/sec
AVERAGE SEEK TIME: 8.05 ms
FSYNCS/SECOND:  986.99
DNS EXT:  39.52 ms
DNS INT:  2002.79 ms (xx)

This is what the NAS side shows during pveperf:

[Attachment: freenas_lagg0.png – FreeNAS lagg0 interface traffic graph during pveperf]


I think I have to investigate NFS optimization (jumbo frames, mount options, etc.)...
 
Is there any RAID controller on the NFS server? What type of RAID?
Is NFS mounted in synchronous or asynchronous mode?

http://www.slashroot.in/how-do-linux-nfs-performance-tuning-and-optimization
I'll read your link.

No RAID controller; the storage pool is ZFS RAID-Z2 on FreeNAS 9.3.

The mount options are:
192.168.100.200:/mnt/volume0-zr2/proxmox1/ on /mnt/pve/freenas2-proxmox1 type nfs4 (rw,noatime,vers=4,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.100.30,minorversion=0,local_lock=none,addr=192.168.100.200)


If not specified, I think it defaults to "sync" – is that right?
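
One way to check this, if I understand it correctly: nfsstat -m on the Proxmox host shows the options the client actually negotiated, while the sync behaviour that matters for the FSYNC numbers is the ZFS sync property of the dataset on the FreeNAS side (the dataset name below is guessed from the mount path):

Code:
# on the Proxmox host
nfsstat -m

# on FreeNAS
zfs get sync volume0-zr2/proxmox1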
 
Usually on low-end hardware it is better to use local disks for the VMs and the NFS server/storage for backups (NFS in RAID 1).
sarcasm on//
It will give you fewer headaches and you won't try to re-invent something that already exists (like boiling water, for example).
sarcasm off//
 
Do you have reverse DNS records for your Proxmox hosts?

Please use NFSv3, not v4. NFSv4 has more features, like authentication with NIS or LDAP and so on, but here you only need IP-based auth.
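
To force v3 you can add an options line to the NFS storage definition in /etc/pve/storage.cfg – the entry below is reconstructed from the mount output posted earlier, not copied from the real config:

Code:
nfs: freenas2-proxmox1
        server 192.168.100.200
        export /mnt/volume0-zr2/proxmox1
        path /mnt/pve/freenas2-proxmox1
        content images
        options vers=3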

Put this in sysctl.conf:

Code:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
sunrpc.tcp_slot_table_entries = 128
net.ipv4.tcp_max_tw_buckets = 180000
net.ipv4.tcp_mem = 362208 482944 724416
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.udp_mem = 363168 484224 726336

This should increase your performance.
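
To apply the settings without a reboot:

Code:
sysctl -p
sysctl sunrpc.tcp_slot_table_entries   # verify the value

As far as I know sunrpc.tcp_slot_table_entries only takes effect for NFS mounts created after it is set, so remount the storage (or reboot the node) afterwards.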
 
