FreeNAS NFS storage: slow read performance with ATTO at 64KB+ block sizes

Attilas
Dec 17, 2013
Hey guys,

First of all, I just want to thank you for Proxmox VE. I only discovered it last weekend (don't know why it took so long). I have been using the free version of VMware ESXi 5.1 (and earlier releases) for two years for personal servers and projects at home. I converted both of my servers to try out Proxmox live migration and clustering, and I must say I'm more than impressed. Everything works smoothly, cleanly and without errors!

Anyway, I'm almost at the point of swapping ESXi for Proxmox, but I keep getting some strange performance results with my shared storage on FreeNAS NFS. I have been using NFS with ESXi for years without any performance issues.

Here are the ATTO results I get from a guest on Proxmox and from ESXi running on the same hardware. No matter what I try on Proxmox, as soon as ATTO reaches 64KB+ block sizes, read performance is strangely capped at around 10 MB/sec. I get the same result on either of my two hosts, with both Windows XP and Windows 2012 guests. It looks like the problem only shows up in Windows guests when using NFS...

Things I have tried:

- All disk image formats - still slow
- Both hosts against the shared NFS storage - both slow
- VirtIO - still slow
- Local storage - good, speed is normal
- dd benchmark directly on the NFS mount over SSH with 1M blocks - good, speed is normal
- dd benchmark in a CentOS guest with 1M blocks - good, speed is normal - I was wrong on this one: a later test with a much bigger data size also showed slow read speeds. Maybe Proxmox is doing some read caching on its side!? (A sketch of the kind of dd test I mean is below.)
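For reference, this is roughly the kind of dd test I am talking about; the path and sizes are just examples, adjust for your setup. Dropping the page cache between the write and the read should make sure the read really hits the NAS instead of a local cache:

# write test: push 4 GB to the NFS mount (example path)
dd if=/dev/zero of=/mnt/pve/SAN02/testfile bs=1M count=4096 conv=fdatasync

# drop the Linux page cache so the read below is not served from RAM
sync
echo 3 > /proc/sys/vm/drop_caches

# read test: pull the same file back
dd if=/mnt/pve/SAN02/testfile of=/dev/null bs=1M

# clean up
rm /mnt/pve/SAN02/testfile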

Any ideas?

Attachments: VMWare NFS Freenas.png, Proxmox NFS Freenas.png
 
Hello dietmar, thanks for the reply.

I just tried every possible cache setting for both drives on this Windows XP guest, and I still get the same strange result.

Is there any test you would suggest to pin down the bottleneck?

Thank you very much
 
Last time I didn't reboot the VM. This time I shut the VM down, changed the setting and started it again.

I still have exactly the same problem with all cache settings (apart from some slight performance differences, as expected with the different cache modes). There is still the huge drop in read performance, capped at 5-10 MB/sec.

This problem also occurs on another Windows XP installation and under Windows Server 2012.
 
To be honest I didn't change anything on ESXi. I simply added the NFS storage using a single NIC on a dedicated Intel network card. Both of my hosts have an Intel dual-port network card, as does the FreeNAS storage box, so all NFS traffic runs over dedicated NICs and a dedicated subnet.

If there are any relevant options on ESXi, I am not aware of them; I simply used the default settings.

I will try some more benchmarks tonight...
 
Here are my ATTO results. I am testing against a Debian Wheezy NFS server on a 1 Gbit network.
I am using a Windows XP VM; the disks are raw files (cache=none) - one on local storage (RAID10) and the other one on NFS.
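For completeness, such a setup looks roughly like this in the VM configuration; the VMID, storage IDs and volume names here are placeholders, not taken from an actual config:

# excerpt from /etc/pve/qemu-server/100.conf (placeholder VMID and volume names)
# one raw disk with cache=none on local storage, one on the NFS storage
virtio0: local:100/vm-100-disk-1.raw,cache=none
virtio1: nfs-wheezy:100/vm-100-disk-2.raw,cache=none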

Attachments: ATTO-on-XP-local-wheezy-virtio-raid10.png, ATTO-on-XP-nfs-wheezy-virtio-1Gbit-net.png

I am not using FreeNAS here, but there are a lot of users also reporting slow/strange NFS performance with FreeNAS (search the forum).

If you need a ZFS-based NAS, try OmniOS (with napp-it).
 
Thanks for the reply again. Those are the results I would like to get with Proxmox -> NFS.

I will try to upgrade to the latest version of FreeNAS tonight and see if it gets any better.

I may need to swap NAS software, but I have been using FreeNAS without any issues for two years and I'm kind of used to it. I know a lot of people do have performance problems with it, but that tends to be related to not respecting the FreeNAS hardware requirements. Here is my NAS setup:

Quad Xeon 3 GHz
16 GB ECC memory
LSI SAS/SATA 4-port controller
LSI MegaRAID 8-port controller
Dual-port Intel server NIC
Single-port Intel server NIC (management)
12-bay SATA hot-swap case

This setup should not be seeing 5-10 MB/sec read performance unless there is a software conflict, I think.
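Just as a sanity check of the raw network path between a Proxmox host and the NAS (independent of NFS), something like iperf can be used; the IP below is the NFS address of my NAS, adjust for your setup:

# on the NAS: start an iperf server
iperf -s

# on the Proxmox host: measure raw TCP throughput towards the storage NIC for 30 seconds
iperf -c 192.168.10.45 -t 30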
 
Or try NAS4Free, which is based on FreeBSD 9.x, instead of FreeNAS, which is based on FreeBSD 8.x.

Thanks for the suggestion, I could give it a try. But I can still get good read performance when using dd directly on the mounted NFS share in Proxmox.

Here are some dd results from the mounted NFS share on one of my Proxmox hosts. It looks like Proxmox itself is able to read from and write to my NAS quickly.

I ran 4 write tests followed by 4 read tests:

Attachment: Screen Shot 2013-12-19 at 7.30.20 AM.jpg
 
All right, I found a fix.

Following another post from dietmar (http://forum.proxmox.com/threads/6964-Change-NFS-mount-options), I played a bit with the NFS settings on the Proxmox side in /etc/pve/storage.cfg.

Default Proxmox VE 3.1 mount result:

192.168.10.45:/mnt/ds1/pve on /mnt/pve/SAN02 type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.10.45,mountvers=3,mountport=926,mountproto=udp,local_lock=none,addr=192.168.10.45)
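The effective mount options above (and below, after the change) can be checked on the host with something like:

# list the active NFS mounts and their effective options
mount -t nfs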

Modified settings (/etc/pve/storage.cfg):

nfs: SAN02
        path /mnt/pve/SAN02
        server 192.168.10.45
        export /mnt/ds1/pve
        options rw,udp,rsize=4096,wsize=4096,hard,intr,noatime
        content images,iso,vztmpl,rootdir,backup
        maxfiles 1
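As far as I can tell, the new options only take effect once the share is remounted, so after editing storage.cfg I did something along these lines (storage name as in my config above):

# make sure no VM is using the storage, then unmount it
umount /mnt/pve/SAN02

# Proxmox remounts the storage automatically; querying the storage status also triggers it
pvesm status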

Modified mount result:

192.168.10.45:/mnt/ds1/pve on /mnt/pve/SAN02 type nfs (rw,noatime,vers=3,rsize=4096,wsize=4096,namlen=255,hard,proto=udp,timeo=11,retrans=3,sec=sys,mountaddr=192.168.10.45,mountvers=3,mountport=926,mountproto=udp,local_lock=none,addr=192.168.10.45)




Attachment: rw-udp-rsize=4096-wsize=4096-hard-intr-noatime.png


With these settings I still have lower performance than on ESXi (without changing anything on the FreeNAS side), but at least I got rid of the slow 64K+ reads. I will try to isolate which setting is responsible.

EDIT:

All right, I nailed it down: with my setup, rsize=65536 is what triggers the slow 64K+ read transfers. UDP was faster than TCP (both at the default rsize=16384), but I could not go higher than rsize=16384 / wsize=16384 with UDP, while TCP worked up to 32768. The following settings gave me the best result so far:

Modified settings #2 (/etc/pve/storage.cfg):

options rw,tcp,rsize=32768,wsize=32768,hard,intr,noatime
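To isolate the rsize effect, one way is to mount the export manually outside of Proxmox with different values and re-run the read test each time; a rough sketch (export path and IP are from my setup, the test mount point is arbitrary, and it assumes a large test file already exists on the export, e.g. created with dd as shown earlier):

# temporary mount point, not managed by /etc/pve/storage.cfg
mkdir -p /mnt/nfstest

for RSIZE in 4096 8192 16384 32768 65536; do
    mount -t nfs -o rw,tcp,rsize=$RSIZE,wsize=$RSIZE,hard,intr,noatime \
        192.168.10.45:/mnt/ds1/pve /mnt/nfstest
    echo "rsize/wsize = $RSIZE"
    # drop the page cache so the read really goes over the network
    sync; echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/nfstest/testfile of=/dev/null bs=1M
    umount /mnt/nfstest
done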

Windows XP VirtIO result:

Attachment: rw-tcp-rsize=32768-wsize=32768-hard-intr-noatime VIRTIO.png

Windows 2012 VirtIO result:

Attachment: Proxmox Windows 2012 - Freenas Raidz6 - UDP 16K - Virtio.png
 