Proxmox VE 3.1 slow NFS reads

deludi

New Member
Oct 14, 2013
Hello all,

I have a 3-node Proxmox 3.1 cluster with a FreeNAS 9.1.1 storage backend.
Hardware:
Proxmox nodes: Xeon E3-1240 v2, 32GB RAM, 60GB SSD (Supermicro hardware)
FreeNAS node: Xeon E3-1220L, 32GB RAM, FreeNAS installed on a USB stick, 16 x 3TB HDD (dataset 1 = 6 x 3TB in raidz2 + 1 Intel S3500 80GB for ZIL, dataset 2 = 10 x 3TB in raidz2 + 1 Intel S3500 80GB for ZIL).
Each Proxmox node runs approximately 5 VMs, which reside on an NFS share on dataset 1 of the FreeNAS node.
I have the following problem:

When making a backup of the VMs, the backup speed is at most 10 Mb/s, which is very slow for the above hardware.
The VMs reside on an NFS share on dataset 1 and the backup NFS share is on dataset 2.
To do some testing I made a clone of a small VM on the local Proxmox SSD.
When I make a backup of that VM, the backup is very fast at ~250 Mb/s.

It seems like the NFS read speed is very poor.
I have tried mounting the NFS shares over UDP instead of TCP, but this did not change the slow reads.
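For reference, one way to isolate the NFS read path from vzdump is a raw sequential read straight off the share. A rough sketch (the mount point, export path and VM image name here are placeholders; the server address is the FreeNAS box from this thread):

```shell
# Remount the share with explicit options to test their effect
# (mount point and export path are examples -- adjust to your setup):
umount /mnt/pve/nfs-ds1
mount -t nfs -o vers=3,tcp,rsize=32768,wsize=32768 \
    192.168.20.232:/mnt/dataset1 /mnt/pve/nfs-ds1

# Sequential read of a VM image over NFS, bypassing vzdump entirely;
# dd reports the achieved throughput when it finishes:
dd if=/mnt/pve/nfs-ds1/images/100/vm-100-disk-1.raw \
   of=/dev/null bs=1M count=1024
```

If dd already shows ~10 Mb/s here, the problem is in the NFS read path itself and not in the backup process.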

Thank you in advance for your help.
 

Attachments


informant

Member
Jan 31, 2012
Hi,

We have the same issue on our storage systems. We tested with a Synology RS3413xs+ and with HP storage; NFS share backups are very slow.

regards
 

deludi

New Member
Oct 14, 2013
Hi Mir,

Thank you for your help.
I have attached a screenshot of the two mountpoints to the FreeNAS box (192.168.20.232).
 

Attachments

deludi

New Member
Oct 14, 2013
Hi Tom and Informant,

Thank you for your input.
I have posted a screenshot of the mountpoints in the subthread.
For now I am not sure whether it is a Proxmox or a FreeNAS configuration issue...
The Proxmox install was a clean install without too much fiddling around.
I had some issues with the FreeNAS install:

started with v8.3 x64 on a stick
added NFS shares with some fiddling; at first they did not work, which was simply solved by adding the Proxmox host IP range to the allowed range
upgraded to 9.1.1
added the Intel S3500 ZIL SSDs

My test results show that the NFS write speed is very good at a constant 250 Mb (see screenshots), so I tend to look at the Proxmox side for the solution to the slow NFS read speed.

Regards
 

mir

Well-Known Member
Apr 14, 2012
Copenhagen, Denmark
I do not see any particular problems. You could try using the 'nolock' mount option.
"Specifying the nolock option may also be advised to improve the performance of a proprietary application which runs on a single client and uses filelocks extensively."
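On Proxmox the extra mount options can be set per NFS storage in /etc/pve/storage.cfg via the `options` line; a sketch (the storage ID, server and export path are examples based on this thread):

```shell
# /etc/pve/storage.cfg -- the 'options' line is passed through to the
# NFS client as mount options. Storage ID and paths are placeholders.
nfs: freenas-ds1
    server 192.168.20.232
    export /mnt/dataset1
    path /mnt/pve/freenas-ds1
    content images
    options vers=3,tcp,nolock
```

The share is remounted with the new options the next time the storage is activated.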
 

spirit

Well-Known Member
Apr 2, 2010
www.odiso.com
Hi,

I don't know if you are already in production, but can you test with mirroring+striping instead of raidz, to compare?

Also, have you tried benchmarking with iSCSI to compare?
 

deludi

New Member
Oct 14, 2013
Hello Symmcom,

This is indeed a new setup.
I have had a discussion about it with my partner this morning.
We have 2 identical storage boxes but we use only one right now.
We do have our doubts about freenas + NFS.
We have decided to install napp-it + omnios on the second box and test the NFS performance.
I will post my findings later this week.
@spirit: the 6-disk raidz2 dataset with an 80GB SSD ZIL is indeed slower than a mirrored solution, but it should provide enough throughput to fill a Gbit link.
There is also 32GB of ECC RAM as read cache in the machine. iSCSI will indeed perform better because the protocol works totally differently, but NFS gives us more flexibility to move VM files, etc.
We will first test OmniOS + napp-it + NFS + Proxmox. If that solution works out of the box we will change our storage solution to OmniOS + napp-it.
@mir: thank you for the input
 

deludi

New Member
Oct 14, 2013
Hi Jason,

Thank you for the input.
I already checked that option; it is not in use in our current setup.
I get good Proxmox backup results when I make a backup of a VM on local SSD storage to the NFS backup share (~250 Mb/s).
The performance is only bad when I back up a VM that is on an NFS share to another NFS share.
 

symmcom

Well-Known Member
Oct 28, 2012
Calgary, Canada
www.symmcom.com
> Hello Symmcom,
>
> This is indeed a new setup.
> I have had a discussion about it with my partner this morning.
> We have 2 identical storage boxes but we use only one right now.
> We do have our doubts about freenas + NFS.
> We have decided to install napp-it + omnios on the second box and test the NFS performance.
I know for a fact that OmniOS + napp-it works great with Proxmox out of the box, as I used it for several months before moving to Ceph storage. In my case, however, all of my VMs were stored on OmniOS iSCSI storage and backups were done to an OmniOS NFS share on the same storage machine. If you have not tried it yet, try changing the cache mode of your Proxmox VM to writeback, then run the backup again and see what happens.
 

deludi

New Member
Oct 14, 2013
Hello all,

I promised to post the results of OmniOS + napp-it + NFS + Proxmox.
Yesterday I removed the FreeNAS 9.1.1 USB boot stick from the storage unit.
I replaced it with a blank 32GB USB stick and installed OmniOS via an IPMI-mounted OmniOS ISO.
Installation was straightforward; I only had to disconnect the 2 LSI M1015s and the 2 ZIL SSDs because the OmniOS installer did not see the USB stick.
After the OmniOS installation I reconnected the M1015s and the ZIL SSDs.
In the BIOS I set the USB stick as the primary boot device.
In OmniOS I configured the network and after that ran the wget installation of napp-it.
After logging into napp-it I imported the ZFS pools, enabled NFS on 1 dataset and set the ACL to everyone rwx.
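Under the hood those napp-it steps map to roughly these OmniOS commands (pool and dataset names are placeholders; napp-it does this through its GUI):

```shell
# Import the existing ZFS pools after the reinstall:
zpool import tank

# Share one dataset over NFS:
zfs set sharenfs=on tank/vmstore

# Open the dataset up (everyone rwx), matching the ACL
# set through the napp-it interface:
chmod -R 777 /tank/vmstore
```

This is a sketch of the equivalent command-line steps, not the exact commands napp-it runs.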
After that I tested again with a backup from local storage to the napp-it NFS storage => ~250 Mb/s, which is about the same as FreeNAS.
Then I made a clone of a VM to the napp-it ZFS store.
I started a backup of that clone with the same NFS store as target => 70 Mb/s, which is very acceptable considering that the VM and the backup are on the same store.
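That backup test is just a vzdump run against the clone; roughly (the VMID and storage ID are placeholders):

```shell
# Back up VM 101 to the NFS-backed storage 'nappit-backup';
# vzdump prints the effective transfer rate when it finishes.
vzdump 101 --storage nappit-backup --mode snapshot --compress lzo
```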
The interface of napp-it is not as polished as FreeNAS but does the job well.
I still have to upgrade the ZFS pool version.
What I like is that it is possible to create a mirrored USB boot stick with napp-it.
Overall I am happy with the move from FreeNAS to napp-it; next week I will convert our second storage box to OmniOS + napp-it.
Thank you all for your help.

Best regards,

Dirk Adamsky

P.S. Funny detail: I was the person who informed Gea (the napp-it developer) about OmniOS a year ago... :D
Now I am finally over on napp-it + OmniOS.

http://forums.servethehome.com/solaris-nexenta-openindiana-napp/605-omnios.html
 

mir

Well-Known Member
Apr 14, 2012
Copenhagen, Denmark
Hints for performance tuning ZFS on OmniOS:
1) Enable lz4 compression to increase write speed. Compression reduces the amount of data actually written to disk, and given today's CPU speed, compression will not be a burden for the CPU. A 15-30% improvement in write performance is possible.
2) If your storage is protected by a BBU and/or UPS, you can disable synchronous writes. Remember to leave checksums on. A 300-600% improvement in write performance is possible.
3) At the pool level, disable atime. A 5-10% improvement in read performance is possible.
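A sketch of those three settings as ZFS commands (pool and dataset names are placeholders):

```shell
# 1) lz4 compression on the dataset holding VM images:
zfs set compression=lz4 tank/vmstore

# 2) disable synchronous writes -- only with UPS/BBU protection,
#    as described above:
zfs set sync=disabled tank/vmstore

# 3) stop recording access times, set at the pool level:
zfs set atime=off tank
```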

If you want to increase IOPS, which is more or less a requirement for virtualization and databases, you should use striped mirrors, aka RAID 10, for your pools. The more vdevs per pool, the higher the number of IOPS (assemble RAID 10 pools with 6-8 vdevs of 3-disk mirrors for maximum security; writes will be striped across your vdevs, and reads will be read in chunks from all vdevs). raidz(x) is considerably worse for IOPS. My recommendation is to use RAID 10 for live VMs and CTs and raidz(x) for backups, templates, and images.
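A striped-mirror (RAID 10 style) pool as described can be created like this (pool and device names are placeholders; 2-way mirrors shown for brevity, 3-way for extra safety):

```shell
# Three mirror vdevs; ZFS stripes writes and reads across all of them:
zpool create tank \
    mirror c1t0d0 c1t1d0 \
    mirror c1t2d0 c1t3d0 \
    mirror c1t4d0 c1t5d0

# If you use a separate log device (ZIL), add it as a mirror:
zpool add tank log mirror c2t0d0 c2t1d0
```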

For NFS, remember these settings for the ACLs if you intend to use NFS as backing for CTs:
aclmode passthrough
aclinherit passthrough-x
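Set as dataset properties, that is (dataset name is a placeholder):

```shell
# ACL behaviour needed when containers live on the NFS share:
zfs set aclmode=passthrough tank/ct
zfs set aclinherit=passthrough-x tank/ct
```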
 

deludi

New Member
Oct 14, 2013
Hi Mir,

Thank you for the excellent reply.
Ad 1. I will enable lz4 compression; the Proxmox CPU is an Intel Xeon E3-1240V2 with enough headroom.
Ad 2. I added the Intel S3500 ZILs a week ago to improve write performance, after doing extensive reading about sync vs async. Gea from napp-it also advises sync. The storage boxes are indeed in a datacenter with a very small chance of power problems. The VMs are production VMs and I am afraid of corrupting them. Do you run production VMs on async NFS shares?
Ad 3. I have read that disabling atime helps with a lot of small files; does it also help for VMs?

You are absolutely right about the RAID type. I chose the wrong RAID type for the ZFS VM store.
I will change the 6-disk store to a RAID 10 store for better IOPS.

For now I do not use CTs, but I will remember your aclmode advice.

Best regards,

Dirk
 

mir

Well-Known Member
Apr 14, 2012
Copenhagen, Denmark
> ad 2. i have added the Intel S3500 Zil's a week ago to improve write performance after doing extensive reading about sync or async. Gea from napp-it also advises sync. The storage boxes are indeed in a datacenter with very small chance of power problems. The vms are production vms and i am afraid of corrupting them. Do you run production vms on async nfs shares?
Of course, if you have a separate ZIL then things change, since you will have decoupled the write cache from the file system. Using a separate ZIL introduces another requirement for safety: you must put the ZIL in a mirror, because losing the ZIL will in the worst case mean losing the entire pool.

Since I don't use a separate ZIL, I run async NFS shares and iSCSI volumes. A UPS takes care of data integrity.

I must admit that I have mixed feelings regarding a separate ZIL, since the gained performance does not outweigh the added complexity, and my amount of RAM is more than adequate to support my needs for speed.
> ad 3. i have read that disabling atime helps with a lot of small files, does ik also help for vms?
Consider this: every time a read or write is done to the VM's image, the file system must update the atime of the image. How often do you think this happens? And what need do you have for the atime information, when the file system inside the image is opaque to the storage side anyway?
> You are absolutely right about the raid type. I have chosen the wrong raid type for the zfs store.
> I will change the 6 disk store to a raid 10 store for better iops.
Yep, expect to see at least a 100% increase in IOPS ;-)
 
