Bad NFS Performance

cpzengel

Hi,

We moved from ESXi 5.5 to Proxmox VE 5.2-2.

Our storage is a Netgear ReadyDATA RD5200, based on Nexenta v3 as far as I know.

Since running the machines on PVE with VirtIO disks and NICs we have dramatically worse performance, especially latency.
Currently the disks are running without any cache; the worst behavior was with writeback cache set.
The connection is a Broadcom-based 10Gbit SFP+ link.

Our Exchange server is not as responsive as before.
Only deactivating sync brought performance close to what we had before.
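For reference, deactivating sync is a per-dataset ZFS property on the storage side; a sketch, assuming the export path below is also the dataset name:

zfs set sync=disabled Raid5a/SRVDATA-Profiles
zfs get sync Raid5a/SRVDATA-Profiles

(With the known risk of losing in-flight writes on a power failure.)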

The default mount is:

172.16.1.251:/Raid5a/SRVDATA-Profiles on /mnt/pve/PROFILES-251 type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.1.251,mountvers=3,mountport=47443,mountproto=udp,local_lock=none,addr=172.16.1.251)
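(For reference, custom mount options can be set per storage via the options line in /etc/pve/storage.cfg; the values here are just examples:)

nfs: PROFILES-251
        export /Raid5a/SRVDATA-Profiles
        path /mnt/pve/PROFILES-251
        server 172.16.1.251
        content images
        options vers=3,rsize=131072,wsize=131072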
So, any suggestions to improve the performance?
Please advise

Chriz
 
I think it's safe to mount NFS async, as ZFS is doing its job with the ZIL.
Have you chosen raw for the image format?
Don't use qcow2 on ZFS via NFS!
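Converting is a single qemu-img call per disk; a sketch with hypothetical file names (the VM must be powered off during conversion):

qemu-img convert -p -f vmdk -O raw srv-disk.vmdk srv-disk.raw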
 
Hi Knuut. There is no way to convert the vmdks right now. We have to wait about a year for the migration to the new Exchange 2016/2019 (whichever it will be), the vmdks are about 2-4TB, and they're in use 12h per day.

I tried mounting a FreeNAS machine with 4k instead of the default.

The FreeNAS default seems to be rsize=131072,wsize=131072;
the ReadyDATA default is 1M (rsize=1048576,wsize=1048576).
Everything is ZFS and NFS.

ReadyData default
172.16.1.251:/Raid10/SRVEX2010 on /mnt/pve/SRVEX2010-251 type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.1.251,mountvers=3,mountport=47443,mountproto=udp,local_lock=none,addr=172.16.1.251)


FreeNAS 11 default
172.16.1.74:/mnt/Raid5/DOCUSERV-HDD-KLON on /mnt/pve/DOCUSERV-KLON type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.1.74,mountvers=3,mountport=929,mountproto=udp,local_lock=none,addr=172.16.1.74)

FreeNAS with PVE Options
172.16.11.23:/mnt/Raid5/DOCUSERV-HDD on /mnt/pve/DOCUSERV-HDD type nfs (rw,relatime,vers=3,rsize=4096,wsize=4096,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.11.23,mountvers=3,mountport=843,mountproto=udp,local_lock=none,addr=172.16.11.23)

172.16.11.23:/mnt/Raid10/DOCUSERV-SSD on /mnt/pve/DOCUSERV-SSD type nfs (rw,relatime,vers=3,rsize=4096,wsize=4096,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.11.23,mountvers=3,mountport=843,mountproto=udp,local_lock=none,addr=172.16.11.23)
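The effective options of each mount can be double-checked on the PVE host, e.g.:

nfsstat -m
grep nfs /proc/mounts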

First experience suggests better latency.
What do you think is the best way for SQL 2008 R2 and Exchange 2010?

Thanks so far!

Chriz
 
Maybe switching from NFS to iSCSI with ZVOLs would also be an alternative. Anyway, converting the images away from vmdk is mandatory imho.
You need high I/O for Exchange, SQL etc. Take a look at your zpool setup. Prefer striped mirrors over any RAIDZ variant. Also get an NVMe for ZIL and L2ARC.
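A striped-mirror pool with NVMe log and cache devices would look roughly like this (pool and device names are hypothetical):

zpool create tank mirror sda sdb mirror sdc sdd
zpool add tank log nvme0n1p1
zpool add tank cache nvme0n1p2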

Have you also installed the VirtIO drivers? If not, do so.

BTW: if you can afford 12h of downtime, that would be enough for converting the images.
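A converted (or original vmdk) image can then be attached via qm importdisk; a sketch, assuming VMID 101 and a storage named local-zfs:

qm importdisk 101 srv-disk.vmdk local-zfs --format raw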
 
So iSCSI is not an option, because Proxmox then decides to route primarily over the 1Gbit LAN interface :( There is no way to disable this path in Proxmox or on the ReadyDATA, and I would have to move data that will be obsolete in one year :(
Our RAID levels are 10 and 50, with read cache. The problem is that it's worse than before. An NVMe for cache is also not an option on a Netgear ReadyDATA that is still, let's call it, "supported". Netgear, shame on you!
 
Do you have any tests?
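If not, fio can measure sync write latency on the NFS mount directly; a minimal sketch (path and sizes are assumptions):

fio --name=synclat --filename=/mnt/pve/PROFILES-251/fio.test --ioengine=libaio --rw=randwrite --bs=4k --size=1G --iodepth=1 --fsync=1 --runtime=60 --time_based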

I think you should tune the sysctl settings.

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.ipv4.tcp_mem = 662208 724416 882944
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.udp_mem = 363168 484224 726336
sunrpc.tcp_slot_table_entries = 128

Please apply these settings and test your performance again.
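To make them persistent across reboots, put the lines above into a file under /etc/sysctl.d/ (the file name here is just an example, e.g. /etc/sysctl.d/90-nfs-tuning.conf) and reload:

sysctl --system
# or apply a single value on the fly:
sysctl -w net.core.rmem_max=16777216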

Please do not use RAID5. Use RAID10; you will get better performance.
 
