Some advice needed...

Steph007

Member
Mar 2, 2016
Hi There,

I'm somewhat new to the whole virtualization scene, but I do have a little Linux experience under my belt. Anyway, I've built a small "NAS" containing my VM images, which is accessed by a Proxmox PVE host running on its own separate hardware. I've configured NFSv4 between the NAS and the PVE and everything seems to be running in harmony, even under some simulated load. Would it be considered a crazy idea to use NFS instead of iSCSI for access between my PVE and NAS?
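For reference, the NFS side of this only takes a few lines in /etc/pve/storage.cfg on the PVE host (a sketch; the storage ID "nas1" is made up, the server address and export path match my setup):

```
nfs: nas1
        server 10.50.0.2
        export /home
        path /mnt/pve/nas1
        content images
        options vers=4
```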

NAS config:
OS: CentOS 7 with kernel 4.4, tuned for throughput.
NFSv4
8 x 250 GB Intel SSD (3510) drives running RAID 6 on a dedicated RAID controller.
Dual 10Gb/s NICs in both my PVE and NAS. I've bonded the two 10Gb/s interfaces on both the PVE and NAS and connected them port to port with straight CAT6 Ethernet cables. The 10Gb/s ports are dedicated to traffic between the PVE and NAS; I have separate controllers for management on both machines.
16 GB RAM
Quad-core Intel Xeon processor.

I'm getting around 450 MB/s write speed and 5.9 GB/s read speed. Is this considered good, or should I try to optimize my RAID further, perhaps using RAID 10 instead of RAID 6 for increased performance? Also, would it help if I increased the MTU from the standard 1500 to 9000?
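On the MTU question, jumbo frames are usually worth trying on a dedicated point-to-point storage link like this one. A sketch of the change and a way to verify it (the interface name bond0 and the peer address are assumptions):

```shell
# Raise the MTU on the bond; with most bonding setups the slave NICs
# inherit it, but check them too (bond0 is an assumed name).
ip link set dev bond0 mtu 9000

# Verify that 9000-byte frames actually pass end to end: 8972 bytes of
# ICMP payload + 20 (IP header) + 8 (ICMP header) = 9000, and -M do
# forbids fragmentation, so this fails if either side is still at 1500.
ping -M do -s 8972 -c 3 10.50.0.2
```

Both ends have to be raised together; a mismatch degrades into fragmentation or stalled transfers rather than an obvious error.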

root@pve2:~# dd if=/dev/zero of=/mnt/nas/testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 0.598923 s, 448 MB/s

root@pve2:/mnt/nas# dd if=/mnt/nas/testfile of=/dev/null bs=16k
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 0.0455251 s, 5.9 GB/s
root@pve2:/mnt/nas#

Any other advice would be greatly appreciated.

Thanks in advance,
-steph
 
Reading from /dev/zero and writing to /dev/null are both totally invalid I/O benchmarks; the read test in particular is served straight from the page cache in RAM. Getting ~5.9 GB/s read speed should already have indicated that those numbers cannot be true (even on bonded 10Gb NICs that would be physically impossible!).

Please use a real I/O benchmark suite or, for a quick overview, the pveperf tool that is included in a default PVE install.
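In the meantime, a quick dd run that actually forces data to stable storage is far more honest than the cached numbers above (a sketch; TARGET defaults to /tmp here only so it runs anywhere, so point it at the real NFS mount instead):

```shell
# Target directory for the test file (assumption: /tmp stands in for
# the actual NFS mount point, e.g. /mnt/pve/nas1).
TARGET=${TARGET:-/tmp}

# Write test: conv=fdatasync makes dd flush to disk before reporting
# a rate, so the page cache cannot inflate the result.
dd if=/dev/zero of="$TARGET/ddtest" bs=16k count=16384 conv=fdatasync

# Read test: drop the page cache first (needs root; skipped quietly
# otherwise), or the file is served from RAM as in the 5.9 GB/s run.
sync
{ echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null || true
dd if="$TARGET/ddtest" of=/dev/null bs=16k
rm -f "$TARGET/ddtest"
```

A proper suite such as fio is still the better long-term answer, not least because /dev/zero data is trivially compressible.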
 
Here are my results using pveperf... It also seems like my RAID card's write cache isn't functioning, since it requires a BBU (battery backup unit) of some sort, which I'm guessing will improve my write speed quite a bit? I'm also considering changing my array from RAID 6 to RAID 10, which should give me a significant boost.

root@pve2:/mnt/pve/nas1# pveperf /mnt/pve/nas1/
CPU BOGOMIPS: 76413.44
REGEX/SECOND: 1075326
HD SIZE: 1276.51 GB (10.50.0.2:/home)
FSYNCS/SECOND: 2099.33
DNS EXT: 30.42 ms
DNS INT: 4.60 ms (bbi.co.bw)
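For a rough sense of why RAID 10 should help with sync-heavy writes, the classic small-write penalty arithmetic, sketched in shell (the per-disk IOPS figure is an assumption, not a measurement):

```shell
DISK_IOPS=20000   # assumed 4k random-write IOPS per SSD
DISKS=8

# RAID 6: each small write costs ~6 disk I/Os (read old data and both
# parity blocks, write new data and both parity blocks), penalty 6.
echo "RAID 6 : $(( DISKS * DISK_IOPS / 6 )) IOPS"

# RAID 10: each write just lands on a mirror pair, penalty 2.
echo "RAID 10: $(( DISKS * DISK_IOPS / 2 )) IOPS"
```

All else being equal, that is roughly a 3x higher ceiling on small sync writes for RAID 10, at the cost of two drives' worth of usable capacity on eight disks (6 usable under RAID 6 vs 4 under RAID 10).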