Hi There,
I'm somewhat new to the whole virtualization scene, but I do have a little Linux experience under my belt. I've built a small "NAS" that holds my VM images, and it's accessed by a Proxmox PVE host running on its own separate hardware. I've configured NFSv4 between the NAS and the PVE host, and everything seems to be running in harmony, even under some simulated load. Would it be considered a crazy idea to use NFS instead of iSCSI for access between my PVE and NAS?
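For reference, here's roughly the shape of my NFS setup (the export path, subnet, and IPs below are placeholders, not my exact values):
# NAS side (CentOS 7), /etc/exports:
/srv/vmstore  10.10.10.0/24(rw,sync,no_subtree_check,no_root_squash)
exportfs -ra
# PVE side, a quick manual test mount forcing NFSv4:
mount -t nfs4 10.10.10.1:/srv/vmstore /mnt/nas
In Proxmox itself I then point an NFS storage entry at that export so the VM images land on the NAS.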
NAS config:
OS: CentOS 7 with a 4.4 kernel, tuned for throughput.
NFSv4
8 x 250GB Intel SSD (3510) drives in RAID 6 on a dedicated RAID controller.
Dual 10Gb/s NICs in both the PVE host and the NAS (I've bonded the two 10Gb/s interfaces on both machines and connected them port to port with a straight CAT6 Ethernet cable, no switch; a bond config sketch follows this list). The 10Gb/s ports are dedicated to traffic between the PVE host and the NAS, and I have separate controllers for management on both machines.
16GB RAM
Quad-core Intel Xeon processor.
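The bonding itself is nothing fancy; on the PVE side it's roughly the following (interface names, bond mode, and addresses are placeholders, not my exact config):
# /etc/network/interfaces on the PVE host
auto ens1f0
iface ens1f0 inet manual
auto ens1f1
iface ens1f1 inet manual
auto bond0
iface bond0 inet static
    address 10.10.10.2
    netmask 255.255.255.0
    bond-slaves ens1f0 ens1f1
    bond-miimon 100
    bond-mode balance-rr    # round-robin across the two direct links
On the CentOS side the equivalent lives in /etc/sysconfig/network-scripts/ifcfg-bond0 with BONDING_OPTS.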
I'm getting around 450MB/s write speed and 5.9GB/s read speed. Is this considered good, or should I try to optimize my RAID further, perhaps using RAID 10 instead of RAID 6 for better performance? Also, would it help to increase the MTU from the standard 1500 to 9000?
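If I do try jumbo frames, my understanding is it's roughly this on both ends, plus making it persistent in the interface configs (bond0 and the IP below are placeholders for my actual names):
ip link set dev bond0 mtu 9000      # on both the PVE host and the NAS
ping -M do -s 8972 10.10.10.1       # 8972 = 9000 - 20 (IP) - 8 (ICMP); verifies jumbo frames pass end to end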
root@pve2:~# dd if=/dev/zero of=/mnt/nas/testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 0.598923 s, 448 MB/s
root@pve2:/mnt/nas# dd if=/mnt/nas/testfile of=/dev/null bs=16k
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 0.0455251 s, 5.9 GB/s
root@pve2:/mnt/nas#
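I realize the read number is probably mostly the client page cache talking, since the 256MB test file easily fits in RAM on the PVE host. If it helps, I can rerun with something closer to this (file size and path are placeholders), which should take client-side caching out of the picture:
sync; echo 3 > /proc/sys/vm/drop_caches                                  # drop client-side caches on the PVE host
dd if=/dev/zero of=/mnt/nas/testfile2 bs=1M count=8192 conv=fdatasync    # write test, waits for data to reach the NAS
dd if=/mnt/nas/testfile2 of=/dev/null bs=1M iflag=direct                 # read test, bypasses the client page cache
I've also seen fio recommended for random/mixed IO patterns that are closer to real VM workloads.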
Any other advice would be greatly appreciated.
Thanks in advance,
-steph