SAN/NAS Benchmark - Good speeds?

hotwired007

Hi Guys,

I've spent today running benchmarks to find out whether my NAS (ReadyNAS 2100, 4x 1TB) is running as fast as I need, and whether iSCSI or NFS is the faster method. I have already decided that I'm going to have to use iSCSI so that my backups work correctly, but I was interested in getting some results to compare. I've used the newest version of CrystalDiskMark (3.0) on a number of drives:

Windows 2003 - Proxmox 1.9 - (NFS) - C:\ (IDE)
         50MB                         1000MB
         Read (MB/s)   Write (MB/s)   Read (MB/s)   Write (MB/s)
Seq      71.55         40.43          64.52         35.01
512K     71.86         40.49          34.59         25.65
4K       6.325         6.065          1.249         1.265
4K QD32  6.738         6.632          1.496         1.475

Windows 2008 - Proxmox 1.9 - (iSCSI) C:\ (IDE)
         50MB                         1000MB
         Read (MB/s)   Write (MB/s)   Read (MB/s)   Write (MB/s)
Seq      66.33         50.77          58.29         46.56
512K     66.61         51.55          31.55         49.47
4K       6.806         6.804          1.767         6.655
4K QD32  6.991         6.931          1.808         6.974

1.5TB SEAGATE SATA 3
         50MB                         1000MB
         Read (MB/s)   Write (MB/s)   Read (MB/s)   Write (MB/s)
Seq      111.6         89.71          100.2         96.12
512K     43.32         78.42          27.83         45.25
4K       0.668         0.928          0.313         0.62
4K QD32  1.427         0.842          0.565         0.558

Dell Mirrored RAID Dual 1TB Drives
         50MB                         1000MB
         Read (MB/s)   Write (MB/s)   Read (MB/s)   Write (MB/s)
Seq      82.66         76.71          101.7         101.4
512K     42.22         81.75          27.81         45.1
4K       0.783         0.577          0.344         0.69
4K QD32  3.076         0.535          1.619         0.637

USB 2.0 - 320GB WESTERN DIGITAL
         50MB                         1000MB
         Read (MB/s)   Write (MB/s)   Read (MB/s)   Write (MB/s)
Seq      27.66         26.23          29.11         27.82
512K     18.51         27.74          17.62         23.96
4K       0.378         1.387          0.331         0.752
4K QD32  0.539         1.409          0.449         0.791

Although the NAS box has slower sequential reads/writes than an individual local drive, it is faster than everything else on the other benchmark tests.

Currently it's running on only 1x 1Gb network port; would having the second one enabled help to improve this?

Can anyone else validate this? Has anyone got anything else to compare against?
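
If anyone does want to cross-check, here is a very rough Python sketch of the same kind of sequential read/write test that can be pointed at any mounted drive. The target path and sizes below are just placeholders, and the read figure will be inflated by the OS cache if the test file fits in RAM:

```python
import os
import time

# Placeholder path on the volume you want to test (e.g. the mapped NAS drive).
TARGET = r"X:\bench.tmp"
SIZE_MB = 1000                 # roughly matching the 1000MB CrystalDiskMark run
CHUNK = 1024 * 1024            # 1MB blocks
buf = os.urandom(CHUNK)

# Sequential write: stream SIZE_MB worth of 1MB blocks, then force them to disk.
start = time.time()
with open(TARGET, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
write_mb_s = SIZE_MB / (time.time() - start)

# Sequential read: read the same file back in 1MB blocks.
start = time.time()
with open(TARGET, "rb") as f:
    while f.read(CHUNK):
        pass
read_mb_s = SIZE_MB / (time.time() - start)

os.remove(TARGET)
print("seq write: %.2f MB/s, seq read: %.2f MB/s" % (write_mb_s, read_mb_s))
```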
 
Your limit is caused by the network layer.

In my experience... 1Gb links are not fast enough.

Local disk: 300MB/s
1Gb link: 90MB/s if you are lucky!

So.... Bonding helps. 2 cards will give you 150MB/s or close to that.

If you want speed.... Fibre Channel is really the only answer :)


Sent from my iPhone using Tapatalk
 
Sorry... That is exactly what I meant.

I should read what I write more carefully before submitting it!


Sent from my iPhone using Tapatalk
 
Local disk: 300MB/s
1Gb link: 90MB/s if you are lucky!

So.... Bonding helps. 2 cards will give you 150MB/s or close to that.

I've bonded the two ports on the SAN and on the Proxmox box.

From online calculations the MAXIMUM should be 250MB/s.

I'm getting 64.25MB/s read and 30.97MB/s write.

What other tweaks can I do to increase performance?

I'm looking at building a database server on this system, but with that kind of IO it'd end up getting bogged down...
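
For what it's worth, the arithmetic behind that 250MB/s figure is just the raw line rate of two bonded 1Gb ports before any protocol overhead. A quick sketch (the efficiency figures are only ballpark assumptions, not measured values):

```python
# Theoretical ceiling for 2x bonded 1Gb links, ignoring all protocol overhead.
links = 2
link_gbits = 1.0                              # 1 Gbit/s per port
raw_mb_per_s = links * link_gbits * 1000 / 8  # bits -> MB (decimal)
print("raw ceiling: %.0f MB/s" % raw_mb_per_s)  # ~250 MB/s

# More realistic: Ethernet/IP/TCP plus NFS or iSCSI take a cut, and bonding
# only helps if traffic is actually spread across both links.
for efficiency in (0.9, 0.7):
    print("at %.0f%% efficiency: %.0f MB/s" % (efficiency * 100, raw_mb_per_s * efficiency))
```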
 
I've bonded the two ports on the SAN and on the Proxmox box.

From online calculations the MAXIMUM should be 250MB/s.

I'm getting 64.25MB/s read and 30.97MB/s write.

What other tweaks can I do to increase performance?

I'm looking at building a database server on this system, but with that kind of IO it'd end up getting bogged down...

Are you using NFS or iSCSI?

And how have you made that calculation of speed?

The only system I know that gets near wire speed is good ol' FTP. And even that would not be 250MB/s.

Remember... complex protocols (SMB, SSH, NFS, iSCSI, etc.) have lots of handshaking. The end result is you lose bandwidth just by needing to use them.

I did say this a few posts back, but don't expect wonders.

With NFS, you can turn off immediate writing to disk (the async export option). This means the client can be freed up while the NFS server does its work in the background. My experience says this is HIGHLY risky and liable to lose data if you crash the system mid-write.

There are also numerous tweaks to the TCP stack that can help, but I don't think you will ever get the speed you want without Fibre Channel.

A year or two back I purchased a massive 16TB RAID system. Set the whole thing up with NFS etc...

Two months on I ditched it because the performance of the IO-sensitive apps was so rubbish (the NAS had 4 bonded 1Gb links, the client had 2x 1Gb links bonded).

I have now gone back to local storage. My next move will be local storage with DRBD sync. This will be a good option without doing the full enterprise SAN :)

Rob
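
To put a rough number on that sync-versus-async point: the cost is in forcing every write to hit the disk before the client gets control back. A purely local, illustrative Python comparison (the file name and sizes are made up for the demo):

```python
import os
import time

PATH = "fsync_test.tmp"          # throwaway local file, purely for illustration
BLOCK = os.urandom(64 * 1024)    # 64KB per write
COUNT = 200

def write_blocks(sync_each_write):
    """Write COUNT blocks; optionally force each one to disk before continuing."""
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(BLOCK)
            if sync_each_write:
                f.flush()
                os.fsync(f.fileno())   # roughly what a synchronous export forces
    return time.time() - start

synced = write_blocks(True)
deferred = write_blocks(False)
os.remove(PATH)
print("synced every write: %.2fs, deferred writes: %.2fs" % (synced, deferred))
# The deferred case is much faster, but anything still sitting in the page
# cache is lost if the box dies before it reaches the disk - exactly the
# risk described above for async NFS.
```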
 
I was hoping to get at least 90MB/s with the bonding; there doesn't seem to be any performance increase at all.

The connections are NFS currently, although I'm going to change to iSCSI.
 
Rob, your comment: "My next move will be local storage with DRBD sync. This will be a good option without doing the full enterprise SAN"... would that not put you back to the network being the limiting factor in performance?
 
Found out that my switch (Netgear GS724T) doesn't support the full 802.3ad, so I have left it at ALB. Everything seems to run fine as long as the servers aren't doing loads of IO. I have decided that high-IO servers will run from local disks rather than network storage.
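
For anyone wanting to confirm which mode the bond actually came up in on the Proxmox/Linux side, the kernel exposes it under /proc/net/bonding. A small sketch, assuming the interface is named bond0:

```python
# Print the bonding mode and per-slave status for a Linux bond interface.
# Assumes the bonding driver is loaded and the interface is called bond0.
BOND = "/proc/net/bonding/bond0"

try:
    with open(BOND) as f:
        for line in f:
            line = line.strip()
            if line.startswith(("Bonding Mode", "Slave Interface", "MII Status")):
                print(line)
except IOError:
    print("No bonding info at %s - is the bond configured?" % BOND)
```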
 
