30K$ Budget to expand our setup

Jul 14, 2011
Hello,

We have a 30K$ budget to expand our current setup, and I'm looking for advice about the storage. Right now we have 4 servers running as a Proxmox VE 1.9.x cluster with the following specs:

1U SuperMicro Barebone
Dual Intel Xeon X5672 Processors
48GB RAM
4x 320GB 15K RPM drives
Adaptec 6504, RAID 10
Dual Gigabit Ethernet

The cluster is currently connected to a Cisco WS-C2960S-24TD-L switch and hosts 75 VPSs (KVM) on the local filesystem.

Now I need to upgrade the cluster storage with a SAN or a NAS offering at least 20TB of space. I would like to learn from anyone who has done this before. What are the best practices, and what would you do/buy/build with this money? I/O performance is very important in our case.

I was thinking about QNAP, but I'm not sure whether they have units with hardware RAID...

Thanks! :p
 
I'd be calling companies that do that sort of stuff - Dell, for example, will probably assign you a rep if you mention a budget that high.

Depending on where you're located and which companies you talk to, some may even be able to offer a demo unit to trial for a week or 2.
 
Depending on what you want to do, have you considered a chassis with 36-44 drive bays and a couple of high-performance RAID cards, such as the LSI MegaRAID models with 1GB cache?

If you want 20TB of disk space, then with 600GB 15K RPM drives you would need 34 drives, so a 36-bay chassis should be enough. However, if you want 20TB of effective storage in RAID 10, you would need roughly twice as many bays (see the quick calculation below).
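Here's a quick sanity check of that drive-count math. It's only a rough sketch: it assumes marketing terabytes of 1000GB and ignores filesystem and controller overhead.

```python
# Rough drive-count estimate for ~20TB usable with 600GB 15K RPM drives.
import math

target_tb = 20    # required usable space in TB
drive_gb = 600    # per-drive capacity in GB

raw_drives = math.ceil(target_tb * 1000 / drive_gb)  # capacity only, no redundancy
raid10_drives = 2 * raw_drives                       # RAID 10 mirrors every drive

print(f"Raw capacity only   : {raw_drives} drives")     # -> 34
print(f"RAID 10, 20TB usable: {raid10_drives} drives")  # -> 68, i.e. ~2x the bays
```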

The question is how you would connect your other servers to this storage if you do not plan to run the servers directly on this chassis... you may want to consider Fibre Channel or similar.

Take a look at this: http://www.supermicro.com/products/chassis/4U/847/SC847A-R1400LP.cfm
 
Well, if you have a 30k$ budget, the question becomes: "make or buy?"
I was in a similar situation and decided to buy.
If you buy, you get certified and supported components for your infrastructure. It may cost a bit more, but if you run into trouble with the hardware, you'll be a stress-free, happy admin :)
cheers
tom
 
Good point, as long as what you buy at that price can actually deliver. I find that the old, expensive hardware sometimes hasn't caught up with the small, high-performing pieces of hardware available today.

I'd recommend benchmarking both if at all possible, then deciding.

A small example of new technology, comparing a high-end part with average consumer ones:

I recently bought the enterprise Z-Drive R4, which is a blazing-fast PCIe SSD, and I love it.
It cost about $4,200 for 600GB and claims up to 2GB/s max read/write.

While I love it, I also bought an LSI 9265-8i with 8 Vertex 3 Max IOPS SSDs in RAID 10 (after being disappointed with the performance of my ex-love, the Adaptec 6805, and its inability to saturate the 2-4GB/s it claims it can reach).
Cost: LSI + FastPath = approx. $1,000
8x 120GB Vertex 3 Max IOPS = approx. $250 x 8 = $2,000

Guess what: the LSI + FastPath (SSD acceleration hardware key) outperformed the Z-Drive R4 in most benchmarks except for a few, and it cost less money.
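For what it's worth, here is a minimal sketch of the kind of head-to-head benchmark described above, using fio (a common disk benchmarking tool). The device paths are placeholders, not the actual devices from this post, and a write test against a raw device would destroy its contents, so point it at a scratch file or test volume if in doubt.

```python
# Hedged sketch: 4K random-read comparison of two block devices with fio.
import subprocess

def fio_randread(path, runtime_s=60):
    """Run a 4K random-read test with a reasonably deep queue and direct I/O."""
    subprocess.run([
        "fio",
        "--name=randread-test",
        f"--filename={path}",
        "--rw=randread",
        "--bs=4k",
        "--iodepth=32",
        "--ioengine=libaio",
        "--direct=1",
        f"--runtime={runtime_s}",
        "--time_based",
        "--group_reporting",
    ], check=True)

# Placeholder device names: the PCIe SSD vs. the RAID 10 SSD array.
fio_randread("/dev/pcie_ssd")    # hypothetical path for the Z-Drive R4
fio_randread("/dev/raid_array")  # hypothetical path for the LSI RAID 10 volume
```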
 
tom wrote: "Well, if you have a 30k$ budget, the question becomes: make or buy? [...]"

For my situation, buying is a must, for sure! We are a startup and can't afford stress and long downtime if something happens. So far the list is Dell, HP, Supermicro, QNAP... what else?

Things to consider: we need Fibre Channel, and I/O performance is a must!
 
I think you should first elaborate a bit on the architecture you want.
Is it multiple servers accessing the shared storage, or a single powerful server accessing that storage?

What do you want to achieve (what are the technical app requirements)?
 
Read the very first post of this thread; it's for VPS hosting (KVM), and everything is explained there. :)
 
Hi Simon,
I have several FC RAIDs (Sun, HDS, ...), and their I/O speed differs a lot. First you need to decide what kind of storage you need. For system disks (VMs), databases, and the like, I prefer fast SAS drives in RAID 10 on the FC RAIDs, and SATA for bulk storage (RAID 6 or RAID 10).
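To put rough numbers behind that split, here is a back-of-the-envelope IOPS estimate. It's a sketch only; the per-drive figures are common rules of thumb, not measurements of any specific model mentioned here.

```python
# Rule-of-thumb IOPS estimate for spinning disks in different RAID levels.
IOPS_15K_SAS = 180    # typical 15K RPM SAS drive (assumed figure)
IOPS_7K2_SATA = 80    # typical 7.2K RPM SATA drive (assumed figure)

def raid_iops(n_drives, per_drive, write_penalty, read_fraction=0.7):
    """Very rough estimate for a 70/30 read/write mix."""
    reads = n_drives * per_drive * read_fraction
    writes = n_drives * per_drive * (1 - read_fraction) / write_penalty
    return int(reads + writes)

# Common write penalties: RAID 10 ~2, RAID 6 ~6.
print("8x 15K SAS, RAID 10:", raid_iops(8, IOPS_15K_SAS, 2))   # ~1224
print("8x SATA,    RAID 6 :", raid_iops(8, IOPS_7K2_SATA, 6))  # ~480
```

Which is roughly why the SAS RAID 10 goes to the VM/DB disks and the SATA goes to bulk storage.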
Often single-controller RAIDs are faster than dual-controller RAIDs (because the latter also need to sync their internal caches). To avoid a SPOF you can use DRBD, but that creates a much more complex setup.
I have had good experiences with Areca-based RAID controllers - pricing is OK and performance is good. Something like this: http://eurostor.eu/en/products/raid-fc-host/es-6600-fcsas.html

Udo
 
Like Udo, I recommend DRBD if you need to avoid a SPOF.
I also like Areca RAID cards; fifteen 1880iX's with 4GB cache have served me well.
The 1882's look pretty awesome with their dual-core CPUs; wish I had some of those!

We have been building all of our servers in pairs since we use DRBD.
A typical server is 12x 250GB 7200 RPM disks (enterprise class), hot-swap, a 6-8 core CPU, 16-24GB RAM, an Areca RAID card with 4GB cache and battery backup, in a 3U or 4U case.
Recently we started using super cheap used 10G InfiniBand cards for DRBD replication.
We usually end up at around $4K US per server; about 65% of the cost is directly related to disk I/O.
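For anyone curious what the DRBD pairing looks like in practice, here is a hypothetical bring-up sketch. It assumes DRBD 8.3 (the version common in this era), a resource named r0 already defined in /etc/drbd.d/r0.res on both nodes, and backing devices of your choosing; none of these names come from the posts above.

```python
# Hypothetical DRBD bring-up sequence, wrapped in Python for readability.
import subprocess

RESOURCE = "r0"  # assumed resource name from /etc/drbd.d/r0.res

def run(cmd):
    """Echo and run a command, raising if it fails."""
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# On BOTH nodes: create the metadata and bring the resource up.
run(["drbdadm", "create-md", RESOURCE])
run(["drbdadm", "up", RESOURCE])

# On ONE node only: pick the initial sync source (DRBD 8.3-style invocation).
run(["drbdadm", "--", "--overwrite-data-of-peer", "primary", RESOURCE])
```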
 
OK guys, here is the final setup:

1x HP StorageWorks Smart Array P2000 G3 FC/iSCSI SAN w/ 12x 300GB SAS 15K SFF
1x HP ProLiant DL385 G7 server, 2x Opteron 6238, w/ 24GB RAM & HDDs (+ our current SuperMicro nodes (4x))
2x Cisco Switch WS-C2960S-24TS-L (4x SFP)
+ APC UPS/PDU Kit

All of this for less than 30K$ net. I will post updates on the setup and follow up with benchmarks and testing.
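Since the P2000 G3 is an FC/iSCSI model, here is a hypothetical sketch of hooking the Proxmox nodes up to it over iSCSI with shared LVM on top, which is a common pattern for KVM images. The portal IP, IQN, device node, and volume group name are placeholders, not values from this thread.

```python
# Hypothetical: attach a node to the SAN over iSCSI and prepare a shared LVM VG.
import subprocess

PORTAL = "10.0.0.100"                                   # placeholder SAN portal IP
TARGET = "iqn.1986-03.com.hp:storage.p2000g3.example"   # placeholder target IQN
SAN_DEVICE = "/dev/sdc"                                 # placeholder LUN device node
VG_NAME = "san_vg"                                      # placeholder volume group name

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# Discover and log in to the target (repeat on every Proxmox node).
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])

# Initialize the shared volume group on ONE node only; the others just scan it.
run(["pvcreate", SAN_DEVICE])
run(["vgcreate", VG_NAME, SAN_DEVICE])
# The VG can then be added as shared LVM storage in the Proxmox web interface.
```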
 
Most likely. But I doubt you'll be able to upgrade the OS from Debian Lenny to Squeeze without doing a fresh install.
 
