Virtualization planning, your input is appreciated!

  • Thread starter: boca (Guest)
Hi all,

I'm in the process of planning our future virtualization environment and after some preliminary testing with Proxmox I feel confident that this solution will provide us with what we need.

I have come to the point where I'd like to assemble a list of items we'll need to purchase to meet the following requirements:

- 100 users
- 5 VMs: (1) Active Directory, (2) SharePoint, (3) Terminal Server and MYOB, (4) proxy, and (5) file-sharing host.
- Workload: At the most, 20 concurrent connections to SharePoint, 3-4 concurrent connections to MYOB, random connections to File Sharing but low load.
In addition to the items listed above, we'll need room for around 10 basic Windows 7 VMs to be used for testing and client support. They will be used as a VPN platform into different network environments.

My goal here is to create an environment that will be flexible for future growth and will provide a decent level of High Availability. When I say HA, I mean easy transfer of a VM from a broken host to another. Automatic failover would be great, but manual failover is sufficient for us at this point. I plan to achieve this with centralized storage, and with a SAN being out of our budget, NFS (NAS) storage will have to do the trick(?).
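For reference, here is roughly what I'd expect the shared NFS storage to look like in /etc/pve/storage.cfg, based on my preliminary testing (the storage name, server address, and export path below are just placeholders):

```
nfs: nas1
	server 192.168.10.20
	export /volume1/proxmox
	path /mnt/pve/nas1
	content images,iso
	options vers=3
```

Since every node in the cluster sees the same storage definition, any node should be able to start a VM whose disk lives on that export.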

Now, my question with all of this is:
What do I need to look out for when planning such setups?

With the above listed workload, what would be the minimum (safe) network connection between the hosts/nodes and the centralized storage? I was thinking 10GbE to allow for future nodes and expansion – this would be separate from the public network.

Do you have any hardware (switches, NAS, Raid cards, etc) recommendations that would suit our needs and would happily work with Proxmox?

Any suggestions, recommendations or ideas are more than welcome! I’m a one man shop here and any outside input is greatly appreciated!

Thank you!
 
I do not know what sort of workload the items you mentioned would be, but I can share my opinions and experiences with you.

If you use NFS on a NAS for storage you have a single point of failure (SPOF), and performance might also be an issue with such a setup.

SANs cost too much; I agree with you there.

I am a big fan of DRBD.
It works OK over 1 Gb Ethernet, a little better with two bonded 1 Gb ports, and very well over 10 Gb connections.
DRBD works with Dolphin, 10 Gb Ethernet, and Infiniband (using IPoIB; SDP does not work on Proxmox).
We purchased a bunch of used Infiniband switches and cards from ebay for super cheap prices and use IPoIB for DRBD and Proxmox 2.0 cluster communications.
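If you go the bonded 1 Gb route, the dedicated replication/storage link is just a Linux bond defined in /etc/network/interfaces on each node. A rough sketch (the interface names and addresses are examples, and balance-rr assumes your switch setup tolerates it):

```
# /etc/network/interfaces (excerpt) - dedicated DRBD/storage network
auto bond0
iface bond0 inet static
	address 10.0.7.1
	netmask 255.255.255.0
	slaves eth1 eth2
	bond_miimon 100
	bond_mode balance-rr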

Two servers plus DRBD and you have 100% redundancy.
Read the wiki and use two DRBD volumes as suggested: http://pve.proxmox.com/wiki/DRBD
With Proxmox 2.0 you can even setup automatic failover of VMs!
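As a sketch of what that wiki page describes (hostnames, disks, and addresses are placeholders, and this assumes the DRBD 8.3 series that ships with Proxmox), one of the two resources might look like:

```
# /etc/drbd.d/r0.res - one of two resources; r1 would mirror
# this layout on a second backing disk and port
resource r0 {
	protocol C;
	startup {
		wfc-timeout 0;
		degr-wfc-timeout 60;
		become-primary-on both;
	}
	net {
		cram-hmac-alg sha1;
		shared-secret "my-secret";
		allow-two-primaries;
	}
	on nodeA {
		device /dev/drbd0;
		disk /dev/sdb1;
		address 10.0.7.1:7788;
		meta-disk internal;
	}
	on nodeB {
		device /dev/drbd0;
		disk /dev/sdb1;
		address 10.0.7.2:7788;
		meta-disk internal;
	}
}
```

The idea behind two volumes is that each node normally runs its VMs on "its" volume, so after a split-brain you know which direction to resync each resource.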

Disk IO is usually the bottleneck, so a good RAID card with battery backed cache and fast disks is always very helpful.
I like Areca cards; I have many 1880ix-12 cards. The new 1882 series looks great, but I do not have one yet.
Here are some benchmarks I posted earlier today: http://forum.proxmox.com/threads/8540-new-fedora-virtio-win-0-1-22-iso-drive-(february-2012)

RAM is cheap; get as much as you can afford – you will never regret having too much.

I would suggest looking through the 2.0 forums for ideas too: http://forum.proxmox.com/forums/16-Proxmox-VE-2-0-beta
2.0 is currently at RC1 status and is entirely different from the 1.x series, so knowing what 2.0 is all about may help you with purchasing decisions.