Looking to build a Proxmox cluster

tycoonbob

Member
Aug 25, 2014
Hi everyone. I've known about Proxmox for a while now, but haven't spent much time working with it. About two weeks ago I decided to make the switch to KVM (from Hyper-V) for my home resources, and am looking for some assistance in planning a new setup.

Currently I have two Dell R610 servers (dual Xeon L5520, 24GB RAM), each with a 60GB SSD (Server 2012 R2 with the Hyper-V role) and a 512GB SSD for VM storage. I use Hyper-V Replica to replicate VMs between the two hosts, so I don't have true HA, but I do have failover.

My setup has worked great for a long time (Hyper-V on Server 2008 R2 before Server 2012/R2), but my workload has shifted to being primarily Linux. I currently run 22 VMs, 16 of which are CentOS based; the rest are Server 2012 R2 and Windows 8.1. I'm also interested in using OpenVZ, which I think would let me make better use of my resources.

I also have a custom storage server (Norco RPC-4224 chassis, Xeon E3-1220v2, 16GB RAM, LSI MegaRAID 9261-8i controller, HP SAS Expander, quad gigabit NICs) with ten 3TB drives in RAID 10, for ~14TB usable storage. It holds all my data and currently isn't used to provide VM storage.

1) What would be the best Proxmox cluster setup for me? I like the idea of high availability, and I could use iSCSI for VM storage, although I'd rather not. I know little about Ceph (I see it's now supported) and DRBD, but I'm guessing I would need a third server before I could have reliable HA. Is that right?

2) Let's say I have three identical servers, each with a 60GB SSD for the Proxmox installation and a 512GB SSD for VM storage. If I use Ceph, will I end up with only 512GB of usable storage (i.e., a 3-way mirror of my data)?
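My back-of-the-envelope math, assuming a replicated Ceph pool with 3 copies (I haven't configured anything yet, so the replica count is an assumption on my part):

Code:
# Rough Ceph capacity check, assuming a replicated pool with 3 copies
osds=3                             # one 512GB SSD per node acting as an OSD
osd_gb=512                         # raw capacity per OSD
replicas=3                         # assumed replica count
raw_gb=$((osds * osd_gb))          # 1536 GB raw across the cluster
usable_gb=$((raw_gb / replicas))   # ~512 GB usable, before any overhead
echo "raw: ${raw_gb} GB, usable: ~${usable_gb} GB"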

3) If using Ceph, would I have true HA with live migration, or would the VM go down and have to be brought up on a different host?

4) Each of my R610s has quad gigabit NICs, with 2 free PCIe slots. If I clustered and used Ceph, would it be best to add a quad gigabit PCIe NIC and dedicate those 4 ports to Ceph, with the other 4 NICs in an LACP bond for Proxmox? I have iDRAC6 Enterprise in these, so I already have a dedicated management NIC.
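For the LACP side, this is roughly what I had in mind (an untested sketch of /etc/network/interfaces on one node; the interface names and addresses are placeholders, and 802.3ad needs a matching LAG on the switch):

Code:
# Hypothetical /etc/network/interfaces fragment on a Proxmox node
# eth0-eth3 bonded with LACP and bridged for Proxmox/VM traffic
auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3
        bond_mode 802.3ad
        bond_miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0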

5) I've not really used OpenVZ before and am curious how it works with Proxmox. Since Proxmox is Debian based, would all my OpenVZ instances also have to be Debian based, or would I have, say, a KVM instance running CentOS and then OpenVZ instances that share the kernel of that KVM instance?


I think that's all my questions for now. I've spent the past two weeks testing OpenStack, CloudStack, and OpenNebula, as well as simpler things like WebVirtMgr and Archipel, but it seems Proxmox will give me what I want straight out of the box (with OpenVZ as a bonus).

Thanks!
 
Ad 5) As long as the distro you want to run inside OpenVZ can work with the host's kernel, you're game. Containers don't run their own kernel; they all share the host's. Since Proxmox uses a stock RHEL 6.x kernel (with OpenVZ patches added), your CentOS containers wouldn't notice a thing.
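A minimal example of what that looks like from the host's CLI (the container ID, template name, hostname, and IP below are just placeholders; templates can also be fetched through the web GUI):

Code:
# Create and start a CentOS OpenVZ container on the Proxmox host
# (CTID, template name, hostname and IP are placeholder examples)
vzctl create 101 --ostemplate centos-6-x86_64-minimal
vzctl set 101 --hostname centos-ct --ipadd 192.168.1.101 --save
vzctl start 101

# Inside the container you see the host's kernel, with a CentOS userland
vzctl exec 101 uname -r
vzctl exec 101 cat /etc/redhat-release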
 
