Planning PoC Proxmox cluster, need advice...

Will Dennis

Oct 10, 2016

Hi all,

Our company is looking for a way to provide VMs and LXC containers to our research staff, and Proxmox looks like it may just be the perfect fit... So I am tasked with taking some spare servers and spinning up a PoC. This rig doesn't have to be very performant, just good enough to demonstrate the principles and do some testing with.

So, what I want to end up with is a 3-node HA cluster using shared storage. The three nodes I have to use were originally sold as "storage servers"; each has the following specs:
- CPU: (2) Intel Xeon X5450 @ 3.00GHz (total 8 cores)
- RAM: 24 GB
- Disk: (1) 1TB RAID-1 array (Proxmox OS installed here); (2) 2TB HDDs; (8) 1TB HDDs
- NICs: (4) 1Gbps

I want to do a "hyperconverged" setup, wherein the shared storage is provided on the hypervisor nodes. (I previously did this sort of setup using oVirt on these nodes, with a Gluster shared filesystem spread across the nodes.) In this case, I was thinking of using Ceph as the basis of the shared storage (from what I have been reading here, Ceph is more commonly used than Gluster for distributed storage with Proxmox, correct?) I know that the lack of 10Gbps network infra is non-optimal as far as Ceph usage goes, but like I said, this is just a PoC rig; if the concept proves itself, then we can acquire better hardware for the purpose.

Which brings me to my ask: are there any planning guides available for doing this sort of thing (HA with Ceph shared storage on the same nodes)? I have found this on the wiki: https://pve.proxmox.com/wiki/Ceph_Server but I don't know yet (Proxmox newb here, just digging into all this now) if there's a broader overview available on how to lay out the hardware & networking, and then install and configure Proxmox to support this scenario.

Thanks for any info provided,
Will
 
Yes, I think I quoted that URL above in my post...
You did, sorry, missed it;-)
For your use case I would recommend:
- A separate network (VLAN) for cluster communication
- A separate network for VM/CT traffic, with several VLANs if appropriate
The above should run over a bonded network, e.g. 4 x 1 Gb (see the sketch below).
For Ceph, either:
- 10 Gb Ethernet on a separate network, dual 10 Gb in a bond, or
- 40-80 Gb InfiniBand on a separate network, dual 40-80 Gb in a bond (InfiniBand supports active/passive bonding only)

As for Ceph disk recommendations, the general opinion is the more disks the merrier, and SSD is superior to HDD.
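
To make the network recommendation concrete, here is a minimal sketch (not a tested recipe) of what it could look like in /etc/network/interfaces on a PVE node, assuming all four 1 Gb NICs go into one LACP bond. The interface names, bond mode, VLAN ID and addresses are only placeholders, and an 802.3ad bond needs a matching port-channel configured on the switch:

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1 eth2 eth3
    bond-miimon 100
    bond-mode 802.3ad

# cluster communication on its own tagged VLAN of the bond (VLAN 50 is arbitrary)
auto bond0.50
iface bond0.50 inet static
    address 10.10.50.11
    netmask 255.255.255.0

# management address plus VM/CT traffic on the bridge; guest NICs get their own VLAN tags
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0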
 
The first order of business would be to acquaint yourself with Ceph in a practical sense, especially considering the low-end nature of the cluster. Ceph wants a minimum number of OSDs for good performance, and if your old cores are busy processing OSD hashing they may not have enough headroom left for your VMs, and everything suffers. Too few OSDs and storage performance will be poor, and your PoC will not yield positive results.

The other matter is defining acceptable performance and fault tolerance. Your use case may be fine with the performance such a cluster would provide. Having clear goals will help you match what you can achieve with what you expect; it's completely possible to have a working cluster with your hardware.

Read through posts on this forum as well as Ceph's own documentation to get some ideas for what you'll need for a successful implementation.
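
To get a practical feel for the OSD side on PVE specifically, the rough per-node workflow uses the pveceph tooling described on the wiki page linked above. The commands below are only a sketch — device names and the network are placeholders, and the exact subcommand names can differ between PVE releases:

pveceph install                        # on every node: install the Ceph packages
pveceph init --network 10.10.60.0/24   # once: point Ceph at the dedicated storage network
pveceph createmon                      # on each node: create a monitor
pveceph createosd /dev/sdc             # on each node: one OSD per spare data disk
pveceph createosd /dev/sdd             # repeat for each remaining spare disk

With the spare 1 TB/2 TB disks listed in the first post, each node could contribute quite a few OSDs, which helps with the "more OSDs" point above.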
 
You did, sorry, missed it;-)
No problem :)

Sounds like good advice for if/when I go ahead with more production-grade gear, but for now I'm stuck with the servers I have on hand (as detailed above)... I have only 4 x 1Gbps NICs available; one has been used for the "management interface" of each PVE node, so that leaves 3 to use. I do have a Cisco 48 x 1G switch to use for the other needed interconnections, where I can set up the needed VLANs etc.

A few questions:
1) is the "cluster communication" vlan a separate network from the network used to manage the nodes via the web UI, or are those co-mingled?
2) For the "VM/CT" network, I was going to use a separate NIC for that and utilize VLAN tags on the interface, which would be a trunk to the Cisco switch; any NIC config instructions for that anywhere?
3) Finally, I was going to bond the remaining two NICs into a 2x1Gb channel to the Cisco, and set that up on a non-routed VLAN for the Ceph node interconnects. That sound OK?

Thanks for your kind responses!
 
1) is the "cluster communication" vlan a separate network from the network used to manage the nodes via the web UI, or are those co-mingled?
It can be the same NIC, but use separate VLANs (a different network).
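Purely as an illustration (interface names, the VLAN ID and the addresses are made up, and the switch port must carry the extra VLAN tagged), the management NIC could carry both like this:

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# corosync/cluster traffic on a tagged VLAN of the same physical NIC
auto eth0.50
iface eth0.50 inet static
    address 10.10.50.11
    netmask 255.255.255.0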
2) For the "VM/CT" network, I was going to use a separate NIC for that and utilize VLAN tags on the interface, which would be a trunk to the Cisco switch; any NIC config instructions for that anywhere?
The simplest way is to create a Linux bridge on the interface and then create VLANs on top of this bridge as needed.
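For example (a sketch only; "eth1" and the bridge name are assumptions, and the matching Cisco port would be configured as a trunk), the stanza in /etc/network/interfaces can be as simple as:

auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0

Each VM/CT virtual NIC then gets its VLAN tag set in the guest's network device settings, and PVE takes care of putting the tagged traffic onto the trunk.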
3) Finally, I was going to bond the remaining two NICs into a 2x1Gb channel to the Cisco, and set that up on a non-routed VLAN for the Ceph node interconnects. That sound OK?
For testing purposes it will be OK, but don't expect to set any performance records. In production, 10 Gb is the absolute minimum.
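A hedged sketch of such a bond (names, bond mode and addressing are placeholders; 802.3ad/LACP needs a matching port-channel on the Cisco side, while balance-alb works without switch configuration):

auto bond0
iface bond0 inet static
    address 10.10.60.11
    netmask 255.255.255.0
    bond-slaves eth2 eth3
    bond-miimon 100
    bond-mode 802.3ad

Keep in mind that any single Ceph connection will still only use one 1 Gb link at a time; the bond mainly helps with aggregate traffic and failover.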
 
