Proxmox 3 Node Cluster with Ceph, Networking

Feb 29, 2016
Hello


I need to set up a 3-node HA cluster using Proxmox, which will also use the local disks in each of the nodes for a Ceph cluster. I am going to use VE 4.2.

So my question is really about the preferred method to do the networking for this. I have a layer 3 Cisco SG300-52 switch and will want to configure several VLANs both for running Proxmox/Ceph and for the virtual machines.

My thoughts so far are that I need the following separate networks for Proxmox/Ceph:

1. Proxmox Management (2 x NICs)
2. Proxmox Cluster communication (2 x NICs)
3. Ceph replication (6 x NICs)
4. All other VLANs for VMs (4 x NICs)

There seem to be two schools of thought on this when people build hypervisors and clusters.


Would you bond all the NICs together, on both the switch and Proxmox, and then share everything out internally via Open vSwitch, using some sort of bandwidth management to control how much bandwidth each VLAN, or collection of VLANs, can have within Open vSwitch?
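To be clear, for this first option I am imagining something like this in /etc/network/interfaces on each node (just a rough sketch assuming the openvswitch-switch package is installed; the NIC names, VLAN tags and addresses are made-up placeholders):

# all NICs in one LACP bond owned by Open vSwitch
allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eth0 eth1 eth2 eth3 eth4 eth5 eth6 eth7 eth8 eth9 eth10 eth11 eth12 eth13
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 mgmt cluster ceph

# internal port for Proxmox management on VLAN 10
allow-vmbr0 mgmt
iface mgmt inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=10
    address 192.168.10.11
    netmask 255.255.255.0
    gateway 192.168.10.1

# the cluster (VLAN 20) and ceph (VLAN 30) internal ports would be defined
# the same way, each with its own tag and address

The per-VLAN bandwidth management would then presumably be done with OVS QoS/ingress policing on those internal ports, which I have never used.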

or

Would you create separate Linux bonds for the 3 Proxmox/Ceph VLANs listed above, and then use Open vSwitch on a fourth bond for the VM VLANs?
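For this second option I picture something like this instead (again only a sketch; the NIC names, bond modes and subnets are placeholders):

# Linux bond for Proxmox management
auto bond0
iface bond0 inet static
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100
    address 192.168.10.11
    netmask 255.255.255.0
    gateway 192.168.10.1

# Linux bond for cluster communication
auto bond1
iface bond1 inet static
    bond-slaves eth2 eth3
    bond-mode active-backup
    bond-miimon 100
    address 192.168.20.11
    netmask 255.255.255.0

# Linux bond for Ceph
auto bond2
iface bond2 inet static
    bond-slaves eth4 eth5 eth6 eth7 eth8 eth9
    bond-mode 802.3ad
    bond-miimon 100
    address 192.168.30.11
    netmask 255.255.255.0

# Open vSwitch bond + bridge carrying only the VM VLANs
allow-vmbr0 bond3
iface bond3 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eth10 eth11 eth12 eth13
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond3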


Obviously the first option gives much greater resilience: you can lose many NICs and the whole system keeps running, even if it is slowed down. I have done this before in Hyper-V, but I am totally new to Open vSwitch and have only limited exposure to Proxmox, having run just a single server so far with no VLANs.

The second option gives you failover for management and cluster communication, but if you lose both NICs in a bond then you are stuck.


I would appreciate people's thoughts on this. And if anyone has any links to some walkthroughs for this, I would be grateful if you could post them please.


Many Thanks
 
Would you create separate Linux bonds for the 3 Proxmox/Ceph VLANs listed above, and then use Open vSwitch on a fourth bond for the VM VLANs?

That one, as quoted above, for sure!
Obviously the first option gives much greater resilience and you can lose many NICs and the whole system keeps running, ....

Do you really expect to lose many NICs at the same time? IMHO the danger of negative interference from, let's say, the Ceph LAN onto the cluster LAN is higher than the chance that the 2 independent NICs for the cluster LAN fail at the same time.
 
Do you really expect to lose many NICs at the same time? IMHO the danger of negative interference from, let's say, the Ceph LAN onto the cluster LAN is higher than the chance that the 2 independent NICs for the cluster LAN fail at the same time.

Well, the servers need to go into a production environment, so I need to get as close to 100% uptime as I can.

There are 3 x 4-port PCIe NICs, so if I lose a card I could lose 4 ports in one go.
I will hopefully be using two physical switches stacked as one logical switch, allowing me to spread the network bonds across the two of them. In that case, if a switch went down I could lose half of the NICs.

The other thing is: do I need two VLANs/networks for Ceph, one for its replication and one for the VMs/Proxmox to access it? None of this is very clear in the documentation.

I have still had absolutely zero feedback from anyone at Proxmox, which is extremely disappointing seeing as we have purchased subscriptions as well.
 
Your question is a bit too general; it is better to ask simpler questions that get to the point.

And you should also consult your network admin and/or your switch documentation for the best setup for your needs.

For Ceph network docs, check http://docs.ceph.com/docs/master/
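On the two-networks question: Ceph distinguishes a public network, which the monitors and clients (your Proxmox hosts) use, from an optional cluster network that carries only OSD replication and heartbeat traffic. If you want to separate the two, it looks roughly like this in ceph.conf (the subnets are just examples):

[global]
    # network the monitors and clients (your Proxmox hosts) use
    public network = 192.168.30.0/24
    # optional dedicated network for OSD replication and heartbeats
    cluster network = 192.168.40.0/24

If you leave out the cluster network, replication simply runs over the public network.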
 
Hi,
1. Proxmox Management (2 x NICs)
2. Proxmox Cluster communication (2 x NICs)
3. Ceph replication (6 x NICs)
4. All other VLANs for VMs (4 x NICs)
Wow... 14 x Gigabit NICs.
Why are you not using 10 Gbit NICs?
 
Your question is a bit too general; it is better to ask simpler questions that get to the point.

And you should also consult your network admin and/or your switch documentation for the best setup for your needs.

For Ceph network docs, check http://docs.ceph.com/docs/master/

The question really isn't general at all. And I am the network admin, as well as the system admin, and I know how to set up the switches too. What I need some assistance with is Proxmox, please.
 
The question really isn't general at all. And I am the network admin, as well as the system admin, and I know how to set up the switches too. What I need some assistance with is Proxmox, please.

Yes, what is the question? It seems I did not get it from your first post.

I would never design a Ceph network with 1 Gbit NICs; it is just too slow and not recommended.
 
The question really isn't general at all. And I am the network admin, as well as the system admin, and I know how to set up the switches too. What I need some assistance with is Proxmox, please.

I would recommend you forgo the need for a 10 GbE switch by directly linking the machines, since your cluster is only 3 nodes. You would need 3 Intel X520-DA2s, which can be had on eBay for ~$100 per device. This would enable you to set up a static Ceph network without a switch and still have the advantage of 10 GbE. I do this with DRBD and it works fine until I get a split-brain, but that is another story.
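For example, on node1 with the two 10G ports cabled straight to node2 and node3 it could look something like this (the interface names and subnets are just examples; for Ceph itself you would additionally need routing so that every node can reach the others' Ceph addresses, whereas DRBD is happy with plain point-to-point links):

# 10G port cabled directly to node2
auto eth4
iface eth4 inet static
    address 10.15.12.1
    netmask 255.255.255.0

# 10G port cabled directly to node3
auto eth5
iface eth5 inet static
    address 10.15.13.1
    netmask 255.255.255.0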
 
