Growing the Ceph and Proxmox clusters

ibravo

Member
Sep 8, 2020
We've deployed a 4-node cluster with Proxmox and Ceph, with both SSD and HDD devices. The cluster is performing as expected and we are happy with the configuration, but we want to make recovery a bit more robust in case of a major failure. We have also found that 3 PVE nodes are more than enough for our use cases, so we will need to remove the 4th node from the cluster. The documentation specifies this procedure, so we should be good on that front.

The items we need to change, and where we need help, are the following:

  1. The cluster backplane is a single 10 Gb NIC. We originally deployed the heartbeat, public, and Ceph private networks on the same IP address. We then managed to split the Ceph private network onto a VLAN on the same NIC (a rough sketch of that configuration follows this list), and we may move it to a second 10 Gb NIC once we finalize the physical cluster sizing.
What we want to do now is to split the public IP from the cluster heartbeat traffic, but I couldn't find any documentation about it. Can someone shed some light on how to do this on a running cluster?


  2. We want to increase the number of Ceph nodes to get more distributed capacity in addition to the PVE cluster servers. How should we go about adding these additional nodes to the Ceph cluster? Note that we want these machines to stay out of the PVE cluster itself: just Ceph nodes.
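
For reference, here is a rough sketch of what that VLAN split from item 1 looks like in /etc/network/interfaces on one of our nodes (the NIC name eno1, the VLAN tag 100, and all addresses below are placeholders, not our real values):

    auto eno1
    iface eno1 inet static
        address 192.168.1.11/24
        # cluster heartbeat + public traffic currently share this IP

    auto eno1.100
    iface eno1.100 inet static
        address 10.10.10.11/24
        # Ceph private network, tagged as VLAN 100 on the same NIC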

Thanks,
IB
 
What we want to do now is to split the public IP from the cluster heartbeat traffic, but I couldn't find any documentation about it. Can someone shed some light on how to do this on a running cluster?
You can't do that while the cluster has any active connections. The procedure would be to stop ALL guest connections, change the public network in ceph.conf, and then restart all monitors/managers/OSDs/etc. It's probably simplest to reboot all nodes.

DO NOT TRY TO RUN ANY GUESTS until all nodes have been processed.
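
To illustrate, the edit would look something like this; the subnets below are examples only, substitute your own:

    # /etc/pve/ceph.conf (excerpt)
    [global]
        public_network = 10.10.20.0/24    # new, dedicated public network (example)
        cluster_network = 10.10.10.0/24   # existing Ceph private network (example)

    # then, on each node, once all guests are stopped:
    systemctl restart ceph.target         # or simply reboot the node

If the monitors move to the new subnet, their addresses (the mon_host entry and the per-monitor sections) have to be updated to match as well.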

How should we go about adding these additional nodes to the Ceph cluster?
Just add them as you normally would (https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_join_node_to_cluster), then run pveceph install, pveceph init, and pveceph createosd for the new drives.
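
As a rough sketch, assuming the new node already runs Proxmox VE and its empty disk is /dev/sdb (both the IP and the device name below are placeholders):

    # on the new node:
    pvecm add 192.168.1.11        # join the existing PVE cluster (IP of an existing node)
    pveceph install               # install the Ceph packages
    pveceph init                  # initialize the Ceph config if not already present
    pveceph createosd /dev/sdb    # create an OSD; repeat for each new drive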
 
