common Ceph between two clusters

Discussion in 'Proxmox VE: Installation and configuration' started by Spiros Papageorgiou, Aug 12, 2018.

  1. Spiros Papageorgiou

    Joined:
    Aug 1, 2017
    Messages:
    40
    Likes Received:
    0
    Hi all,

    Is it possible to have one Ceph deployment spanning two separate Proxmox clusters (same version)?

    My case is that I have two different clusters (totally different, even the HW vendor is different) that are used for different purposes, and each node in each cluster has a few disks (SSDs and SAS) that I would like to use under a single Ceph deployment. Each cluster has enough disks for the most basic Ceph deployment, so I would get much better resiliency and capacity if I could merge the resources into one Ceph cluster.

    Could I do this?

    Thanx,
    sp
     
  2. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,512
    Likes Received:
    131
    I would argue that this is less a technical question and more one of 'separation of concerns'. With the resources of both clusters involved, you need to take more care with failed hardware (the failure domain shifts) and with maintenance. And since you would then be treating the two clusters as a single one, you could just as well join them together for every other resource too. Also keep in mind that Ceph works better with a homogeneous hardware setup.
     
  3. Spiros Papageorgiou

    Joined:
    Aug 1, 2017
    Messages:
    40
    Likes Received:
    0
    Hi Alwin,

    Thanx for the answer.
    My HW is indeed homogeneous. The two clusters both have the same CPUs, dedicated 10G networks for Ceph, and SSDs of the same class.
    The two clusters serve different business needs but they are managed by the same team.

    Increasing the number of nodes and the number of OSDs sounds like a good way to reach a Ceph cluster that is more tolerant of failures and faster to recover. I understand that the approach might have problems (the Ceph traffic between the two clusters will be routed), but everything will be low latency and redundant, with low failure recovery times. I also plan to take the two-cluster situation into account and organize my CRUSH rules accordingly.
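    For the CRUSH side, one common approach is to put each former cluster's hosts under its own bucket (e.g. of type `datacenter`) in the CRUSH map and replicate across both. A rough sketch of such a rule, assuming the hosts have already been moved under two datacenter buckets (the rule name, id, and bucket layout are illustrative, not from this thread):

    ```
    # Sketch of a CRUSH rule spreading replicas across two datacenter buckets,
    # two hosts per datacenter. Assumes a hierarchy like:
    #   root default -> datacenter dc-a, dc-b -> hosts -> OSDs
    rule replicated_two_dcs {
        id 1
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
    }
    ```

    With a pool size of 4 this keeps two copies in each former cluster, so losing either site entirely still leaves a full set of data.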

    Anyway, technically speaking, I have a few questions:
    - Do I need to add the other cluster's public network to my public network?
    - Do I need to add the other cluster's cluster network to my cluster network?
    - Do the two cluster networks need to communicate with each other? (I guess yes)
    - Do the two public networks need to communicate with each other? (I guess not)
    - If I also add the monitors from each cluster to the other cluster, is that all that is required for the two clusters to work as one?
    - What about keyrings and security?

    Thanx,
    Sp
     
  4. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    1,512
    Likes Received:
    131
    To migrate into one big cluster, you would need to move all data onto the other cluster and disassemble the old one (easiest: a fresh install) to put its nodes into the remaining cluster. The PVE nodes need to be joined to the remaining corosync cluster, and on the Ceph side each cluster uses a different fsid and set of keys, so the two cannot simply be merged in place. The public/cluster Ceph networks can be routed.
    http://docs.ceph.com/docs/luminous/rados/configuration/network-config-ref/

    If you only want to merge Ceph, the above still applies, but only to the Ceph services, not to the whole PVE installation.
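    To make the network part concrete: the merged cluster ends up with a single fsid, one monitor quorum, and network options that can list several routed subnets. A minimal sketch of the relevant `[global]` section (the fsid, subnets, and monitor addresses below are placeholders, not taken from this thread):

    ```
    # /etc/pve/ceph.conf -- sketch only; all values are illustrative
    [global]
        # One fsid for the whole merged cluster
        fsid = 11111111-2222-3333-4444-555555555555
        # Comma-separated subnets are allowed, so the routed
        # networks of both former clusters can be listed together
        public network  = 10.10.1.0/24, 10.10.2.0/24
        cluster network = 10.20.1.0/24, 10.20.2.0/24
        # Monitors from both sites in a single quorum
        mon host = 10.10.1.11, 10.10.1.12, 10.10.2.11
    ```

    All daemons then authenticate against the one cluster with the one set of cephx keys; keyrings from the dismantled cluster are discarded.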
     