Ceph pool tweaking....

Discussion in 'Proxmox VE: Installation and configuration' started by Vassilis Kipouros, Jan 25, 2019.

  1. Vassilis Kipouros

    Joined:
    Nov 2, 2016
    Messages:
    46
    Likes Received:
    3
    I have managed to successfully deploy a test cluster over three sites
    connected with 100mbit fiber.

    All three nodes have 3 OSDs each, and there are the default pools for cephfs-data and cephfs-metadata.

    The performance of this one pool stretched across the 100mbit links is low but acceptable for my test case.

    So my question is whether it's possible to create 4 pools:
    - one that stretches across all OSDs (and is slow), and
    - three pools that are local to each node, using only that node's OSDs.

    How do I do it? Can someone point me in the right direction or give a simple example?

    Thank you in advance...
     
  2. Proxmox India

    Proxmox India New Member

    Joined:
    Oct 16, 2017
    Messages:
    29
    Likes Received:
    3
    You will need to edit the CRUSH map manually. Just google around and you will find it.
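
    For example, the general shape of it is something like this (a rough sketch only; the bucket name pve1 and the pool name are placeholders, so check your actual host bucket names with "ceph osd crush tree" first):

      # list the CRUSH buckets to find the host bucket names
      ceph osd crush tree

      # rule that only picks OSDs under the host bucket "pve1";
      # failure domain is "osd", since all replicas stay on one node
      ceph osd crush rule create-replicated local-pve1 pve1 osd

      # pool bound to that rule (32 PGs is just an example value)
      ceph osd pool create local-pool-pve1 32 32 replicated local-pve1

    The default pools keep the standard replicated_rule, which spreads replicas across all hosts, so they would remain your "slow" stretched pool.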
     
  3. Vassilis Kipouros

    Joined:
    Nov 2, 2016
    Messages:
    46
    Likes Received:
    3
    I've been reading a lot of the Ceph documentation, but it looks confusing.
    I was hoping for some Proxmox-specific instructions, because I've broken
    my previous cluster by manually running ceph commands.
     
  4. Alwin

    Alwin Proxmox Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,329
    Likes Received:
    207
    This is just asking for disaster!

    On recovery (usually one of the reasons why you have Ceph in the first place), the data movement will max out your bandwidth. Further, Ceph has no locality awareness: a client will try to reach whichever OSD is primary for a PG, regardless of where that OSD sits. The three MONs will lose quorum once the IO load rises.
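
    You can check this yourself: for any given object, the acting set and its primary OSD can land on any host in the cluster. A quick look (the pool and object names here are just placeholders):

      # shows which PG and which OSDs serve a given object; the primary is
      # the first OSD in the acting set and may well sit on a remote site
      ceph osd map cephfs-data someobject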

    While having a pool on only one node works, it will not be fast, and clients still need to connect to a MON (possibly not the local one). The local IO may also amplify latency and recovery issues for the rest of the cluster. That use case is better served by a hardware or software RAID controller.

    TL;DR
    Don't run a Ceph cluster over low bandwidth/high latency networks. Create separate clusters and use rbd-mirror to replicate data. Or as an alternative use our storage replication (pvesr).
    https://pve.proxmox.com/pve-docs/chapter-pvesr.html
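
    A minimal pvesr sketch, assuming ZFS-backed guest storage on both nodes (the VMID 100, the target node name and the rate limit are placeholders):

      # replicate guest 100 to node "pve-site2" every 15 minutes,
      # capped at 10 MB/s, roughly what a 100 Mbit link can carry
      pvesr create-local-job 100-0 pve-site2 --schedule "*/15" --rate 10

      # list the configured jobs and check their state
      pvesr list
      pvesr status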
     
  5. Vassilis Kipouros

    Joined:
    Nov 2, 2016
    Messages:
    46
    Likes Received:
    3
    Thank you for your reply Alwin.

    Is it possible to have 3 nodes per site, in 3 sites, all clustered with Proxmox,
    and create a separate local Ceph cluster on each site, managed by Proxmox?
     
  6. Alwin

    Alwin Proxmox Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,329
    Likes Received:
    207
    With regard to your described environment, no. Corosync has similar latency requirements to Ceph.
     
  7. Vassilis Kipouros

    Joined:
    Nov 2, 2016
    Messages:
    46
    Likes Received:
    3
    Without using Ceph, my currently described environment works without a hitch (Proxmox-wise).
    I can fully manage the VMs and CTs on each site without a problem.

    So the question remains, can a proxmox cluster manage multiple ceph clusters?

    Or should I isolate the proxmox clusters per site?
    Can we have multiple datacenters on the proxmox gui?
     
  8. Alwin

    Alwin Proxmox Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,329
    Likes Received:
    207
    My earlier comment extends to Proxmox VE clusters (corosync) as well; it is not limited to the storage part.

    Can a Proxmox cluster manage multiple Ceph clusters? No.

    Should you isolate the Proxmox clusters per site? Yes.

    Can we have multiple datacenters in the Proxmox GUI? No.
     