Ceph Planning

Discussion in 'Proxmox VE: Installation and configuration' started by adamb, Feb 13, 2019.

  1. adamb

    adamb Member
    Proxmox Subscriber

    Joined:
    Mar 1, 2012
    Messages:
    980
    Likes Received:
    22
    We need some cheap bulk data storage with snapshot functionality that can scale easily. The data residing on this storage will basically be written and forgotten about. It would be read minimally.

    I was leaning towards a basic three-node Ceph cluster with either erasure coding or a replicated pool with two copies. We would also have an identical Ceph cluster in our other data center which we would replicate to. We have a 40Gb dark fiber connection between the two data centers.
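
    For reference, a rough sketch of what those two pool layouts could look like on the Ceph CLI. The pool and profile names are just placeholders, and with only three hosts an erasure-code profile is limited to k+m = 3 when the failure domain is host:

        # hypothetical names ("bulk", "bulk-ec"); with 3 hosts and a host
        # failure domain the profile can be at most k=2, m=1
        ceph osd erasure-code-profile set bulk-ec k=2 m=1 crush-failure-domain=host
        ceph osd pool create bulk 128 128 erasure bulk-ec

        # or the two-copy replicated alternative
        ceph osd pool create bulk 128 128 replicated
        ceph osd pool set bulk size 2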

    I am leaning towards the following in each node.

    CPU: E5-1620
    RAM: 64GB
    8x 4TB spinning rust
    2x Intel DC S3710 for journals (thinking 400GB models)
    2x Intel DC S3610 for OS
    Redundant 10Gb network

    Do you guys think the above hardware would be solid?

    We already have a separate cluster with Proxmox front ends, so the above nodes would be specifically for Ceph and this bulk data.
     
  2. elurex

    elurex Member
    Proxmox Subscriber

    Joined:
    Oct 28, 2015
    Messages:
    149
    Likes Received:
    4
    For BlueStore Ceph storage, SATA SSDs as journal/WAL/DB are not fast enough; it is best to use NVMe SSDs.

    Keep in mind that FileStore is being phased out.
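
    If you do go with fast DB/WAL devices, a BlueStore OSD with its RocksDB on a separate SSD can be created roughly like this (the device paths are placeholders for your setup):

        # data on the spinner, RocksDB/WAL on a partition of the faster SSD
        ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1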
     
  3. adamb

    adamb Member
    Proxmox Subscriber

    Joined:
    Mar 1, 2012
    Messages:
    980
    Likes Received:
    22
    Yeah, but I am not really looking for high performance. I just need bulk storage with OK performance that can scale, has snapshot capabilities, and is redundant.
     
  4. adamb

    adamb Member
    Proxmox Subscriber

    Joined:
    Mar 1, 2012
    Messages:
    980
    Likes Received:
    22
    I'm almost wondering if using SSDs for journals is overkill for my scenario as well.
     
  5. sg90

    sg90 Member

    Joined:
    Sep 21, 2018
    Messages:
    49
    Likes Received:
    6
    If you're just looking for archive storage and not high performance, forget the SSDs and just colocate the WAL/DB on the same hard disk as the OSD. You remove your single/double point of failure, since a single disk failure will only affect one OSD instead of multiple OSDs.

    With your requirements I doubt you would notice much difference in performance without a separate WAL/DB disk.
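
    A colocated OSD is also the simplest to set up; something along these lines (the device path is a placeholder, and pveceph is the Proxmox wrapper around the same operation):

        # data, RocksDB and WAL all on the one spinner
        ceph-volume lvm create --bluestore --data /dev/sdb
        # or via the Proxmox tooling
        pveceph createosd /dev/sdb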
     
  6. elurex

    elurex Member
    Proxmox Subscriber

    Joined:
    Oct 28, 2015
    Messages:
    149
    Likes Received:
    4
    Wouldn't local ZFS + replication serve your purpose?
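
    In Proxmox terms that would presumably be the built-in storage replication, which snapshots the guest's ZFS volumes and sends them to another node on a schedule. A rough sketch, with a made-up VM ID and target node name:

        # replicate guest 100 to node pve2 every 15 minutes (zfs send/receive under the hood)
        pvesr create-local-job 100-0 pve2 --schedule '*/15'
        pvesr status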
     
  7. adamb

    adamb Member
    Proxmox Subscriber

    Joined:
    Mar 1, 2012
    Messages:
    980
    Likes Received:
    22
    If redundancy wasn't a factor, ZFS would do the job.
     
  8. adamb

    adamb Member
    Proxmox Subscriber

    Joined:
    Mar 1, 2012
    Messages:
    980
    Likes Received:
    22
    That is the same conclusion I have come to as well. Looks like it's time to start testing. Thanks for the input!
     
  9. Craig St George

    Joined:
    Jul 31, 2018
    Messages:
    61
    Likes Received:
    7
    You don't need the SSDs. I have a lab setup with spinning disks and not even an E5, just some 4-core E5620s with 32GB memory, and it works fine. Slow, but fine, and it's using BlueStore.
     