Recommended setup

Discussion in 'Proxmox VE: Installation and configuration' started by Bobbbb, Jul 13, 2018.

  1. Bobbbb

    Bobbbb New Member

    Hi everyone,

    I currently have 1 server, running about 20 sites.
    I would like to change that to a Proxmox server, virtualise the existing server, and create some kind of HA.

    I currently have 2 servers with 4 x 480GB SSD drives.

    My plan is RAID 5 for Proxmox, and Ceph on the additional drive.
    Once that's done, I will get (or use an existing...) smaller server and allocate a drive for Ceph, and perhaps add more in the future.

    Is that a good setup? I believe HA is currently not an option with ZFS and I am required to use Ceph.

    Any other ideas are also welcome!
    Thanks a lot,
    Bob
     
  2. Bobbbb

    Bobbbb New Member

    Or is RAID not even required, because in case of a drive failure HA will just fail over to another node in the cluster?
     
  3. alexskysilk

    alexskysilk Active Member
    Proxmox VE Subscriber

    Some notes regarding your config:

    1. You need a minimum of 3 nodes for Ceph.
    2. Ceph with one drive is not going to give you useful results.
    3. Parity RAID is not well suited for virtual disks; it will work, but performance may not be adequate.
    4. RAID5 is dangerous. When a drive fails you will be operating without parity, which leaves you exposed to data corruption until the disk is replaced and rebuilt. RAID10 would function better, at the cost of 480GB less usable space than RAID5 (see the quick numbers below).
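
    For the quick numbers, with 4 x 480GB drives (usable capacity only, ignoring overhead):

        RAID5  (3 data + 1 parity): usable ≈ 3 x 480GB = 1440GB
        RAID10 (2 mirrored pairs):  usable ≈ 2 x 480GB =  960GB
        difference:                                       480GB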
     
  4. Bobbbb

    Bobbbb New Member

    Thanks for the reply!

    1) My end goal is to have 3 nodes, all running Ceph (and maybe more after that...)
    I just need to get started.
    Currently I have 2 servers with 4 x 480GB SSDs, so I need to decide on the best configuration to get started.

    I agree that RAID 10 is always best practice, but (and please correct me if I am wrong... :))
    I will not be able to have Proxmox and Ceph with 4 drives and RAID 10, will I?
    2) Isn't parity good enough, considering that I'll have a 3-node cluster?

    EDIT: Unless I do RAID 1 for Proxmox, and the rest without RAID?
    Surely it's OK if the Ceph drives are not RAIDed?
     
  5. alexskysilk

    alexskysilk Active Member
    Proxmox VE Subscriber

    You would not normally have a RAID-backed vdisk store AND a Ceph-backed one. Pick one.

    If you intend to roll out Ceph, 3 nodes is where you start.

    You can have a RAID10 volume with any multiple of 2 drives. Some controllers will allow you to make a RAID1E volume with any number of drives, which is similar to the way Ceph does replication groups.

    Let me make it simpler: parity and replication are two methods of creating fault tolerance. What's "good enough" depends on your tolerance for performance, downtime and/or data loss. If you deploy a Ceph pool, it will come down to dual or triple replication (see the rough numbers below).

    Exactly.
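
    To put rough numbers on the replication options: with a replicated pool, usable space ≈ total raw OSD capacity / replication size. For example (illustrative figures only), with 4320GB of raw OSD capacity across the cluster:

        size=2 (dual replication):   usable ≈ 4320GB / 2 = 2160GB
        size=3 (triple replication): usable ≈ 4320GB / 3 = 1440GB

    and in practice you keep part of that free so Ceph can re-replicate after a failure.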
     
  6. Bobbbb

    Bobbbb New Member

    Thanks again.

    Stupid question: is Ceph the same kind of object storage solution that many companies offer?
    I'm looking at getting some space from DigitalOcean.
     
  7. Bobbbb

    Bobbbb New Member

    Anyone know?
    Can I add DigitalOcean Spaces or Amazon S3 storage as an external RBD?
     
  8. alexskysilk

    alexskysilk Active Member
    Proxmox VE Subscriber

    https://ceph.com

    Ceph can do object storage but that is only one of its interface capabilities.

    Not easily. Anything is possible with enough persistence - but don't expect it to have a useful (performant) application.
     
  9. guletz

    guletz Active Member

    And I guess that most of these sites have some kind of DB?
     
  10. Bobbbb

    Bobbbb New Member

    Yes, that is correct.
    What is the recommendation regarding the DB?

    My 3rd Proxmox node will be at another location, so I will have 2 Proxmox nodes in the same data center and a 3rd one in another country.
    Will that be OK for Ceph?

    Once everything is moved to the new cluster, I will add another node.
     
  11. guletz

    guletz Active Member

    I do not use Ceph, but I think it will be a problem to use a 3rd node in another DC/country, because you will need a lot of bandwidth (for Ceph), and a PMX cluster needs very low latency. If this is your usage scenario, my guess is that it will not work. It could work with async replication of data from your DC (PMX cluster) -> remote DC (another PMX cluster, or a standalone non-clustered node). The remote DC would then serve as disaster recovery. But if you need to write data in both DCs at the same time, you will need to use some kind of multi-node SQL cluster (like a Percona MySQL cluster).
     
  12. Bobbbb

    Bobbbb New Member

    Yes, I was thinking about using Percona (we are actually currently using it); I've just had some bad experiences with it (although the environment wasn't very stable).

    My plan is that the nodes in the same DC will fail over to each other, with a fast LAN between them; only in the very unlikely event that both of them die will the 3rd be used.
     
  13. Bobbbb

    Bobbbb New Member

    Are there any other storage solutions that can work, other than Ceph?
    My original plan was to use ZFS, but I see HA is not supported.
     
  14. guletz

    guletz Active Member

    I have run percona-cluster over ZFS for at least 3 years, and I can say that for me it is rock solid.
     
  15. guletz

    guletz Active Member

    You can use ZFS if it is OK to have async replication via ZFS (scheduled, say, every 5 min) from node X1 -> node X2. And with percona-cluster on both nodes/VMs you will not lose (almost) any data. You will need to use a monitoring tool like monit, which will start VMx using the last snapshot that was replicated (in less than 2 min). A rough sketch of the replication side is below.
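
    A minimal sketch of the ZFS replication side, assuming the VM disk is a zvol such as rpool/data/vm-100-disk-0 replicated to the same dataset name on node X2 (all names, the interval and the SSH setup are examples only; recent PMX versions also ship built-in ZFS storage replication, which does this for you):

        #!/usr/bin/env python3
        # Sketch of incremental ZFS replication from node X1 to node X2.
        # Dataset, target host and snapshot prefix are placeholders.
        # Intended to be run from cron, e.g. every 5 minutes.
        import subprocess
        import time

        DATASET = "rpool/data/vm-100-disk-0"   # example zvol backing VMx
        TARGET  = "root@node-x2"               # example SSH target (key-based auth assumed)
        PREFIX  = "rep"                        # snapshot name prefix used by this script

        def snapshots(dataset):
            """Snapshot names of the dataset, oldest first."""
            out = subprocess.run(
                ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
                 "-s", "creation", "-d", "1", dataset],
                capture_output=True, text=True, check=True).stdout
            return [line.split("@", 1)[1] for line in out.splitlines() if "@" in line]

        def replicate():
            prev = [s for s in snapshots(DATASET) if s.startswith(PREFIX)]
            new_snap = f"{PREFIX}-{int(time.time())}"
            subprocess.run(["zfs", "snapshot", f"{DATASET}@{new_snap}"], check=True)

            if prev:
                # Incremental send from the last snapshot (assumes earlier runs succeeded).
                send = subprocess.Popen(
                    ["zfs", "send", "-i", f"{DATASET}@{prev[-1]}", f"{DATASET}@{new_snap}"],
                    stdout=subprocess.PIPE)
            else:
                # First run: full send.
                send = subprocess.Popen(["zfs", "send", f"{DATASET}@{new_snap}"],
                                        stdout=subprocess.PIPE)

            # Receive on the standby node; -F rolls its copy back to the common snapshot.
            subprocess.run(["ssh", TARGET, "zfs", "recv", "-F", DATASET],
                           stdin=send.stdout, check=True)
            send.wait()
            # Pruning of old snapshots is left out to keep the sketch short.

        if __name__ == "__main__":
            replicate()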

    Another option is to use LizardFS (distributed and/or replicated) - I am evaluating this tool now. LizardFS could be used in an HA environment under PMX.
     
  16. Bobbbb

    Bobbbb New Member

    Will monit run on the Proxmox host?

    So, I like your idea... but I am also confused now.
    lol

    Do you recommend ZFS over Ceph, based on my requirements and what I have available?
     
  17. guletz

    guletz Active Member

    Yes.

    VMx(node1) ----> zfs replicate ----> VMx(node2/monit)

    VMx(node1) is up and running, VMx(node2) is down (because they have the same ID). Monit checks every 90 seconds (or whatever interval you want) whether VMx(node1) is up. When VMx(node1) is down, monit will change the ID of VMx(node2) and the name of the vHDD (to match the changed ID), then start this new VMx on node2 with the new ID, using the data from the last successful ZFS sync. The whole process can take less than 2 minutes. A rough sketch of the check-and-failover logic is below.
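
    A minimal sketch of that check-and-failover logic (slightly simplified: instead of changing the VM ID at failover time, it assumes a standby VM with its own ID has already been created on node2, so only the replicated disk needs to be cloned under the standby's disk name; all IDs, addresses and dataset names are placeholders, and in a real setup monit would be the thing calling it):

        #!/usr/bin/env python3
        # Sketch of a monit-style failover check, meant to run on node2.
        # All IDs, addresses and dataset names are examples only.
        import subprocess

        PRIMARY_IP   = "192.0.2.10"                # example address of VMx on node1
        SRC_DATASET  = "rpool/data/vm-100-disk-0"  # replication target of VMx's disk
        DST_DATASET  = "rpool/data/vm-9100-disk-0" # disk referenced by the standby VM
        STANDBY_VMID = "9100"                      # standby VM prepared on node2

        def primary_is_up():
            """Very naive health check: a single ping (monit handles this part for you)."""
            return subprocess.run(["ping", "-c", "1", "-W", "2", PRIMARY_IP],
                                  stdout=subprocess.DEVNULL).returncode == 0

        def latest_snapshot(dataset):
            """Most recently created snapshot of the dataset."""
            out = subprocess.run(
                ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
                 "-s", "creation", "-d", "1", dataset],
                capture_output=True, text=True, check=True).stdout
            return out.splitlines()[-1]            # assumes at least one replicated snapshot

        def failover():
            # Clone the last replicated snapshot under the standby VM's disk name,
            # so the replicated dataset itself stays usable for future syncs.
            # (Assumes DST_DATASET does not exist yet, i.e. no previous failover.)
            subprocess.run(["zfs", "clone", latest_snapshot(SRC_DATASET), DST_DATASET],
                           check=True)
            # Start the pre-created standby VM, whose config points at DST_DATASET.
            subprocess.run(["qm", "start", STANDBY_VMID], check=True)

        if __name__ == "__main__":
            if not primary_is_up():
                failover()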
     
  18. guletz

    guletz Active Member

    I do not use Ceph, so I cannot say whether Ceph is OK for your case or not.
     