Proxmox VE Ceph Server released (beta)

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Jan 24, 2014.

  1. impire

    impire Member

    Joined:
    Jun 10, 2010
    Messages:
    106
    Likes Received:
    0
    Thank you. This sounds like a lot just to store the ISOs.

- Can I just upload it to the local drive (boot drive) of each server? I can download templates to 'local', but when I tried to upload an ISO, I got the error "Error: 500: can't activate storage 'local' on node 'server1' ".

- In the Datacenter -> Storage -> Add section I see options to create Directory, LVM and such. Can I just create a directory and put the ISOs on it?

I just want a simple method to store ISOs without having to go through CephFS. Can it be done? Thank you in advance for your help.
     
    #101 impire, May 27, 2014
    Last edited: May 27, 2014
  2. impire

    impire Member

    Joined:
    Jun 10, 2010
    Messages:
    106
    Likes Received:
    0
I keep seeing the term "weight" being applied throughout this discussion. Can someone help clarify what exactly "weight" is in the Ceph/Proxmox setup?
     
  3. impire

    impire Member

    Joined:
    Jun 10, 2010
    Messages:
    106
    Likes Received:
    0
    Hello,

Can anyone help? I am sure I am not the only one with this question. I just want to be able to upload ISOs without going through CephFS. Can I just upload them to the local drive (boot)? Thanks in advance for your help.
     
  4. impire

    impire Member

    Joined:
    Jun 10, 2010
    Messages:
    106
    Likes Received:
    0
I created a pool with 3 replicas on 30TB. When I go to the datacenter and create the storage, it shows me 30TB of available space. Shouldn't it display 10TB as the available space?
     
  5. spirit

    spirit Well-Known Member

    Joined:
    Apr 2, 2010
    Messages:
    3,370
    Likes Received:
    140
Yes, of course!

You can also upload it to each Proxmox node's local storage, which is easier than CephFS ;)
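(A minimal sketch of what such a directory storage entry could look like in /etc/pve/storage.cfg; the storage name and path below are just examples, and the default 'local' storage already keeps ISOs under /var/lib/vz/template/iso:)

    # /etc/pve/storage.cfg -- hypothetical directory storage for ISOs
    dir: iso-store
        path /mnt/iso-store
        content iso,vztmpl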
     
  6. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,484
    Likes Received:
    314
Not sure about that, because you can have different pools with different numbers of replicas. So that would just add confusion.
     
  7. symmcom

    symmcom Well-Known Member

    Joined:
    Oct 28, 2012
    Messages:
    1,075
    Likes Received:
    25
Available space will always show the total space of the Ceph cluster. Look at the used space: when you save a 1GB file you will see 3GB being used.

    Sent from my SGH-I747M using Tapatalk
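(In other words, usable capacity is roughly raw capacity divided by the pool's replica count; 'ceph df' shows both views. A rough sketch, with purely illustrative numbers:)

    # GLOBAL section reports raw cluster space; POOLS section reports per-pool usage
    ceph df
    # with 30TB raw and a replica size of 3, plan for roughly 30TB / 3 = 10TB of data,
    # and leave headroom so the cluster never runs close to full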
     
  8. symmcom

    symmcom Well-Known Member

    Joined:
    Oct 28, 2012
    Messages:
    1,075
    Likes Received:
    25
Local storage is definitely easier than CephFS. But say you want to play around with OpenVZ or a different image type; unless you have plenty of space on the local node, CephFS will be a much better option. Once it is set up, it runs without needing much tending.


    Sent from my SGH-I747M using Tapatalk
     
  9. felipe

    felipe Member

    Joined:
    Oct 28, 2013
    Messages:
    152
    Likes Received:
    1
I want to use different pools for my SATA and SSD disks (one big pool for slow SATA data and one small pool for fast SSD).
As I will use them within the same servers, I will need different root buckets and then an SSD and a SATA host entry for each real host. Or is there any simpler way?
I think it is impossible to use the Proxmox GUI afterwards?
     
  10. symmcom

    symmcom Well-Known Member

    Joined:
    Oct 28, 2012
    Messages:
    1,075
    Likes Received:
    25
You are trying what I had been trying for the last several months, until recently I decided to leave it behind.

Basically you edit the crushmap to create another bucket and tell the SSD or HDD pool to use that ruleset (a rough sketch of the commands follows below). Yes, the Proxmox GUI cannot do it; it has to be done manually. The problem is, the Proxmox GUI will only show the OSD list from one bucket, so for the other bucket you will have to rely entirely on command-line management, which is not that hard.
The big reason I left it behind is a performance issue. Both my SSD and HDD pools were on the same node and both pools took a performance penalty. I then decided to get rid of the SSDs and filled all 3 nodes with HDDs. I did a benchmark with HDDs as OSDs in this thread: http://forum.proxmox.com/threads/18580-CEPH-desktop-class-HDD-benchmark

Another big negative point: when you reboot a node, ALL OSDs in the additional bucket will move back to the default bucket! You MUST remember to manually move the OSDs back to their bucket every time you reboot. If you don't, the Ceph cluster will go into rebalancing! I could not avoid this every time I rebooted.

This is NOT a Proxmox issue.
     
    #110 symmcom, May 28, 2014
    Last edited: May 28, 2014
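(A rough sketch of the kind of commands involved in the crushmap approach described above, assuming hypothetical bucket, pool and OSD names; this is the generic separate-CRUSH-root method, not anything Proxmox-specific:)

    # create a separate root and a per-node host bucket for the SSD OSDs (names are examples)
    ceph osd crush add-bucket ssd root
    ceph osd crush add-bucket node1-ssd host
    ceph osd crush move node1-ssd root=ssd
    # place an SSD OSD under the new host bucket with its weight
    ceph osd crush set osd.12 0.5 root=ssd host=node1-ssd
    # create a rule drawing from the new root, then point the SSD pool at it
    ceph osd crush rule create-simple ssd-rule ssd host
    ceph osd pool set ssd-pool crush_ruleset 1   # rule id taken from 'ceph osd crush rule dump'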
  11. felipe

    felipe Member

    Joined:
    Oct 28, 2013
    Messages:
    152
    Likes Received:
    1
>> The big reason I left it behind is a performance issue. Both my SSD and HDD pools were on the same node and both pools took a performance penalty
OK, and how strong is your server? RAM, CPU?
You have only 1-gig Ethernet.
We will use 2 x 10-gig cards, so we have 20 gig for OSD traffic and 20 for the MONs.



>> Another big negative point: when you reboot a node, ALL OSDs in the additional bucket will move back to the default bucket! You MUST remember to manually move the OSDs back to their bucket every time you reboot. If you don't, the Ceph cluster will go into rebalancing! I could not avoid this every time I rebooted.

This should fix it:
'osd crush update on start = false' in your ceph.conf

But generally it seems only a few people use SSD and SATA separately. We have VMs with totally different workloads, and it makes no sense to use just one big pool for that...
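(A sketch of where that setting would go, assuming a pveceph-managed setup where /etc/ceph/ceph.conf is linked to /etc/pve/ceph.conf:)

    # /etc/pve/ceph.conf
    [osd]
        osd crush update on start = false
    # restart the OSD daemons (or reboot) for the change to take effect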
     
  12. symmcom

    symmcom Well-Known Member

    Joined:
    Oct 28, 2012
    Messages:
    1,075
    Likes Received:
    25


CEPH nodes had 32GB RAM each with i3 CPUs. Indeed I had a 1-gig network, but all three of CPU, RAM and bandwidth were monitored over several months to get accurate consumption data. Yes, network bandwidth may not be enough during rebalancing after a node or OSD loss, but generally none of the three was consumed more than 15% at any given time on a day-to-day basis. I don't believe the performance bottleneck was due to any of these. In my case both the SSD and HDD OSDs were on the same node.

On the same hardware, with 36 OSDs and one pool, things are a lot faster. I didn't add any extra RAM, nor change the CPU or upgrade the 1-gig network.

I did separate the SSD cluster from SATA in my setup. Again the same configuration of 32GB RAM, i3 CPU and gigabit network. The SSD pools are significantly faster now than they were before with the coexisting SATA pool. I am also using the same SSDs I used before.
     
  13. felipe

    felipe Member

    Joined:
    Oct 28, 2013
    Messages:
    152
    Likes Received:
    1
What was your benchmark with mixed SSD & SATA?
The question is whether a server with a lot more CPU, RAM & NIC power could handle that without performance loss...
Our machines have 2 x hexa-core CPUs and 128GB RAM.
     
  14. impire

    impire Member

    Joined:
    Jun 10, 2010
    Messages:
    106
    Likes Received:
    0
    I can't upload to the local drives on the nodes. I get the below error. I've tried searching Google and everywhere else. Nothing seems to help. Can anyone help to point me in the right direction?

    "Error: 500: can't activate storage 'local' on node 'CEPH1' "
     
  15. impire

    impire Member

    Joined:
    Jun 10, 2010
    Messages:
    106
    Likes Received:
    0
    Thank you.

1) So I just need to keep in mind the maximum my users can store?

For example, if I do 3 replicas on 30TB of raw space, then I just need to make sure the used space is no more than 9.5TB or 10TB?

2) Another question: what is the main advantage of having 3 replicas versus 2 replicas? Logically I know we have 3 copies of the data, but why do that when the data is already replicated? What's the point of having 3 sets of data? Sorry for the newbie question.

    Thanks in advance for your help.
     
  16. impire

    impire Member

    Joined:
    Jun 10, 2010
    Messages:
    106
    Likes Received:
    0
    It won't work. "Error: 500: can't activate storage 'local' on node 'CEPH1' "
     
  17. symmcom

    symmcom Well-Known Member

    Joined:
    Oct 28, 2012
    Messages:
    1,075
    Likes Received:
    25
    Were you talking about image #1 or image #2 when you said 'need to make sure used space can be no more than......'?
Image #1
    POOL-1.PNG

    Image #2
    POOL-2.PNG

Image #1 shows the actual usage by a Ceph pool and Image #2 shows cluster-wide available and used space. I think watching Image #2 will keep you informed about whether your cluster is running out of space or not. Image #1 will give you a better idea of which Ceph pool is using how much.

This is my opinion based on experience; I think using the following formula to decide on the replica number is a good idea:
number of nodes - 1 = replicas

So a 3-node Ceph cluster would have a replica count of 2, a 5-node Ceph cluster a replica count of 4, and so on. Having 3 replicas on 3 nodes will allow 2 simultaneous node failures and still keep functioning fine, whereas with 2 replicas, 2 node failures may cause a huge issue. But what are the chances that 2 machines will fail at the same time? Don't be confused, though: even when 3 replicas keep everything working through a 2-node failure, the cluster will still face rebalancing, making everything slower while it tries to recover. But I don't believe you would face any data loss due to stuck, unclean or stale PGs. Hope this makes sense.
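(A minimal example of applying that formula on a hypothetical 3-node cluster; the pool name 'rbd' and the min_size choice are assumptions:)

    # replicas = nodes - 1 = 2 on a 3-node cluster
    ceph osd pool set rbd size 2
    # min_size is how many copies must be available for client I/O to continue
    ceph osd pool set rbd min_size 1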
     
  18. impire

    impire Member

    Joined:
    Jun 10, 2010
    Messages:
    106
    Likes Received:
    0
I just did a fresh install of the nodes from scratch, set up Ceph and activated the OSDs.

Straight out of the box, I cannot upload an ISO to the local drive (boot) of any node. Can anybody shed some light on this problem?

"Error: 500: can't activate storage 'local' on node 'CEPH1' "

Perhaps this is a bug in Proxmox and/or its combination with Ceph.
     
  19. symmcom

    symmcom Well-Known Member

    Joined:
    Oct 28, 2012
    Messages:
    1,075
    Likes Received:
    25
It could not possibly be because of the Proxmox and Ceph combination. Proxmox does not manipulate Ceph in any way; it just uses Ceph's API.

How big is the ISO file you are trying to upload? Are you trying to do it through the web interface? Did you try to upload it through an FTP program such as FileZilla?

I have noticed the GUI has trouble if the ISO is more than about 400MB. I do not think it is a Proxmox limitation but rather the HTTP file upload itself.
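(As an alternative to the web uploader or FTP, a sketch of copying an ISO straight into a node's 'local' storage over SSH; the file and node names are placeholders:)

    # the 'local' storage keeps ISO images under /var/lib/vz/template/iso
    scp debian-7.5.0-amd64-netinst.iso root@CEPH1:/var/lib/vz/template/iso/
    # then check that the storage activates and has free space
    pvesm status
    df -h /var/lib/vz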
     
  20. impire

    impire Member

    Joined:
    Jun 10, 2010
    Messages:
    106
    Likes Received:
    0
    Hi,

    The file size is only 120MB to 250MB.

I have not tried to upload it through an FTP program like FileZilla.

Right now, I can still download templates to the local directory, but uploading an ISO of any size gives that 500 error.

I re-installed everything from scratch. Before adding Ceph, uploading to the local directory worked fine. After adding Ceph, the same exact problem.

Straight out of the box, this is definitely a Proxmox bug! I don't think I am the only one encountering this issue.
     