Ceph problem when master node is out

Discussion in 'Proxmox VE: Installation and configuration' started by Konstantinos Pappas, Jan 7, 2015.

  1. Konstantinos Pappas

    Konstantinos Pappas New Member

    Joined:
    Jan 7, 2015
    Messages:
    27
    Likes Received:
    0
    Wasim,

    I did all the tests via Oracle VirtualBox.
     
  2. symmcom

    symmcom Active Member

    Joined:
    Oct 28, 2012
    Messages:
    1,066
    Likes Received:
    24
    I too set up my very first Ceph cluster in a virtual environment, so I know it works in that environment.

    From all the info you have provided, it is apparent to me that something is going wrong in your cluster installation. It is not installing/configuring as it should. The weight should not be 0, and storage.cfg does not have the entries it should.
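    (For anyone following along, a quick way to check both points is sketched below. The storage ID, pool name and monitor IPs are placeholders, not values from this thread.)

    Code:
        ceph osd tree               # every OSD should show a non-zero value in the WEIGHT column
        cat /etc/pve/storage.cfg    # should contain an RBD entry, roughly like:
        #
        #   rbd: ceph-vm
        #        monhost 10.10.10.1;10.10.10.2;10.10.10.3
        #        pool rbd
        #        content images
        #        username admin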

    The only thing I can suggest now is to let one of us remote access your computer to really see what is going on. I am set up with TeamViewer. You can download TeamViewer QuickSupport from www.teamviewer.com, then email me the Partner ID and password given by TeamViewer after installation.

    From all the postings in this thread I don't see a valid reason why it is not working for you.
     
  3. Konstantinos Pappas

    Konstantinos Pappas New Member

    Joined:
    Jan 7, 2015
    Messages:
    27
    Likes Received:
    0
    Wasim, check your email in 5 min.

    Thanks
     
  4. udo

    udo Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,835
    Likes Received:
    159
    OK Wasim,
    your turn ;-)

    Udo
     
  5. symmcom

    symmcom Active Member

    Joined:
    Oct 28, 2012
    Messages:
    1,066
    Likes Received:
    24
    @Konstantinos
    Not sure if you sent me that email with the TeamViewer ID or not, but I did not receive anything. If you have not, send it to wahmed@symmcom.com.

    @Udo
    If I cannot make heads or tails of it, I will hand it over to you. :) You are next in line.
     
  6. symmcom

    symmcom Active Member

    Joined:
    Oct 28, 2012
    Messages:
    1,066
    Likes Received:
    24
    @Udo,
    As we suspected, the weight of the OSDs was the only cause of this issue. For reasons unknown, all 12 OSDs had a weight of 0, so no PGs were created even though all OSDs showed as up and in. All 256 PGs were stuck and degraded. As soon as I reweighted them, all PGs were created, and shutting down node 1 did not affect the Ceph cluster at all.

    I suspect it was the combination of VirtualBox and the small 8GB virtual disk images for the OSDs that caused the weight of 0. I manually set the weights to 0.006. Konstantinos installed the same setup several times, following the Proxmox wiki to the letter, but the weight was 0 every time without any manual intervention.
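    For the record, the manual fix amounts to something like the following (the 0.006 value is the one mentioned above; OSD IDs 0 through 11 are assumed):

    Code:
        ceph osd tree                         # all OSDs listed, but with weight 0
        ceph osd crush reweight osd.0 0.006   # repeat for osd.1 through osd.11
        ceph -s                               # PGs should now go active+clean instead of stuck/degraded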

    But.... it is all working now.
     
  7. udo

    udo Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,835
    Likes Received:
    159
    Hi Wasim,
    we would all have more time if people read the posts: http://forum.proxmox.com/threads/20700-Ceph-problem-when-master-node-is-out?p=105666#post105666

    Nevertheless - yes, 8GB for a disk is too little. You need 10GB to get a weight of 0.01!
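    (The default CRUSH weight is roughly the disk size expressed in TiB, rounded to two decimals, which is why the 8GB disks end up at 0. A rough sketch, sizes assumed:)

    Code:
        #  8 GB:  8 / 1024 = 0.0078 TiB -> rounds down to 0.00, so the OSD gets no data
        # 10 GB: 10 / 1024 = 0.0098 TiB -> rounds to 0.01
        ceph osd tree    # the WEIGHT column shows the resulting value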

    Udo
     
  8. Konstantinos Pappas

    Konstantinos Pappas New Member

    Joined:
    Jan 7, 2015
    Messages:
    27
    Likes Received:
    0
    I want to say a big thank you to Udo and especially to Wasim.
    Wasim helped a lot in understanding my mistake: the problem was the disks. The default size is 10GB, but in my demo environment I used 8GB, so Ceph could not weight them properly.
    Thanks a lot again.
     