
[SOLVED] CEPH - rbd error: rbd: couldn't connect to cluster (500)

Discussion in 'Proxmox VE: Installation and configuration' started by alchemyx, Apr 14, 2014.

  1. alchemyx

    alchemyx New Member

    Joined:
    Apr 2, 2014
    Messages:
    7
    Likes Received:
    0
    Hello, I did the CEPH installation according to this: http://pve.proxmox.com/wiki/Ceph_Server. Everything went fine. I did all the steps and CEPH seems to work:
    Code:
    root@proxmox-A:/etc/pve/priv/ceph# pveceph lspools
    Name                       size     pg_num                 used
    data                          2         64                    0
    metadata                      2         64                    0
    rbd                           2         64                    0
    storage.cfg:
    Code:
    rbd: rbd
        monhost 10.10.10.1,10.10.10.2,10.10.10.3
        pool rbd
        content images
        username admin
    dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0
    But when I click on the RBD entry I get: rbd error: rbd: couldn't connect to cluster (500). Do you maybe have some clues how to solve that? Thanks!
     
    #1 alchemyx, Apr 14, 2014
    Last edited: Apr 14, 2014
  2. alchemyx

    alchemyx New Member

    Joined:
    Apr 2, 2014
    Messages:
    7
    Likes Received:
    0
    So one thing was wrong: how I defined the monitor list (it should be spaces, not commas). I also changed the storage name to avoid some weird issues, but it still does not help:
    Code:
    dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0
    rbd: shared
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool rbd
        content images
        username admin
    PS. I have no idea why it formats so weirdly here, so I'm also pasting it to pastebin: http://pastebin.com/Ee0S3GD0

    OK, solved now. Most stupid error ever: a typo in the key name, /etc/pve/priv/ceph/shred.keyring instead of shared.keyring. Sorry!
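    The fix above boils down to renaming the keyring so its file name matches the storage id ('shared'). A minimal sketch; it uses a temp directory as a stand-in for the real /etc/pve/priv/ceph, so it is safe to run anywhere:

    ```shell
    # Sketch of the fix: the keyring file name must be <storage-id>.keyring,
    # where <storage-id> is the word after 'rbd:' in storage.cfg ('shared' here).
    # A temp dir stands in for /etc/pve/priv/ceph.
    priv=$(mktemp -d)
    touch "$priv/shred.keyring"                        # the mistyped file
    mv "$priv/shred.keyring" "$priv/shared.keyring"    # rename to match 'rbd: shared'
    ls "$priv"                                         # prints: shared.keyring
    rm -rf "$priv"
    ```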
     
    #2 alchemyx, Apr 14, 2014
    Last edited: Apr 14, 2014
  3. MACscr

    MACscr Member

    Joined:
    Mar 19, 2013
    Messages:
    87
    Likes Received:
    2
    Are we sure it's supposed to be spaces? The wiki actually has semicolons between them:

    Code:
    # from /etc/pve/storage.cfg
    rbd: my-ceph-storage
         monhost 10.10.10.1;10.10.10.2;10.10.10.3
         pool rbd
         content images
         username admin
    Though I'm getting the same connection error as well, with spaces or semicolons. =/
     
  4. MACscr

    MACscr Member

    Joined:
    Mar 19, 2013
    Messages:
    87
    Likes Received:
    2
    Never mind, resolved. The issue was the keyring as well. I thought that was only needed if you didn't use the GUI. Sad they don't automate that. Really seems half-baked.
     
  5. MACscr

    MACscr Member

    Joined:
    Mar 19, 2013
    Messages:
    87
    Likes Received:
    2
    So others don't waste time, I had to do the following in my storage.cfg for the GUI to connect to it:

    Code:
    rbd: ceph-ssd
        monhost 10.10.0.100:6789 10.10.0.104:6789 10.10.0.108:6789
        content images,rootdir
        pool rbd
        krbd
        username admin
    EDIT: I lied. I was on the wrong tab. I still get the same 500 error as you did originally. So weird.
     
    #5 MACscr, Nov 23, 2015
    Last edited: Nov 24, 2015
  6. D.P.

    D.P. New Member

    Joined:
    May 13, 2016
    Messages:
    19
    Likes Received:
    0
  7. tomas666

    tomas666 New Member

    Joined:
    Sep 9, 2016
    Messages:
    2
    Likes Received:
    0
    Hi,
    I'm getting the same error: rbd: couldn't connect to cluster
    Proxmox 4.2
    I've tried almost everything.
    The keyring is present.
    From the command line everything works.
    It's just that the web GUI doesn't see the available space, so I can't use it for VMs.
    The content is unavailable, hence rbd: couldn't connect to cluster (500).
     
  8. tomas666

    tomas666 New Member

    Joined:
    Sep 9, 2016
    Messages:
    2
    Likes Received:
    0
    And please clarify: what is the right syntax for the monitor hosts?
    Semicolons? Spaces? Commas? With or without ports?
    BTW, I'm using a mesh network for ceph; nevertheless, the network and ceph work.
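    For what it's worth, all three monhost spellings below are reported as working somewhere in this thread (plain spaces, semicolons, and semicolons with explicit ports); in every case that got resolved here, the keyring file name, not the separator, turned out to be the actual culprit:

    ```
    # monhost variants reported working in this thread
    monhost 10.10.10.1 10.10.10.2 10.10.10.3
    monhost 10.10.10.1;10.10.10.2;10.10.10.3
    monhost 10.10.10.1:6789;10.10.10.2:6789;10.10.10.3:6789
    ```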
     
  9. Davyd

    Davyd New Member

    Joined:
    Apr 8, 2016
    Messages:
    13
    Likes Received:
    0
    For me, everything is working fine with this storage config, without ports:
    Code:
    rbd: ceph-rbd
            monhost 10.200.0.3;10.200.0.1;10.200.0.2
            username admin
            content images
            pool rbd
    
    When I built this cluster, I had the same problem, but it was my mistake with the keyring file location.
    Or you can disable ceph auth by setting these lines in ceph.conf:
    Code:
    [global]
             auth client required = none
             auth cluster required = none
             auth service required = none
    
     
  10. trekkygeek

    trekkygeek New Member

    Joined:
    Oct 23, 2016
    Messages:
    7
    Likes Received:
    0
    Copy the keyring into /etc/pve/priv/ceph and rename it to the same name as your storage ID (the name after 'rbd:' in storage.cfg, as in the example below).

    Ex.:

    storage.cfg:
    rbd: ceph-ssd
    monhost x.x.x.x:6789;y.y.y.y:6789;z.z.z.z:6789
    content rootdir,images
    username admin
    pool ssd

    rbd: ceph-sata
    monhost x.x.x.x:6789;y.y.y.y:6789;z.z.z.z:6789
    content rootdir,images
    username admin
    pool sata

    #/etc/pve/priv/ceph# ls -l
    total 1
    -rw------- 1 root www-data 63 Nov 17 23:51 ceph-sata.keyring
    -rw------- 1 root www-data 63 Nov 17 23:51 ceph-ssd.keyring
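    The layout above can be produced with two copies of the admin keyring, one per storage entry. A sketch, using temp stand-ins for the real paths (the real source is /etc/ceph/ceph.client.admin.keyring, the real destination /etc/pve/priv/ceph) so nothing live is touched:

    ```shell
    # One keyring per storage entry, named <storage-id>.keyring.
    src=$(mktemp)          # stand-in for /etc/ceph/ceph.client.admin.keyring
    dst=$(mktemp -d)       # stand-in for /etc/pve/priv/ceph
    cp "$src" "$dst/ceph-ssd.keyring"
    cp "$src" "$dst/ceph-sata.keyring"
    ls "$dst"              # lists both keyrings
    rm -rf "$src" "$dst"
    ```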
     
  11. aschmitt

    aschmitt New Member

    Joined:
    Dec 21, 2016
    Messages:
    4
    Likes Received:
    0
    Quick question: where is the actual key that we have to copy located?
     
  12. aschmitt

    aschmitt New Member

    Joined:
    Dec 21, 2016
    Messages:
    4
    Likes Received:
    0
    haha. Never mind, I found it in the Ceph Server documentation. For the record, I'll copy it here for anyone else:

    You also need to copy the keyring to a predefined location.
    Note that the file name needs to be the storage id + .keyring. The storage id is the expression after 'rbd:' in /etc/pve/storage.cfg, which is my-ceph-storage in the current example.
    # cd /etc/pve/priv/
    # mkdir ceph
    # cp /etc/ceph/ceph.client.admin.keyring ceph/my-ceph-storage.keyring
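    A quick way to double-check which file name PVE will look for (a sketch; the awk one-liner is mine, not from the docs): print the expected keyring path for every 'rbd:' entry. Here it reads a here-doc sample; on a real node, point it at /etc/pve/storage.cfg instead.

    ```shell
    # Print the keyring path expected for each 'rbd:' storage entry.
    awk '/^rbd:/ {print "/etc/pve/priv/ceph/" $2 ".keyring"}' <<'EOF'
    rbd: my-ceph-storage
         monhost 10.10.10.1;10.10.10.2;10.10.10.3
         pool rbd
         content images
         username admin
    EOF
    # prints: /etc/pve/priv/ceph/my-ceph-storage.keyring
    ```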
     
  13. RealVaVa

    RealVaVa New Member

    Joined:
    Jun 22, 2017
    Messages:
    13
    Likes Received:
    0
    I have the same trouble.

    After copying the keyring to the /etc/pve/priv/ceph directory on the first node, the problem was fixed, but only on the first node.
    Example:

    cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-rbd.keyring

    ceph-rbd is the name of my pool.

    How can I fix that on the second node?
     
