Ceph on Proxmox 4.3

badweezy

New Member
Nov 16, 2016
Hello,


I am using Proxmox 4.3 with 3 nodes. I have a few problems when trying to install Ceph:

- In the wiki, they say “execute the following on all: pveceph install -version hammer”, but when I run it I get an error message: “unable to download ceph release key: 500 Can't connect to git.ceph.com:443”. What can I do about this?

- I installed Ceph despite this problem. Everything went OK, but now I can't access my datastore; I get this error: "rbd error: couldn't connect to the cluster! (500)"

Here's my /etc/pve/storage.cfg:
Code:
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

rbd: Storage
        monhost 10.0.7.105:6789 10.0.7.106:6789 10.0.7.107:6789
        pool Pool
        content rootdir,images
        krbd 0
        content images
        username admin (optional, default = admin)

And here's my /etc/pve/ceph.conf:

Code:
root@pve:~# cat /etc/pve/ceph.conf
[global]
         auth client required = cephx
         auth cluster required = cephx
         auth service required = cephx
         cluster network = 10.0.7.0/24
         filestore xattr use omap = true
         fsid = 14bfa5b2-a46d-479e-9df6-b29232158f46
         keyring = /etc/pve/priv/$cluster.$name.keyring
         osd journal size = 5120
         osd pool default min size = 1
         public network = 191.230.72.0/24

[osd]
         keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.2]
         host = pve2
         mon addr = 10.0.7.107:6789

[mon.1]
         host = pve1
         mon addr = 10.0.7.106:6789

[mon.0]
         host = pve
         mon addr = 10.0.7.105:6789
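
Since the monitor addresses are listed in this ceph.conf, a quick reachability check looks like this (just a sketch, assuming nc is installed on the node):

Code:
# check that each monitor from ceph.conf answers on port 6789
for m in 10.0.7.105 10.0.7.106 10.0.7.107; do
    nc -zv -w 2 $m 6789
done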

I've already done this step:

Code:
cd /etc/pve/priv/
mkdir ceph
cp /etc/ceph/ceph.client.admin.keyring ceph/Storage.keyring
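
As a quick sanity check (just a sketch, assuming the storage really is named "Storage" as in the storage.cfg above), the keyring file name must match the storage ID exactly:

Code:
# the file name has to match the storage ID from storage.cfg ("Storage" here)
ls -l /etc/pve/priv/ceph/Storage.keyring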

I don't know where it went wrong... Can you please help me?

Thanks a lot
 
Hi,
sounds like you don't permit HTTPS access to the outside?

And due to this error I assume that you installed the Debian default packages?
What is the output of
Code:
ceph -v
dpkg -l | grep ceph
Udo
 
Hi Udo,

Thanks for your answer.

My proxy configuration permits HTTPS access to the outside. I will check that.
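
A quick way to test this from the node, as a sketch only (the proxy address here is hypothetical and depends on the environment), is to export the proxy variable and try fetching the Ceph release key, which is also published on download.ceph.com:

Code:
# hypothetical proxy address - replace with your own
export https_proxy=http://proxy.example.com:3128
# test outbound HTTPS to the hosts pveceph needs
curl -I --max-time 10 https://git.ceph.com
curl -sS --max-time 10 https://download.ceph.com/keys/release.asc | head -n 3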

Here's the return for ceph -v :

Code:
root@pve:~# ceph -v
ceph version 0.94.9 (fe6d859066244b97b24f09d46552afc2071e6f90)

And for dpkg -l | grep ceph

Code:
root@pve:~# dpkg -l | grep ceph
ii  ceph                                 0.94.9-1~bpo80+1               amd64        distributed storage and file system
ii  ceph-common                          0.94.9-1~bpo80+1               amd64        common utilities to mount and interact with a ceph storage cluster
ii  ceph-deploy                          1.5.35                         all          Ceph-deploy is an easy to use configuration tool
ii  ceph-fs-common                       0.94.9-1~bpo80+1               amd64        common utilities to mount and interact with a ceph file system
ii  ceph-fuse                            0.94.9-1~bpo80+1               amd64        FUSE-based client for the Ceph distributed file system
ii  ceph-mds                             0.94.9-1~bpo80+1               amd64        metadata server for the ceph distributed file system
ii  libcephfs1                           0.94.9-1~bpo80+1               amd64        Ceph distributed file system client library
ii  python-ceph                          0.94.9-1~bpo80+1               amd64        Meta-package for python libraries for the Ceph libraries
ii  python-cephfs                        0.94.9-1~bpo80+1               amd64        Python libraries for the Ceph libcephfs library
 
I've installed ceph-deploy for some tests, should I remove it?

For ceph -s:

Code:
root@pve:~# ceph -s
2016-11-16 17:01:36.460543 7fa6f64ac700  0 librados: client.admin authentication error (95) Operation not supported
Error connecting to cluster: Error

Yes, my pool is called Pool.

Code:
rados lspools
2016-11-16 17:02:56.029139 7f4af2332780  0 librados: client.admin authentication error (95) Operation not supported
couldn't connect to cluster! error -95
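
One thing worth checking at this point, as a sketch based on the ceph.conf shown above: it sets keyring = /etc/pve/priv/$cluster.$name.keyring, which for client.admin expands to /etc/pve/priv/ceph.client.admin.keyring, so that file has to exist and contain a valid key:

Code:
# per the keyring line in ceph.conf, $cluster.$name expands to ceph.client.admin
ls -l /etc/pve/priv/ceph.client.admin.keyring
cat /etc/pve/priv/ceph.client.admin.keyring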
 
Hi,
your Ceph isn't running well (or your ceph.conf doesn't fit your running cluster).
What is the output of
Code:
systemctl status ceph
on all ceph-nodes?
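
A convenience sketch, assuming root SSH works between the cluster nodes, to collect this from one shell:

Code:
# node names taken from ceph.conf (pve, pve1, pve2)
for h in pve pve1 pve2; do
    echo "=== $h ==="
    ssh root@$h "systemctl status -l ceph"
done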

Udo

(afk for some hours)
 
Hi Udo,

Here's the output:

On pve:
Code:
root@pve:~# systemctl status ceph
● ceph.service - LSB: Start Ceph distributed file system daemons at boot time
  Loaded: loaded (/etc/init.d/ceph)
  Active: active (exited) since Thu 2016-11-17 08:14:30 CET; 19min ago
  Process: 4338 ExecStop=/etc/init.d/ceph stop (code=exited, status=0/SUCCESS)
  Process: 4519 ExecStart=/etc/init.d/ceph start (code=exited, status=0/SUCCESS)

Nov 17 08:14:29 pve ceph[4519]: create-or-move updated item name 'osd.0' weight 0.07 at location {host=pve...sh map
Nov 17 08:14:29 pve ceph[4519]: Starting Ceph osd.0 on pve...
Nov 17 08:14:30 pve ceph[4519]: Running as unit ceph-osd.0.1479366869.827514540.service.
Nov 17 08:14:30 pve ceph[4519]: === mon.0 ===
Nov 17 08:14:30 pve ceph[4519]: Starting Ceph mon.0 on pve...
Nov 17 08:14:30 pve ceph[4519]: Running as unit ceph-mon.0.1479366870.178975016.service.
Nov 17 08:14:30 pve ceph[4519]: Starting ceph-create-keys on pve...
Nov 17 08:14:30 pve ceph[4519]: === osd.0 ===
Nov 17 08:14:30 pve ceph[4519]: Starting Ceph osd.0 on pve...already running
Nov 17 08:14:30 pve systemd[1]: Started LSB: Start Ceph distributed file system daemons at boot time.
Hint: Some lines were ellipsized, use -l to show in full.

On pve1:

Code:
root@pve1:~# systemctl status ceph
● ceph.service - LSB: Start Ceph distributed file system daemons at boot time
  Loaded: loaded (/etc/init.d/ceph)
  Active: active (exited) since Wed 2016-11-16 14:17:34 CET; 18h ago
  Process: 1298 ExecStart=/etc/init.d/ceph start (code=exited, status=0/SUCCESS)

Nov 16 14:17:27 pve1 ceph[1298]: Starting Ceph mon.1 on pve1...
Nov 16 14:17:27 pve1 ceph[1298]: Running as unit ceph-mon.1.1479302247.581239793.service.
Nov 16 14:17:27 pve1 ceph[1298]: Starting ceph-create-keys on pve1...
Nov 16 14:17:29 pve1 ceph[1298]: === osd.1 ===
Nov 16 14:17:34 pve1 ceph[1298]: 2016-11-16 14:17:34.241590 7f6f5cb62700 -1 monclient: _check_auth_rotatin...41587)
Nov 16 14:17:34 pve1 ceph[1298]: create-or-move updated item name 'osd.1' weight 0.07 at location {host=pv...sh map
Nov 16 14:17:34 pve1 ceph[1298]: Starting Ceph osd.1 on pve1...
Nov 16 14:17:34 pve1 ceph[1298]: Running as unit ceph-osd.1.1479302249.387524136.service.
Nov 16 14:17:34 pve1 systemd[1]: Started LSB: Start Ceph distributed file system daemons at boot time.
Nov 17 08:07:53 pve1 systemd[1]: Started LSB: Start Ceph distributed file system daemons at boot time.
Hint: Some lines were ellipsized, use -l to show in full.

And on pve2:

Code:
root@pve2:~# systemctl status ceph
● ceph.service - LSB: Start Ceph distributed file system daemons at boot time
  Loaded: loaded (/etc/init.d/ceph)
  Active: active (exited) since Wed 2016-11-16 14:17:34 CET; 18h ago
  Process: 1285 ExecStart=/etc/init.d/ceph start (code=exited, status=0/SUCCESS)

Nov 16 14:17:24 pve2 ceph[1285]: Starting Ceph mon.2 on pve2...
Nov 16 14:17:24 pve2 ceph[1285]: Running as unit ceph-mon.2.1479302244.729530037.service.
Nov 16 14:17:24 pve2 ceph[1285]: Starting ceph-create-keys on pve2...
Nov 16 14:17:25 pve2 ceph[1285]: === osd.2 ===
Nov 16 14:17:26 pve2 ceph[1285]: 2016-11-16 14:17:26.856772 7f308033a700  0 -- :/3748919227 >> 10.0.7.105:....fault
Nov 16 14:17:34 pve2 ceph[1285]: create-or-move updated item name 'osd.2' weight 0.07 at location {host=pv...sh map
Nov 16 14:17:34 pve2 ceph[1285]: Starting Ceph osd.2 on pve2...
Nov 16 14:17:34 pve2 ceph[1285]: Running as unit ceph-osd.2.1479302245.930876200.service.
Nov 16 14:17:34 pve2 systemd[1]: Started LSB: Start Ceph distributed file system daemons at boot time.
Nov 17 08:07:55 pve2 systemd[1]: Started LSB: Start Ceph distributed file system daemons at boot time.
Hint: Some lines were ellipsized, use -l to show in full.
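
Per the hint in the output, the full, non-truncated lines can be shown like this:

Code:
# show the un-ellipsized status output
systemctl status -l ceph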
 
Here's my ceph -s after a restart:

Code:
root@pve:~# ceph -s
    cluster 14bfa5b2-a46d-479e-9df6-b29232158f46
     health HEALTH_OK
     monmap e3: 3 mons at {0=10.0.7.105:6789/0,1=10.0.7.106:6789/0,2=10.0.7.107:6789/0}
            election epoch 118, quorum 0,1,2 0,1,2
     osdmap e151: 3 osds: 3 up, 3 in
      pgmap v484: 292 pgs, 4 pools, 0 bytes data, 0 objects
            126 MB used, 224 GB / 224 GB avail
                 292 active+clean
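
With the cluster reporting HEALTH_OK, a quick way to confirm that the pool and the admin key actually work from this node (a sketch, with "Pool" being the pool name used in storage.cfg):

Code:
# list pools and the RBD images in the pool used by the storage
rados lspools
rbd -p Pool ls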
 
Hello, sorry for the intrusion. I have 2 nodes + a 3rd virtualized node just for quorum purposes (Proxmox HA). Can I set up Ceph in this configuration without having the actual data replicated to the third node, since it's just a virtual machine? Thank you!
 
Here's my ceph -s after a restart:

Code:
root@pve:~# ceph -s
    cluster 14bfa5b2-a46d-479e-9df6-b29232158f46
     health HEALTH_OK
     monmap e3: 3 mons at {0=10.0.7.105:6789/0,1=10.0.7.106:6789/0,2=10.0.7.107:6789/0}
            election epoch 118, quorum 0,1,2 0,1,2
     osdmap e151: 3 osds: 3 up, 3 in
      pgmap v484: 292 pgs, 4 pools, 0 bytes data, 0 objects
            126 MB used, 224 GB / 224 GB avail
                 292 active+clean
Hi,
ok - looks good now.
Can you now see the stats of the Ceph pool inside the Proxmox GUI? (space used, free)

Udo
 
Hello, sorry for the intrusion. I have 2 nodes + a 3rd virtualized node just for quorum purposes (Proxmox HA). Can I set up Ceph in this configuration without having the actual data replicated to the third node, since it's just a virtual machine? Thank you!

I can't really help you; I'm a newbie in Proxmox...

Hi,
ok - looks good now.
Can you now see the stats of the Ceph pool inside the Proxmox GUI? (space used, free)

Udo

I still can't access my storage; I still get the rbd error. I really don't know where I went wrong.

Here's a quick look at my pools via the GUI:

Screenshot: http://www.hostingpics.net/viewer.php?id=90497420161117154723pveProxmoxVirtualEnvironmentfourniparCLEMESSYDSI29022016.png
 
I can't really help you; I'm a newbie in Proxmox...



I still can't access my storage; I still get the rbd error. I really don't know where I went wrong.

Here's a quick look of my pools via gui:
Hi,
OK, let's verify the next step...

Edit your /etc/pve/storage.cfg so that the rbd section looks like this:
Code:
rbd: Storage
        monhost 10.0.7.105;10.0.7.106;10.0.7.107
        pool Pool
        content images
        krbd 0
        username admin
Verify that the key is the right one. Compare the key and name between both outputs:
Code:
ceph auth get client.admin

cat /etc/pve/priv/ceph/Storage.keyring
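
If the key or name differ, one way to fix it (a sketch only, following the copy step from earlier in the thread) is to overwrite the storage keyring with the current admin keyring:

Code:
# re-copy the admin keyring under the storage name used in storage.cfg
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/Storage.keyring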
Udo
 
