Proxmox Ceph Cluster - OSDs not detected

Neb

Renowned Member
Apr 27, 2017
Hi,

My issue is very strange.

I have a cluster of 3 physical servers running Proxmox VE 4.4. I installed Ceph on each node with the 'pveceph install -version hammer' command. I have 3 monitors, one on each node of the cluster.

Each server has one 1TB disk and another 300GB disk where Proxmox is installed. I would like to create the OSDs on the 1TB disks for my Ceph cluster. I ran 'pveceph createosd /dev/sdb' on each node, but I can't see any OSD disks in the GUI.
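For reference, a quick way one might check on each node whether 'pveceph createosd' actually prepared the disk (assuming /dev/sdb is the 1TB disk, as above):

Code:
# a prepared OSD disk should show a data partition and a journal partition
lsblk /dev/sdb

# let ceph-disk report what it finds on the disks (available on hammer-era nodes)
ceph-disk list

# check whether an OSD directory was created and mounted
ls /var/lib/ceph/osd/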

/etc/pve/ceph.conf looks like this:
Code:
[global]
    auth client required = cephx
    auth cluster required = cephx
    auth service required = cephx
    cluster network = 10.51.1.0/24
    filestore xattr use omap = true
    fsid = 1897e77f-f5bf-4766-9458-f65227f1ba72
    keyring = /etc/pve/priv/$cluster.$name.keyring
    osd journal size = 5120
    osd pool default min size = 1
    public network = 10.51.1.0/24

[osd]
    keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.0]
    host = px-node-1
    mon addr = 10.51.1.11:6789

[mon.2]
    host = px-node-3
    mon addr = 10.51.1.13:6789

[mon.1]
    host = px-node-2
    mon addr = 10.51.1.12:6789

There are no files or folders in /var/lib/ceph/osd. I don't think that's normal.

I get the impression that no OSDs were created on my cluster. Am I wrong?
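To double-check whether any OSDs were registered with the cluster at all (as opposed to only being prepared on disk), these standard Ceph commands should show the count and the tree; they are generic, not specific to this setup:

Code:
# number of OSDs known to the OSD map, and how many are up/in
ceph osd stat

# placement of the OSDs in the CRUSH map
ceph osd tree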

Thanks

pveversion -v:

Code:
proxmox-ve: 4.4-76 (running kernel: 4.4.35-1-pve)
pve-manager: 4.4-1 (running version: 4.4-1/eb2d6f1e)
pve-kernel-4.4.35-1-pve: 4.4.35-76
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-101
pve-firmware: 1.1-10
libpve-common-perl: 4.0-83
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-70
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.0-9
pve-container: 1.0-88
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.6-2
lxcfs: 2.0.5-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
ceph: 0.80.7-2+deb8u2

(sorry for my english)
 
I'm adding the result of 'pveceph status' here:

Code:
{
   "fsid" : "1897e77f-f5bf-4766-9458-f65227f1ba72",
   "mdsmap" : {
      "by_rank" : [],
      "in" : 0,
      "max" : 1,
      "up" : 0,
      "epoch" : 1
   },
   "quorum" : [
      0,
      1,
      2
   ],
   "pgmap" : {
      "bytes_total" : 0,
      "version" : 15,
      "num_pgs" : 192,
      "pgs_by_state" : [
         {
            "count" : 192,
            "state_name" : "creating"
         }
      ],
      "bytes_avail" : 0,
      "data_bytes" : 0,
      "bytes_used" : 0
   },
   "quorum_names" : [
      "0",
      "1",
      "2"
   ],
   "health" : {
      "overall_status" : "HEALTH_WARN",
      "health" : {
         "health_services" : [
            {
               "mons" : [
                  {
                     "name" : "0",
                     "kb_avail" : 66288764,
                     "health" : "HEALTH_OK",
                     "kb_used" : 1652564,
                     "avail_percent" : 92,
                     "store_stats" : {
                        "bytes_sst" : 0,
                        "bytes_log" : 186209,
                        "bytes_total" : 257012,
                        "last_updated" : "0.000000",
                        "bytes_misc" : 70803
                     },
                     "last_updated" : "2017-05-15 11:30:36.945317",
                     "kb_total" : 71601512
                  },
                  {
                     "kb_avail" : 66272324,
                     "name" : "1",
                     "health" : "HEALTH_OK",
                     "avail_percent" : 92,
                     "kb_used" : 1669004,
                     "kb_total" : 71601512,
                     "last_updated" : "2017-05-15 11:30:32.870962",
                     "store_stats" : {
                        "bytes_sst" : 0,
                        "bytes_total" : 364933,
                        "bytes_log" : 273185,
                        "last_updated" : "0.000000",
                        "bytes_misc" : 91748
                     }
                  },
                  {
                     "avail_percent" : 92,
                     "kb_used" : 1652204,
                     "kb_total" : 71601512,
                     "last_updated" : "2017-05-15 11:30:55.559906",
                     "store_stats" : {
                        "bytes_misc" : 90303,
                        "last_updated" : "0.000000",
                        "bytes_log" : 273185,
                        "bytes_total" : 363488,
                        "bytes_sst" : 0
                     },
                     "kb_avail" : 66289124,
                     "name" : "2",
                     "health" : "HEALTH_OK"
                  }
               ]
            }
         ]
      },
      "summary" : [
         {
            "severity" : "HEALTH_WARN",
            "summary" : "192 pgs stuck inactive"
         },
         {
            "summary" : "192 pgs stuck unclean",
            "severity" : "HEALTH_WARN"
         }
      ],
      "detail" : [
         "mon.1 addr 10.51.1.12:6789/0 clock skew 0.06402s > max 0.05s (latency 0.00726541s)"
      ],
      "timechecks" : {
         "epoch" : 12,
         "mons" : [
            {
               "name" : "0",
               "skew" : "0.000000",
               "health" : "HEALTH_OK",
               "latency" : "0.000000"
            },
            {
               "details" : "clock skew 0.06402s > max 0.05s",
               "name" : "1",
               "skew" : "-0.064020",
               "health" : "HEALTH_WARN",
               "latency" : "0.007265"
            },
            {
               "latency" : "0.007847",
               "health" : "HEALTH_OK",
               "skew" : "0.009226",
               "name" : "2"
            }
         ],
         "round_status" : "finished",
         "round" : 12
      }
   },
   "monmap" : {
      "created" : "2017-05-15 10:27:56.671400",
      "fsid" : "1897e77f-f5bf-4766-9458-f65227f1ba72",
      "epoch" : 3,
      "modified" : "2017-05-15 10:28:51.539686",
      "mons" : [
         {
            "addr" : "10.51.1.11:6789/0",
            "name" : "0",
            "rank" : 0
         },
         {
            "name" : "1",
            "addr" : "10.51.1.12:6789/0",
            "rank" : 1
         },
         {
            "rank" : 2,
            "addr" : "10.51.1.13:6789/0",
            "name" : "2"
         }
      ]
   },
   "election_epoch" : 12,
   "osdmap" : {
      "osdmap" : {
         "num_in_osds" : 0,
         "nearfull" : false,
         "epoch" : 14,
         "num_up_osds" : 0,
         "num_osds" : 6,
         "full" : false
      }
   }
}
 
Did you do an "apt-get update" and then an "apt-get dist-upgrade" before installing Ceph?
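For clarity, the sequence being asked about would look something like this on each node (standard Debian/PVE commands, shown only as an illustration of the order):

Code:
apt-get update
apt-get dist-upgrade
pveceph install -version hammer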
 
Yes MajorD, I ran update and dist-upgrade beforehand.

----------------------

UP.

Now I have 6 OSDs on my cluster.

When I run 'ceph osd tree --format=json-pretty', I get this result:

Code:
{ "nodes": [
        { "id": -1,
          "name": "default",
          "type": "root",
          "type_id": 10,
          "children": [
                -2]},
        { "id": -2,
          "name": "px-node-3",
          "type": "host",
          "type_id": 1,
          "children": [
                4]},
        { "id": 4,
          "name": "osd.4",
          "exists": 1,
          "type": "osd",
          "type_id": 0,
          "status": "down",
          "reweight": "0.000000",
          "crush_weight": "0.989990",
          "depth": 2}],
  "stray": [
        { "id": 0,
          "name": "osd.0",
          "exists": 1,
          "type": "osd",
          "type_id": 0,
          "status": "down",
          "reweight": "0.000000",
          "id": 1,
          "name": "osd.1",
          "exists": 1,
          "type": "osd",
          "type_id": 0,
          "status": "down",
          "reweight": "0.000000",
          "id": 2,
          "name": "osd.2",
          "exists": 1,
          "type": "osd",
          "type_id": 0,
          "status": "down",
          "reweight": "0.000000",
          "id": 3,
          "name": "osd.3",
          "exists": 1,
          "type": "osd",
          "type_id": 0,
          "status": "down",
          "reweight": "0.000000",
          "id": 5,
          "name": "osd.5",
          "exists": 1,
          "type": "osd",
          "type_id": 0,
          "status": "down",
          "reweight": "0.000000"}]}

Code:
16:07:43 ~ # ceph osd tree                                                   
# id    weight    type name    up/down    reweight
-1    0    root default
0    0    osd.0    down    0

Why do I have one OSD in the "nodes" section and my other OSDs in the "stray" section? Moreover, their status is displayed as "down". I don't understand. Does anyone know why?
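If it helps: an OSD ends up under "stray" when it exists in the OSD map but has no position in the CRUSH map, and it stays "down" until its daemon is actually running. A rough sketch of what one could try on each node; osd.0, the weight 0.99 and the host px-node-1 below are placeholders for illustration, not a confirmed fix:

Code:
# activate any OSD partitions that were prepared but never mounted (hammer uses ceph-disk)
ceph-disk activate-all

# start the OSD daemon for a given id (sysvinit-style service on PVE 4.x)
service ceph start osd.0

# once the daemon runs, give the OSD a position under its host in the CRUSH map
ceph osd crush create-or-move osd.0 0.99 root=default host=px-node-1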

Thanks
 