OSDs created but not showing in GUI

lifeboy

Renowned Member
3 nodes, all running PVE 4.4-86 (the latest).
First node: S1 - 3 OSDs
Second node: S2 - 1 OSD
Third node: H1 - 11 OSDs

H1 has an HP SmartArray storage controller, so I had to modify the source to allow cciss drives to show in the GUI, as per https://pve.proxmox.com/wiki/Ceph_Server#Note_for_users_of_HP_SmartArray_controllers.
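
As far as I understand, the reason the stock GUI skips these disks is simply the device naming; the cciss driver doesn't produce sdX-style names, so the disk scan never picks them up:

Code:
# HP SmartArray logical drives under the cciss driver:
ls /sys/block      # cciss!c0d0, cciss!c0d1, ... instead of sda, sdb, ...
ls /dev/cciss      # c0d0, c0d0p1, c0d1, ...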

The H1 OSDs show in the Proxmox GUI, but the other two nodes' OSDs don't.

Also, the H1 OSDs don't start.

I started with S1, installed Proxmox, and have some VMs running on LVM volumes. Now I'm adding Ceph to the mix and want to add another six or so servers to the cluster; S2 and H1 are the first two. Once they are working properly, I'll add one machine at a time, but I can't figure out what's wrong.

Here's what I see:

[Screenshot: upload_2017-4-5_23-55-58.png]

However, not all the drives show up either:

[Screenshot: upload_2017-4-5_23-57-58.png]

Here are the S1 drives:

[Screenshot: upload_2017-4-5_23-59-39.png]

The config files are:

S1:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 192.168.121.0/24
filestore xattr use omap = true
fsid = 958a6dff-15d3-4a04-b90a-234a886c19d9
keyring = /etc/pve/priv/$cluster.$name.keyring
osd journal size = 5120
osd pool default min size = 1
osd_crush_update_on_start = 1
public network = 192.168.121.0/24

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.0]
host = s1
mon addr = 192.168.121.33:6789

[mon.2]
host = h1
mon addr = 192.168.121.30:6789

[mon.1]
host = s2
mon addr = 192.168.121.32:6789

The other two nodes have identical config files.

How do I get the GUI to show the OSDs properly?

I want to use Ceph as RBD storage.
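
For reference, once the OSDs are sorted out I expect the storage definition in /etc/pve/storage.cfg to look roughly like this (the storage ID and pool name are only placeholders), plus copying the admin keyring to /etc/pve/priv/ceph/<storage-id>.keyring as described in the wiki:

Code:
rbd: ceph-rbd
        monhost 192.168.121.33 192.168.121.32 192.168.121.30
        pool rbd
        content images
        username admin
        krbd 0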
 
Hi,

Which version are you on?
Please post the output of pveversion -v
 
$ pveversion -v
proxmox-ve: 4.4-86 (running kernel: 4.4.49-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.8-1-pve: 4.4.8-52
pve-kernel-4.4.35-2-pve: 4.4.35-79
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.4.10-1-pve: 4.4.10-54
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.40-1-pve: 4.4.40-82
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-49
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-94
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-97
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
ceph: 10.2.2-1~bpo80+1
 
Can you post the output of

Code:
ceph osd dump --format=json-pretty
 
also

Code:
ceph osd tree --format=json-pretty
ceph osd crush tree
ceph osd crush dump
please
 
# ceph osd tree --format=json-pretty

{
"nodes": [
{
"id": -1,
"name": "default",
"type": "root",
"type_id": 10,
"children": [
-2
]
},
{
"id": -2,
"name": "h1",
"type": "host",
"type_id": 1,
"children": [
9,
14,
13,
12,
11,
10,
8,
7,
6
]
},
{
"id": 6,
"name": "osd.6",
"type": "osd",
"type_id": 0,
"crush_weight": 1.809998,
"depth": 2,
"exists": 1,
"status": "down",
"reweight": 1.000000,
"primary_affinity": 1.000000
},
{
"id": 7,
"name": "osd.7",
"type": "osd",
"type_id": 0,
"crush_weight": 1.809998,
"depth": 2,
"exists": 1,
"status": "down",
"reweight": 1.000000,
"primary_affinity": 1.000000
},
{
"id": 8,
"name": "osd.8",
"type": "osd",
"type_id": 0,
"crush_weight": 0.899994,
"depth": 2,
"exists": 1,
"status": "down",
"reweight": 1.000000,
"primary_affinity": 1.000000
},
{
"id": 10,
"name": "osd.10",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2,
"exists": 1,
"status": "down",
"reweight": 1.000000,
"primary_affinity": 1.000000
},
{
"id": 11,
"name": "osd.11",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2,
"exists": 1,
"status": "down",
"reweight": 1.000000,
"primary_affinity": 1.000000
},
{
"id": 12,
"name": "osd.12",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2,
"exists": 1,
"status": "down",
"reweight": 1.000000,
"primary_affinity": 1.000000
},
{
"id": 13,
"name": "osd.13",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2,
"exists": 1,
"status": "down",
"reweight": 1.000000,
"primary_affinity": 1.000000
},
{
"id": 14,
"name": "osd.14",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2,
"exists": 1,
"status": "down",
"reweight": 1.000000,
"primary_affinity": 1.000000
},
{
"id": 9,
"name": "osd.9",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2,
"exists": 1,
"status": "down",
"reweight": 1.000000,
"primary_affinity": 1.000000
}
],
"stray": [
{
"id": 0,
"name": "osd.0",
"type": "osd",
"type_id": 0,
"crush_weight": 0.000000,
"depth": 0,
"exists": 1,
"status": "down",
"reweight": 0.000000,
"primary_affinity": 1.000000
},
{
"id": 1,
"name": "osd.1",
"type": "osd",
"type_id": 0,
"crush_weight": 0.000000,
"depth": 0,
"exists": 1,
"status": "down",
"reweight": 0.000000,
"primary_affinity": 1.000000
},
{
"id": 2,
"name": "osd.2",
"type": "osd",
"type_id": 0,
"crush_weight": 0.000000,
"depth": 0,
"exists": 1,
"status": "down",
"reweight": 0.000000,
"primary_affinity": 1.000000
},
{
"id": 3,
"name": "osd.3",
"type": "osd",
"type_id": 0,
"crush_weight": 0.000000,
"depth": 0,
"exists": 1,
"status": "down",
"reweight": 0.000000,
"primary_affinity": 1.000000
},
{
"id": 4,
"name": "osd.4",
"type": "osd",
"type_id": 0,
"crush_weight": 0.000000,
"depth": 0,
"exists": 1,
"status": "down",
"reweight": 0.000000,
"primary_affinity": 1.000000
},
{
"id": 5,
"name": "osd.5",
"type": "osd",
"type_id": 0,
"crush_weight": 0.000000,
"depth": 0,
"exists": 1,
"status": "down",
"reweight": 0.000000,
"primary_affinity": 1.000000
}
]
}
# ceph osd crush tree
[
{
"id": -1,
"name": "default",
"type": "root",
"type_id": 10,
"items": [
{
"id": -2,
"name": "h1",
"type": "host",
"type_id": 1,
"items": [
{
"id": 6,
"name": "osd.6",
"type": "osd",
"type_id": 0,
"crush_weight": 1.809998,
"depth": 2
},
{
"id": 7,
"name": "osd.7",
"type": "osd",
"type_id": 0,
"crush_weight": 1.809998,
"depth": 2
},
{
"id": 8,
"name": "osd.8",
"type": "osd",
"type_id": 0,
"crush_weight": 0.899994,
"depth": 2
},
{
"id": 10,
"name": "osd.10",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2
},
{
"id": 11,
"name": "osd.11",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2
},
{
"id": 12,
"name": "osd.12",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2
},
{
"id": 13,
"name": "osd.13",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2
},
{
"id": 14,
"name": "osd.14",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2
},
{
"id": 9,
"name": "osd.9",
"type": "osd",
"type_id": 0,
"crush_weight": 0.679993,
"depth": 2
}
]
}
]
}
]
# ceph osd crush dump
{
"devices": [
{
"id": 0,
"name": "device0"
},
{
"id": 1,
"name": "device1"
},
{
"id": 2,
"name": "device2"
},
{
"id": 3,
"name": "device3"
},
{
"id": 4,
"name": "device4"
},
{
"id": 5,
"name": "device5"
},
{
"id": 6,
"name": "osd.6"
},
{
"id": 7,
"name": "osd.7"
},
{
"id": 8,
"name": "osd.8"
},
{
"id": 9,
"name": "osd.9"
},
{
"id": 10,
"name": "osd.10"
},
{
"id": 11,
"name": "osd.11"
},
{
"id": 12,
"name": "osd.12"
},
{
"id": 13,
"name": "osd.13"
},
{
"id": 14,
"name": "osd.14"
}
],
"types": [
{
"type_id": 0,
"name": "osd"
},
{
"type_id": 1,
"name": "host"
},
{
"type_id": 2,
"name": "chassis"
},
{
"type_id": 3,
"name": "rack"
},
{
"type_id": 4,
"name": "row"
},
{
"type_id": 5,
"name": "pdu"
},
{
"type_id": 6,
"name": "pod"
},
{
"type_id": 7,
"name": "room"
},
{
"type_id": 8,
"name": "datacenter"
},
{
"type_id": 9,
"name": "region"
},
{
"type_id": 10,
"name": "root"
}
],
"buckets": [
{
"id": -1,
"name": "default",
"type_id": 10,
"type_name": "root",
"weight": 563606,
"alg": "straw",
"hash": "rjenkins1",
"items": [
{
"id": -2,
"weight": 563606,
"pos": 0
}
]
},
{
"id": -2,
"name": "h1",
"type_id": 1,
"type_name": "host",
"weight": 563606,
"alg": "straw",
"hash": "rjenkins1",
"items": [
{
"id": 6,
"weight": 118620,
"pos": 0
},
{
"id": 7,
"weight": 118620,
"pos": 1
},
{
"id": 8,
"weight": 58982,
"pos": 2
},
{
"id": 10,
"weight": 44564,
"pos": 3
},
{
"id": 11,
"weight": 44564,
"pos": 4
},
{
"id": 12,
"weight": 44564,
"pos": 5
},
{
"id": 13,
"weight": 44564,
"pos": 6
},
{
"id": 14,
"weight": 44564,
"pos": 7
},
{
"id": 9,
"weight": 44564,
"pos": 8
}
]
}
],
"rules": [
{
"rule_id": 0,
"rule_name": "replicated_ruleset",
"ruleset": 0,
"type": 1,
"min_size": 1,
"max_size": 10,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
}
],
"tunables": {
"choose_local_tries": 0,
"choose_local_fallback_tries": 0,
"choose_total_tries": 50,
"chooseleaf_descend_once": 1,
"chooseleaf_vary_r": 1,
"chooseleaf_stable": 0,
"straw_calc_version": 1,
"allowed_bucket_algs": 22,
"profile": "firefly",
"optimal_tunables": 0,
"legacy_tunables": 0,
"minimum_required_version": "firefly",
"require_feature_tunables": 1,
"require_feature_tunables2": 1,
"has_v2_rules": 0,
"require_feature_tunables3": 1,
"has_v3_rules": 0,
"has_v4_buckets": 0,
"require_feature_tunables5": 0,
"has_v5_rules": 0
}
}
 
Not really, no.

I see that you have several OSDs listed as "stray", but how did it get to this?

How did you add the OSDs (the exact command line would be helpful)?

What does ceph status say?
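
For example, on the node that owns one of the down OSDs, something along these lines would show whether the OSD daemons are running at all (Jewel uses systemd units named ceph-osd@<id>; osd.6 is just an example here):

Code:
ceph status
ceph osd stat
systemctl status ceph-osd@6      # on h1, for osd.6
journalctl -u ceph-osd@6 -n 50   # recent log lines if it failed to start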
 
No, I don't know how it got to this. Everything was done via the GUI. What does "stray" even mean? I can't seem to find anything about it.

Of course, I could simply remove all the OSDs and add them again. I believe I've done that before, but I'll do it again, making sure there are no traces of them left after removal before adding them again.
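
Roughly the sequence I plan to follow for each OSD, unless the GUI already covers all of it (osd.0 and /dev/sdX are just examples):

Code:
ceph osd out osd.0
systemctl stop ceph-osd@0        # on the node that owns it
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm osd.0
# then re-create it on a clean disk, e.g.
pveceph createosd /dev/sdX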
 
