Ceph OSD not visible after pveceph createosd

lrab

New Member
Jul 28, 2016
After installing 3 new OSD disks with pveceph createosd /dev/sdf on each Ceph node,
the new OSDs do not show up in the GUI and "ceph osd tree" does not look right:

# ceph osd tree
ID WEIGHT   TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 15.65991 root default
-2  5.21997     host ceph01
 0  1.73999         osd.0        up  1.00000          1.00000
 1  1.73999         osd.1        up  1.00000          1.00000
 6  1.73999         osd.6        up  1.00000          1.00000
-3  5.21997     host ceph02
 2  1.73999         osd.2        up  1.00000          1.00000
 3  1.73999         osd.3        up  1.00000          1.00000
 7  1.73999         osd.7        up  1.00000          1.00000
-4  5.21997     host ceph03
 4  1.73999         osd.4        up  1.00000          1.00000
 5  1.73999         osd.5        up  1.00000          1.00000
 8  1.73999         osd.8        up  1.00000          1.00000
 9        0 osd.9               up  1.00000          1.00000
10        0 osd.10              up  1.00000          1.00000
11        0 osd.11              up  1.00000          1.00000

There is no host assigned to the new OSDs? This is on pve-4.4.13.

# ceph osd stat
osdmap e2174: 12 osds: 12 up, 12 in

The OSDs are not used by Ceph and don't show up in the CRUSH map.
Any ideas?
 
Which Ceph version do you use? What does your ceph.conf look like? Is there anything in the log files under /var/log/ceph/?
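
For reference, that information can be collected on a node with commands along these lines (assuming osd.9 is one of the affected OSDs and the logs are in the default /var/log/ceph/ location):

# ceph --version
# cat /etc/ceph/ceph.conf
# ls -l /var/log/ceph/
# tail -n 50 /var/log/ceph/ceph-osd.9.log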
 
The Ceph version is 0.94.10. osd.9, osd.10 and osd.11 are not usable.

# cat /etc/ceph/ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
filestore xattr use omap = true
fsid = xxxx....
keyring = /etc/pve/priv/$cluster.$name.keyring
osd journal size = 5120
osd pool default min size = 1
public network = 10.88.88.0/24
cluster network = 10.88.80.0/24

[client]
rbd_cache_writethrough_until_flush = true
rbd_cache_size = 2147483648
rbd_cache = true

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
osd crush update on start = false
osd disk thread ioprio priority = 7
osd disk thread ioprio class = idle
osd client op priority = 63
osd snap trim sleep = 0.25
osd recovery op priority = 1
osd recovery max active = 1
osd recovery threads = 1
osd scrub begin hour = 1
osd scrub end hour = 7
osd mount options xfs = rw,noatime,nodiratime,inode64,logbsize=256k
osd max backfills = 1
osd max scrubs = 1
osd_op_num_shards = 10
osd_enable_op_tracker = "false"
filestore_fd_cache_shards = 32
filestore_fd_cache_size = 64
filestore_max_inline_xattr_size = 0
filestore_max_inline_xattr_size_xfs = 0
filestore_max_sync_interval = 10
filestore_omap_header_cache_size = 409600
filestore_queue_max_bytes = 1048576000
filestore_queue_max_ops = 500
journal_max_write_bytes = 1048576000
journal_max_write_entries = 1000
journal_queue_max_bytes = 1048576000
journal_queue_max_ops = 3000
filestore_wbthrottle_enable = "false"

[mon.0]
host = ceph01
mon addr = 10.88.88.101:6789

[mon.2]
host = ceph03
mon addr = 10.88.88.103:6789

[mon.1]
host = ceph02
mon addr = 10.88.88.102:6789


[osd.0]
host = ceph01
public addr = 10.88.88.101
cluster addr = 10.88.80.101
[osd.1]
host = ceph01
public addr = 10.88.88.101
cluster addr = 10.88.80.101
[osd.2]
host = ceph02
public addr = 10.88.88.102
cluster addr = 10.88.80.102
[osd.3]
host = ceph02
public addr = 10.88.88.102
cluster addr = 10.88.80.102
[osd.4]
host = ceph03
public addr = 10.88.88.103
cluster addr = 10.88.80.103
[osd.5]
host = ceph03
public addr = 10.88.88.103
cluster addr = 10.88.80.103
[osd.6]
host = ceph01
public addr = 10.88.88.101
cluster addr = 10.88.80.101
[osd.7]
host = ceph02
public addr = 10.88.88.102
cluster addr = 10.88.80.102
[osd.8]
host = ceph03
public addr = 10.88.88.103
cluster addr = 10.88.80.103
[osd.9]
host = ceph01
public addr = 10.88.88.101
cluster addr = 10.88.80.101
[osd.10]
host = ceph03
public addr = 10.88.88.103
cluster addr = 10.88.80.103
[osd.11]
host = ceph02
public addr = 10.88.88.102
cluster addr = 10.88.80.102

Thanks,
Lutz
 
From the logfile:

c=0x5c529a0).accept failed to getpeername (107) Transport endpoint is not connected
c=0x51ebde0).fault with nothing to send, going to standby



# cat ceph-osd.9.log
2017-08-02 10:14:35.626290 7fcacf5ef880 0 ceph version 0.94.10 (b1e0532418e4631af01acbc0cedd426f1905f4af), process ceph-osd, pid 17561
2017-08-02 10:14:35.630920 7fcacf5ef880 1 filestore(/var/lib/ceph/tmp/mnt.s_4K4o) mkfs in /var/lib/ceph/tmp/mnt.s_4K4o
2017-08-02 10:14:35.630961 7fcacf5ef880 1 filestore(/var/lib/ceph/tmp/mnt.s_4K4o) mkfs fsid is already set to bcb1ef9a-5940-4ab9-b943-99c396db24be
2017-08-02 10:14:35.633258 7fcacf5ef880 0 filestore(/var/lib/ceph/tmp/mnt.s_4K4o) backend xfs (magic 0x58465342)
2017-08-02 10:14:35.642081 7fcacf5ef880 1 filestore(/var/lib/ceph/tmp/mnt.s_4K4o) leveldb db exists/created
2017-08-02 10:14:35.644697 7fcacf5ef880 1 journal _open /var/lib/ceph/tmp/mnt.s_4K4o/journal fd 11: 5367660544 bytes, block size 4096 bytes, directio = 1, aio = 1
2017-08-02 10:14:35.644946 7fcacf5ef880 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected bcb1ef9a-5940-4ab9-b943-99c396db24be, invalid (someone else's?) journal
2017-08-02 10:14:35.644978 7fcacf5ef880 1 journal close /var/lib/ceph/tmp/mnt.s_4K4o/journal
2017-08-02 10:14:35.647365 7fcacf5ef880 1 journal _open /var/lib/ceph/tmp/mnt.s_4K4o/journal fd 11: 5367660544 bytes, block size 4096 bytes, directio = 1, aio = 1
2017-08-02 10:14:35.647889 7fcacf5ef880 0 filestore(/var/lib/ceph/tmp/mnt.s_4K4o) mkjournal created journal on /var/lib/ceph/tmp/mnt.s_4K4o/journal
2017-08-02 10:14:35.647929 7fcacf5ef880 1 filestore(/var/lib/ceph/tmp/mnt.s_4K4o) mkfs done in /var/lib/ceph/tmp/mnt.s_4K4o
2017-08-02 10:14:35.648033 7fcacf5ef880 0 filestore(/var/lib/ceph/tmp/mnt.s_4K4o) backend xfs (magic 0x58465342)
2017-08-02 10:14:35.651056 7fcacf5ef880 0 genericfilestorebackend(/var/lib/ceph/tmp/mnt.s_4K4o) detect_features: FIEMAP ioctl is supported and appears to work
2017-08-02 10:14:35.651067 7fcacf5ef880 0 genericfilestorebackend(/var/lib/ceph/tmp/mnt.s_4K4o) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2017-08-02 10:14:35.651691 7fcacf5ef880 0 genericfilestorebackend(/var/lib/ceph/tmp/mnt.s_4K4o) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2017-08-02 10:14:35.651737 7fcacf5ef880 0 xfsfilestorebackend(/var/lib/ceph/tmp/mnt.s_4K4o) detect_feature: extsize is disabled by conf
2017-08-02 10:14:35.653901 7fcacf5ef880 0 filestore(/var/lib/ceph/tmp/mnt.s_4K4o) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2017-08-02 10:14:35.656261 7fcacf5ef880 1 journal _open /var/lib/ceph/tmp/mnt.s_4K4o/journal fd 17: 5367660544 bytes, block size 4096 bytes, directio = 1, aio = 1
2017-08-02 10:14:35.658955 7fcacf5ef880 1 journal _open /var/lib/ceph/tmp/mnt.s_4K4o/journal fd 17: 5367660544 bytes, block size 4096 bytes, directio = 1, aio = 1
2017-08-02 10:14:35.659316 7fcacf5ef880 -1 filestore(/var/lib/ceph/tmp/mnt.s_4K4o) could not find -1/23c2fcde/osd_superblock/0 in index: (2) No such file or directory
2017-08-02 10:14:35.666778 7fcacf5ef880 1 journal close /var/lib/ceph/tmp/mnt.s_4K4o/journal
2017-08-02 10:14:35.667387 7fcacf5ef880 -1 created object store /var/lib/ceph/tmp/mnt.s_4K4o journal /var/lib/ceph/tmp/mnt.s_4K4o/journal for osd.9 fsid 92e4d7d6-c1be-4da8-909d-d4463c922bc9
2017-08-02 10:14:35.667490 7fcacf5ef880 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.s_4K4o/keyring: can't open /var/lib/ceph/tmp/mnt.s_4K4o/keyring: (2) No such file or directory
2017-08-02 10:14:35.668326 7fcacf5ef880 -1 created new key in keyring /var/lib/ceph/tmp/mnt.s_4K4o/keyring
2017-08-02 10:14:36.204550 7f829835f880 0 ceph version 0.94.10 (b1e0532418e4631af01acbc0cedd426f1905f4af), process ceph-osd, pid 17762
2017-08-02 10:14:36.235655 7f829835f880 0 filestore(/var/lib/ceph/osd/ceph-9) backend xfs (magic 0x58465342)
2017-08-02 10:14:36.238262 7f829835f880 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-9) detect_features: FIEMAP ioctl is supported and appears to work
2017-08-02 10:14:36.238273 7f829835f880 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-9) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2017-08-02 10:14:36.238981 7f829835f880 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-9) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2017-08-02 10:14:36.239054 7f829835f880 0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-9) detect_feature: extsize is disabled by conf
2017-08-02 10:14:36.242445 7f829835f880 0 filestore(/var/lib/ceph/osd/ceph-9) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2017-08-02 10:14:36.245121 7f829835f880 1 journal _open /var/lib/ceph/osd/ceph-9/journal fd 20: 5367660544 bytes, block size 4096 bytes, directio = 1, aio = 1
2017-08-02 10:14:36.247583 7f829835f880 1 journal _open /var/lib/ceph/osd/ceph-9/journal fd 20: 5367660544 bytes, block size 4096 bytes, directio = 1, aio = 1
2017-08-02 10:14:36.254308 7f829835f880 0 <cls> cls/hello/cls_hello.cc:271: loading cls_hello
2017-08-02 10:14:36.269042 7f829835f880 0 osd.9 0 crush map has features 33816576, adjusting msgr requires for clients
2017-08-02 10:14:36.269052 7f829835f880 0 osd.9 0 crush map has features 33816576 was 8705, adjusting msgr requires for mons
2017-08-02 10:14:36.269057 7f829835f880 0 osd.9 0 crush map has features 33816576, adjusting msgr requires for osds
2017-08-02 10:14:36.269072 7f829835f880 0 osd.9 0 load_pgs
2017-08-02 10:14:36.269147 7f829835f880 0 osd.9 0 load_pgs opened 0 pgs
2017-08-02 10:14:36.270164 7f829835f880 -1 osd.9 0 log_to_monitors {default=true}
2017-08-02 10:14:36.274606 7f8286754700 0 osd.9 0 ignoring osdmap until we have initialized
2017-08-02 10:14:36.274660 7f8286754700 0 osd.9 0 ignoring osdmap until we have initialized
2017-08-02 10:14:36.274818 7f829835f880 0 osd.9 0 done with init, starting boot process
2017-08-02 10:14:36.276061 7f8286754700 0 osd.9 1505 crush map has features 1107558400, adjusting msgr requires for clients
2017-08-02 10:14:36.276069 7f8286754700 0 osd.9 1505 crush map has features 1107558400 was 33825281, adjusting msgr requires for mons
2017-08-02 10:14:36.276074 7f8286754700 0 osd.9 1505 crush map has features 1107558400, adjusting msgr requires for osds
2017-08-02 10:14:40.372656 7f826adf9700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.101:6806/1003873 pipe(0x5f98000 sd=60 :6800 s=0 pgs=0 cs=0 l=0 c=0x51eb860).accept connect_seq 0 vs existing 0 state connecting
2017-08-02 10:14:40.372832 7f826acf8700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.101:6804/4189 pipe(0x5f9d000 sd=63 :6800 s=0 pgs=0 cs=0 l=0 c=0x51eb9c0).accept connect_seq 0 vs existing 0 state wait
2017-08-02 10:14:40.395595 7f826a5f1700 0 -- 10.88.80.101:6800/17762 >> :/0 pipe(0x5f8e000 sd=39 :6800 s=0 pgs=0 cs=0 l=0 c=0x51ebb20).accept failed to getpeername (107) Transport endpoint is not connected
2017-08-02 10:14:40.395758 7f826a4f0700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.103:6804/3911 pipe(0x5f84000 sd=67 :6800 s=0 pgs=0 cs=0 l=0 c=0x5c526e0).accept connect_seq 0 vs existing 0 state wait
2017-08-02 10:14:40.395880 7f826a3ef700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.103:6800/3593 pipe(0x5fca000 sd=69 :6800 s=0 pgs=0 cs=0 l=0 c=0x51eb860).accept connect_seq 0 vs existing 0 state connecting
2017-08-02 10:14:40.395977 7f826a2ee700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.103:6802/3755 pipe(0x5fcf000 sd=70 :6800 s=0 pgs=0 cs=0 l=0 c=0x51eb9c0).accept connect_seq 0 vs existing 0 state connecting
2017-08-02 10:14:40.396160 7f826a1ed700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.102:6804/3868 pipe(0x5fd4000 sd=71 :6800 s=0 pgs=0 cs=0 l=0 c=0x5c52840).accept connect_seq 0 vs existing 0 state connecting
2017-08-02 10:14:40.396274 7f8269feb700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.102:6802/3687 pipe(0x5fde000 sd=73 :6800 s=0 pgs=0 cs=0 l=0 c=0x5c52b00).accept connect_seq 0 vs existing 0 state connecting
2017-08-02 10:14:40.396320 7f826a0ec700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.102:6806/1003510 pipe(0x5fd9000 sd=72 :6800 s=0 pgs=0 cs=0 l=0 c=0x5c529a0).accept connect_seq 0 vs existing 0 state connecting
2017-08-02 10:50:22.371237 7f826aaf6700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.103:6802/3755 pipe(0x5fb1000 sd=61 :46792 s=2 pgs=86 cs=1 l=0 c=0x51ebde0).fault with nothing to send, going to standby
2017-08-02 10:50:22.371249 7f826a2ee700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.102:6806/1003510 pipe(0x5fb6000 sd=65 :40250 s=2 pgs=94 cs=1 l=0 c=0x5c52000).fault with nothing to send, going to standby
2017-08-02 10:50:22.371267 7f8269feb700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.102:6802/3687 pipe(0x5fa2000 sd=68 :58438 s=2 pgs=57 cs=1 l=0 c=0x51eb5a0).fault with nothing to send, going to standby
2017-08-02 10:50:22.371272 7f826acf8700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.101:6804/4189 pipe(0x5f9d000 sd=63 :6800 s=2 pgs=140 cs=1 l=0 c=0x51eb020).fault with nothing to send, going to standby
2017-08-02 10:50:22.371288 7f826a3ef700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.103:6800/3593 pipe(0x5fca000 sd=69 :6800 s=2 pgs=96 cs=1 l=0 c=0x51ebc80).fault with nothing to send, going to standby
2017-08-02 10:50:22.371312 7f826a1ed700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.102:6804/3868 pipe(0x5fd4000 sd=71 :6800 s=2 pgs=47 cs=1 l=0 c=0x51eb700).fault with nothing to send, going to standby
2017-08-02 10:50:22.374160 7f826adf9700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.101:6806/1003873 pipe(0x5f98000 sd=60 :6800 s=2 pgs=220 cs=1 l=0 c=0x51eb2e0).fault with nothing to send, going to standby
2017-08-02 10:50:22.374160 7f826a6f2700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.101:6802/4034 pipe(0x5f89000 sd=59 :60198 s=2 pgs=165 cs=1 l=0 c=0x51eb180).fault with nothing to send, going to standby
2017-08-02 10:50:22.374861 7f826a4f0700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.103:6804/3911 pipe(0x5f84000 sd=67 :6800 s=2 pgs=97 cs=1 l=0 c=0x51eb440).fault with nothing to send, going to standby
2017-08-02 10:59:59.064628 7f8269ae6700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.103:6806/18166 pipe(0x5fcf000 sd=33 :48942 s=2 pgs=7 cs=1 l=0 c=0x51ebb20).fault with nothing to send, going to standby
2017-08-02 11:15:41.817693 7f82691dd700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.103:6806/18166 pipe(0x64cd000 sd=66 :6800 s=0 pgs=0 cs=0 l=0 c=0x5c531e0).accept connect_seq 1 vs existing 1 state standby
2017-08-02 11:15:41.817987 7f82691dd700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.103:6806/18166 pipe(0x64cd000 sd=66 :6800 s=0 pgs=0 cs=0 l=0 c=0x5c531e0).accept connect_seq 2 vs existing 1 state standby
2017-08-02 11:42:07.935292 7f82691dd700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.103:6806/18166 pipe(0x64cd000 sd=66 :6800 s=2 pgs=20 cs=3 l=0 c=0x51ebb20).fault with nothing to send, going to standby
2017-08-02 12:09:00.295967 7f8269be7700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.102:6804/3868 pipe(0x5fcf000 sd=74 :6800 s=0 pgs=0 cs=0 l=0 c=0x5c53340).accept connect_seq 1 vs existing 1 state standby
2017-08-02 12:09:00.296318 7f8269be7700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.102:6804/3868 pipe(0x5fcf000 sd=74 :6800 s=0 pgs=0 cs=0 l=0 c=0x5c53340).accept connect_seq 2 vs existing 1 state standby
2017-08-02 12:15:18.159932 7f82691dd700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.103:6806/18166 pipe(0x64cd000 sd=66 :50918 s=2 pgs=35 cs=5 l=0 c=0x51ebb20).fault with nothing to send, going to standby
2017-08-02 12:24:00.366288 7f8269be7700 0 -- 10.88.80.101:6800/17762 >> 10.88.80.102:6804/3868 pipe(0x5fcf000 sd=74 :6800 s=2 pgs=55 cs=3 l=0 c=0x51eb700).fault with nothing to send, going to standby
 
What information does the ceph-mon.log give us? Are the auth keys for the new OSDs listed in ceph auth list?
 
Yes, the new OSDs are shown in "ceph auth list".

in "ceph-mon.log" this line shows up with current IO usage at the end:

... pgmap v21172397: 192 pgs: 192 active+clean; 3312 GB data, 11185 GB used, 10206 GB / 21391 GB avail; ...
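
For completeness, the key of an individual OSD can also be checked directly, and the monitor log can be filtered for the new OSD IDs, for example:

# ceph auth get osd.9
# grep -E 'osd\.(9|10|11)' /var/log/ceph/ceph-mon.*.log | tail -n 50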
 
What does your CRUSH map look like? Are the three OSDs placed directly in the root bucket, or under hosts?
 
The three OSDs are not visible at all in the CRUSH map (at least not in the GUI version of the CRUSH map).
Could I add the new OSDs manually to the CRUSH map?
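
For reference, the full CRUSH map (independent of the GUI) can be dumped and inspected like this, to confirm whether the new OSDs have an entry in it at all:

# ceph osd getcrushmap -o crushmap.bin
# crushtool -d crushmap.bin -o crushmap.txt
# grep -E 'osd\.(9|10|11)' crushmap.txt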
 
Yes, by using the following command: ceph osd crush add osd.ID 0 host=<HOSTNAME>
You need to adjust the command to your setup: change the ID and specify the host the OSD belongs to. Once you have set the OSD into the right bucket, you can adjust its weight.
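
Applied to this setup (per the posted ceph.conf: osd.9 on ceph01, osd.10 on ceph03, osd.11 on ceph02), and assuming the new disks should end up with roughly the same weight as the existing ones (~1.74, i.e. the disk size in TiB), this would look something like:

# ceph osd crush add osd.9 0 host=ceph01
# ceph osd crush add osd.10 0 host=ceph03
# ceph osd crush add osd.11 0 host=ceph02

# ceph osd crush reweight osd.9 1.74
# ceph osd crush reweight osd.10 1.74
# ceph osd crush reweight osd.11 1.74

Adding the OSDs with weight 0 first and raising the weight afterwards keeps the initial backfill under control. Manual placement is also consistent with the posted ceph.conf, which sets "osd crush update on start = false", so the OSDs do not add themselves to the CRUSH map when they start.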
 

Great! That did the trick. The new OSDs are backfilling right now.
Thanks for the help!
Lutz