Removing the VG just prompted to remove the LV along with it (it's been a while since I've had to smoke a VG). In any case, I also did a "pvremove /dev/sdg" since there was a leftover LVM signature on the disk. THEN I recreated the OSD. The new OSD is online and data is starting to move around.
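For anyone hitting the same thing, this is roughly the cleanup sequence, a sketch assuming the stale VG is named `ceph-xxxx` and the disk is `/dev/sdg` (substitute your own VG name and device; on Proxmox you could also use `pveceph osd create` instead of `ceph-volume` directly):

Code:
# Find the stale volume group left behind by the old OSD
vgs

# Remove it -- vgremove prompts to remove the contained LV as well
vgremove ceph-xxxx        # hypothetical VG name, use yours from vgs

# Wipe the remaining LVM PV signature off the raw disk
pvremove /dev/sdg

# Recreate the OSD on the now-clean disk
ceph-volume lvm create --data /dev/sdg

`ceph-volume lvm zap /dev/sdg --destroy` is the one-shot alternative that tears down the LV/VG/PV and wipes the disk in a single step.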
I'm going to pretend it's a cake and keep an eye on it, since it looks like we're moving data around now:
Code:
# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 36.67854 - 37 TiB 25 TiB 25 TiB 3.2 MiB 68 GiB 11 TiB 69.41 1.00 - root default
-3 12.22618 - 12 TiB 8.5 TiB 8.4 TiB 1.1 MiB 23 GiB 3.8 TiB 69.25 1.00 - host hv01
0 ssd 1.74660 1.00000 1.7 TiB 1.2 TiB 1.2 TiB 118 KiB 3.3 GiB 589 GiB 67.04 0.97 79 up osd.0
3 ssd 1.74660 1.00000 1.7 TiB 1.2 TiB 1.2 TiB 110 KiB 3.6 GiB 556 GiB 68.91 0.99 84 up osd.3
6 ssd 1.74660 1.00000 1.7 TiB 1.2 TiB 1.2 TiB 103 KiB 3.3 GiB 542 GiB 69.68 1.00 81 up osd.6
9 ssd 1.74660 1.00000 1.7 TiB 1.3 TiB 1.3 TiB 115 KiB 3.4 GiB 456 GiB 74.51 1.07 92 up osd.9
15 ssd 1.74660 1.00000 1.7 TiB 1.2 TiB 1.2 TiB 289 KiB 3.3 GiB 538 GiB 69.93 1.01 87 up osd.15
16 ssd 1.74660 1.00000 1.7 TiB 1.2 TiB 1.2 TiB 106 KiB 2.8 GiB 562 GiB 68.57 0.99 77 up osd.16
17 ssd 1.74660 1.00000 1.7 TiB 1.2 TiB 1.2 TiB 243 KiB 3.1 GiB 606 GiB 66.10 0.95 77 up osd.17
-5 12.22618 - 12 TiB 8.4 TiB 8.3 TiB 1.2 MiB 24 GiB 3.9 TiB 68.35 0.98 - host hv02
1 ssd 1.74660 1.00000 1.7 TiB 1.3 TiB 1.3 TiB 110 KiB 3.6 GiB 477 GiB 73.36 1.06 88 up osd.1
4 ssd 1.74660 1.00000 1.7 TiB 1.2 TiB 1.2 TiB 106 KiB 3.1 GiB 606 GiB 66.10 0.95 82 up osd.4
7 ssd 1.74660 1.00000 1.7 TiB 1.2 TiB 1.2 TiB 272 KiB 3.3 GiB 542 GiB 69.69 1.00 85 up osd.7
10 ssd 1.74660 1.00000 1.7 TiB 1.3 TiB 1.3 TiB 271 KiB 3.6 GiB 474 GiB 73.48 1.06 85 up osd.10
14 ssd 1.74660 1.00000 1.7 TiB 1.2 TiB 1.2 TiB 115 KiB 3.5 GiB 559 GiB 68.74 0.99 79 up osd.14
18 ssd 1.74660 1.00000 1.7 TiB 1.1 TiB 1.1 TiB 242 KiB 3.4 GiB 631 GiB 64.74 0.93 80 up osd.18
19 ssd 1.74660 1.00000 1.7 TiB 1.1 TiB 1.1 TiB 101 KiB 3.1 GiB 674 GiB 62.33 0.90 78 up osd.19
-7 12.22618 - 12 TiB 8.6 TiB 8.6 TiB 940 KiB 22 GiB 3.6 TiB 70.64 1.02 - host hv03
2 ssd 1.74660 1.00000 1.7 TiB 1.5 TiB 1.5 TiB 303 KiB 3.7 GiB 287 GiB 83.98 1.21 96 up osd.2
5 ssd 1.74660 0.85004 1.7 TiB 1.4 TiB 1.4 TiB 130 KiB 3.9 GiB 359 GiB 79.90 1.15 95 up osd.5
8 ssd 1.74660 1.00000 1.7 TiB 1.5 TiB 1.5 TiB 123 KiB 3.7 GiB 297 GiB 83.41 1.20 94 up osd.8
11 ssd 1.74660 1.00000 1.7 TiB 1.4 TiB 1.4 TiB 131 KiB 3.7 GiB 363 GiB 79.72 1.15 92 up osd.11
12 ssd 1.74660 1.00000 1.7 TiB 1.4 TiB 1.4 TiB 132 KiB 3.1 GiB 394 GiB 77.98 1.12 94 up osd.12
13 ssd 1.74660 1.00000 1.7 TiB 1.4 TiB 1.4 TiB 121 KiB 3.0 GiB 389 GiB 78.26 1.13 96 up osd.13
20 ssd 1.74660 1.00000 1.7 TiB 201 GiB 200 GiB 0 B 519 MiB 1.6 TiB 11.23 0.16 10 up osd.20
TOTAL 37 TiB 25 TiB 25 TiB 3.2 MiB 68 GiB 11 TiB 69.41
MIN/MAX VAR: 0.16/1.21 STDDEV: 14.37
Code:
# ceph status
cluster:
id: b7565e52-6907-49f9-85b9-526c3ce94676
health: HEALTH_OK
services:
mon: 3 daemons, quorum hv01,hv02,hv03 (age 3M)
mgr: hv01(active, since 3M), standbys: hv02, hv03
mds: 1/1 daemons up, 2 standby
osd: 21 osds: 21 up (since 10m), 21 in (since 11m); 90 remapped pgs
data:
volumes: 1/1 healthy
pools: 4 pools, 577 pgs
objects: 2.21M objects, 8.4 TiB
usage: 25 TiB used, 11 TiB / 37 TiB avail
pgs: 317167/6624987 objects misplaced (4.787%)
487 active+clean
90 active+remapped+backfilling
io:
client: 37 KiB/s rd, 18 MiB/s wr, 2 op/s rd, 299 op/s wr
recovery: 1.3 GiB/s, 333 objects/s