Hello,
I've added some more drives to our 3-node Ceph cluster and started creating OSDs, but I accidentally created an "LVM, Ceph (DB)" disk instead of an OSD. I do not need a separate DB disk. How can I destroy it and re-create it as a regular OSD?

Actually, I made the same mistake on two nodes.
Here's the output of lvs on node02:
Code:
root@node02:~# lvs
LV                                             VG                                        Attr                LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
osd-block-0c9504ae-ad9a-4b2a-98e1-5ed4df87c851 ceph-033f48d3-3a94-4f35-830b-20869ef011c7 -wi-ao----  894.25g
osd-db-64a63618-e117-46e5-b27f-37ffd16ffb67    ceph-0523f0a5-a21d-4a1e-a125-0537e71f4e63 -wi-ao----  178.85g
osd-db-8164944d-ddfd-4126-937c-5ecaec16c465    ceph-0523f0a5-a21d-4a1e-a125-0537e71f4e63 -wi-ao----  178.85g
osd-db-d7a4ef68-6a0d-46cc-86bf-1341552c276e    ceph-0523f0a5-a21d-4a1e-a125-0537e71f4e63 -wi-ao----  178.85g
osd-block-445cf80e-b57a-4c74-b7a8-f9ff17ee8bbf ceph-13b37b57-14f0-4e6f-b11c-672565f32a5d -wi-ao----   <1.75t
osd-block-56d94ac7-f3dd-473f-82f9-87c50d6cab34 ceph-1bc66785-d05c-4805-97d4-c3bd083bca65 -wi-ao----   <1.75t
osd-block-a92621bd-318f-44d4-b16a-67a9b1d5c4c8 ceph-29ca6217-7aa8-4a05-b7af-9c52699b25ce -wi-ao----  894.25g
osd-block-51dadc0b-cf89-47f3-87f9-186408a0fc00 ceph-4ac82f9b-5bad-4f8c-b94c-89ab8f650dd7 -wi-ao----  894.25g
osd-block-cf7b065d-e537-4d70-9d39-f13cabaa5943 ceph-83b669ce-2d2f-429c-96a2-e2c052fcabe0 -wi-ao----  894.25g
osd-block-2fc2e860-f0a8-4bab-9fd5-e3e2f21104da ceph-935425ed-2dd8-4d18-afd8-6aa27537b0b1 -wi-ao----  894.25g
osd-block-5dfcd6db-e0eb-48ce-8410-69ad5411d7a7 ceph-a04db51c-0d81-43cc-a2e2-eaadbf37c437 -wi-ao----  894.25g
osd-block-dfbfa857-934a-46f1-bcdc-017122b61869 ceph-aabb36b6-64a0-4f26-a573-4a09b831822d -wi-ao----  894.25g
osd-block-f8765e20-13f3-4f77-8424-799072bc30be ceph-ba466780-77e2-44c0-8fad-2068b4794d54 -wi-ao----   <1.75t
osd-block-0a1102c8-6718-4bfd-b6e6-2f215e2bbaab ceph-d94bdeb7-4225-4980-a7c1-dbc5bae1da68 -wi-ao----  894.25g
Here's the output of lvs on node03:
Code:
root@node03:~# lvs
LV                                             VG                                        Attr                LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
osd-block-52efcba8-2e98-4354-9993-c1c3c413addd ceph-4b342700-2b24-44e3-958b-12bfc48224ca -wi-ao----  894.25g
osd-block-e13598a2-4ff2-48fc-b878-3b84d89c8f1c ceph-546109cb-6325-4d35-afb7-7b7259085980 -wi-ao----  894.25g
osd-block-901edc7d-a228-447a-b6c0-b1cb5a9ffe8c ceph-5fdcb7e7-3098-41e5-9b74-27270549f81d -wi-ao----  894.25g
osd-db-063b2afe-2810-4fff-bbf4-9db6f1a5b9d7    ceph-8259250e-a95e-4462-a3d7-944e3ff69e59 -wi-ao----  178.85g
osd-block-7c0d4da4-f97c-4244-a00a-42940a74d38a ceph-8fbe9fe8-97ce-406a-aca2-761fd0f23355 -wi-ao----  894.25g
osd-block-15a3a490-a689-4433-bdad-afeab5fc1653 ceph-92b26700-eabb-4f42-bec6-de6c85cdb368 -wi-ao----  894.25g
osd-block-04f67f66-5f9b-44c1-b272-945f5a996478 ceph-ab918c61-a83c-482d-89a1-5ad237e9f946 -wi-ao----  894.25g
osd-block-72d9bd70-3e5d-4872-9204-de84824ac055 ceph-bfe22cdd-57e1-4015-ada2-fb9b620bce4a -wi-ao----  894.25g
osd-block-7c5da359-c19f-4efb-95c1-e2e27dd676b8 ceph-ef35af5d-79dc-4a0d-a2ce-8e0af5558a2e -wi-ao----   <1.75t
osd-block-b0cf0322-d99e-4572-9081-35b67a47a7d3 ceph-f1dd8caf-3f5f-4ae3-bfed-46459d7cf987 -wi-ao----  894.25g
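
From what I've read, I'm guessing the cleanup goes something like the sketch below (not tested: ceph-0523f0a5-... is the DB volume group from the node02 output above, /dev/sdX is a placeholder for whichever physical disk backs it, and I'm assuming no running OSD actually references those DB LVs):

Code:
# Find which physical disk backs the DB volume group (node02's DB VG from the lvs output above)
pvs | grep ceph-0523f0a5-a21d-4a1e-a125-0537e71f4e63

# Remove the DB LVs together with their volume group
vgremove ceph-0523f0a5-a21d-4a1e-a125-0537e71f4e63

# Wipe the disk so it shows up as unused again (/dev/sdX is a placeholder)
ceph-volume lvm zap --destroy /dev/sdX

# Re-create it as a regular OSD
pveceph osd create /dev/sdX

Is that the right approach, or is there a safer way to do this?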