finding ceph disk / bluestore relation

brosky

Well-Known Member
Oct 13, 2015
Hi,

I use HDDs for Ceph OSDs and NVMes partitioned for the BlueStore DB/WAL.

I have a disk that won't come up. I've run a destroy task (with the cleanup option) so I can add it again, but the BlueStore partition didn't get cleaned and I can't find which one it was.

Example server:

12 disks, 2 nvme's
disks 1-6 use 6 partitions on the first NVMe
disks 7-12 use 6 partitions on the second NVMe
like:
sda -> nvme0n1p1
sdb -> nvme0n1p2
...
sdf -> nvme0n1p6
...
sdg -> nvme1n1p1
...
sdl -> nvme1n1p6
but during deployment a task sometimes failed to get a lock, so I had to use a different partition (and the template layout got messed up)

How can I find out which DB/WAL partition a given OSD uses?
I've been playing with ceph-volume but I could not untangle the IDs.
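As a sketch of how to get at this: ceph-volume lists the data and db device per OSD, and the OSD metadata records the db device node as well. The OSD id and device names below are placeholders, not taken from your cluster:

```shell
# On the OSD node, ceph-volume shows the data and [db] device for each OSD:
#   ceph-volume lvm list
#
# From a monitor, the OSD metadata also records the db device node
# (osd id 3 is a placeholder):
#   ceph osd metadata 3 | grep bluefs_db
#
# A hypothetical line of that metadata output, for illustration:
sample='"bluefs_db_dev_node": "/dev/nvme0n1p4",'
# Pull the partition path out of the JSON fragment:
db_dev=$(printf '%s\n' "$sample" | sed -E 's/.*: *"([^"]+)".*/\1/')
echo "$db_dev"   # -> /dev/nvme0n1p4
```

Running `ceph-volume lvm list` on each node should give the full HDD-to-nvme-partition mapping in one pass.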
 
Maybe this can help you:
Code:
ceph device ls

You have 2 NVMes with 6 partitions each, and every partition has its own OSD daemon? Do I understand this correctly?
 
I have 12 HDDs and 2 NVMes. Each 18TB HDD uses a 164G NVMe partition for DB/WAL, with one OSD daemon per HDD.
To split the IOPS load, I've put 6 partitions on each NVMe; the rest of the available space is used for VMs.
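For the original problem of the leftover db partition: once you know which nvme partition the destroyed OSD was using, a rough sketch for cleaning it and re-adding the disk might look like this (device names are placeholders; double-check them on your node before running anything destructive):

```shell
# Placeholder device names -- substitute your actual HDD and nvme partition.
hdd=/dev/sda
db_part=/dev/nvme0n1p4

# 1) Wipe the stale BlueStore/LVM data from the leftover db partition
#    (destructive, so verify the device first):
#      ceph-volume lvm zap "$db_part" --destroy
#
# 2) Re-create the OSD with the HDD as data and that partition as block.db;
#    the WAL lives on the db partition when --block.wal is not given:
#      ceph-volume lvm create --bluestore --data "$hdd" --block.db "$db_part"
echo "ceph-volume lvm create --bluestore --data $hdd --block.db $db_part"
```

Keeping the same partition for the re-created OSD preserves the 6-per-nvme template instead of grabbing whichever partition is free.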