I have been toying around with Ceph CRUSH map rules. When I try to use the rule below, all my monitors die and the cluster loses quorum. The rule is similar to the one in the post provided by mo_, except that I am trying to split by datacenter rather than by rack.
rule dc {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step choose firstn 2 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}
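For what it's worth, this is roughly how I plan to dry-run rules with crushtool next time before injecting them into the live cluster (the file names are just placeholders I picked):

ceph osd getcrushmap -o crushmap.bin       # grab the current compiled crush map
crushtool -d crushmap.bin -o crushmap.txt  # decompile it so the rules can be edited
# ... edit crushmap.txt, then recompile ...
crushtool -c crushmap.txt -o crushmap.new
# simulate the rule (rule 0 here) for 3 replicas and print the resulting OSD sets
crushtool -i crushmap.new --test --rule 0 --num-rep 3 --show-mappings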
After a lot of trial and error I came up with the rule below, which is closer but still not quite right: I end up with 2 copies in one datacenter, but I would prefer that to be the first datacenter. I am unsure how to work around this.
# rules
rule dc {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step choose firstn 2 type datacenter
    step chooseleaf firstn -1 type host
    step emit
}
I'm still trying to comprehend how these rules work, but I am having a heck of a time wrapping my head around the concept.
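One idea I want to try next, based on my reading of the CRUSH docs, is to take each datacenter explicitly so the first two copies always land in the primary one. Something like this (untested, and dc1/dc2 are just stand-ins for my actual datacenter bucket names):

rule dc_pinned {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    # first two replicas on different hosts in the primary datacenter
    step take dc1
    step chooseleaf firstn 2 type host
    step emit
    # whatever replicas remain (num_rep - 2) on hosts in the other datacenter
    step take dc2
    step chooseleaf firstn -2 type host
    step emit
}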
Another thing I find interesting is working out which objects are located on which OSDs. This is what I am doing.
First, determine the object names:
root@cephnode1:/etc/pve# rados -p Ceph ls | grep vm
rbd_id.vm-101-disk-1
rbd_id.vm-100-disk-1
As you can see, I have two VM disks. I can then determine where they are located by doing the following:
root@cephnode1:/etc/pve# ceph osd map Ceph rbd_id.vm-100-disk-1
osdmap e359 pool 'Ceph' (5) object 'rbd_id.vm-100-disk-1' -> pg 5.2ef8a3ea (5.ea) -> up ([2,1,3], p2) acting ([2,1,3], p2)
root@cephnode1:/etc/pve# ceph osd map Ceph rbd_id.vm-101-disk-1
osdmap e359 pool 'Ceph' (5) object 'rbd_id.vm-101-disk-1' -> pg 5.512a6f54 (5.54) -> up ([1,0,3], p1) acting ([1,0,3], p1)
What throws me off is that I can do the same for objects that don't exist, and it still produces a mapping as if they did.
root@cephnode1:/etc/pve# ceph osd map Ceph rbd_id.vm-104-disk-5
osdmap e359 pool 'Ceph' (5) object 'rbd_id.vm-104-disk-5' -> pg 5.63f06384 (5.84) -> up ([3,5,1], p3) acting ([3,5,1], p3)
vm-104-disk-5 doesn't even exist, yet it still returns a mapping as if it did. Just odd.
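If I understand it correctly, that is because ceph osd map only runs the object name through the hash/CRUSH calculation to compute where the object would be stored; it never checks whether the object actually exists. To confirm existence I would stat the object directly, e.g.:

rados -p Ceph stat rbd_id.vm-100-disk-1  # real object, prints mtime and size
rados -p Ceph stat rbd_id.vm-104-disk-5  # made-up object, should fail with a "No such file or directory" error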