Hi,
I have created a custom crushmap with separate root entries for my HDDs and SSDs, as described here:
http://docs.ceph.com/docs/master/ra...ap/#placing-different-pools-on-different-osds
Code:
ceph osd crush tree
[
{
"id": -11,
"name": "hdd-root",
"type": "root",
"type_id": 11,
"items": []
},
{
"id": -2,
"name": "ssd-root",
"type": "root",
"type_id": 11,
"items": [
{
"id": -13,
"name": "virt01-ssd",
"type": "host",
"type_id": 2,
"items": [
{
"id": 11,
"name": "osd.11",
"type": "osd",
"type_id": 0,
"crush_weight": 1.000000,
"depth": 2
}
]
},
{
"id": -14,
"name": "virt02-ssd",
"type": "host",
"type_id": 2,
"items": [
{
"id": 12,
"name": "osd.12",
"type": "osd",
"type_id": 0,
"crush_weight": 1.000000,
"depth": 2
}
]
},
{
"id": -3,
"name": "storage01-ssd",
"type": "host",
"type_id": 2,
"items": []
}
]
},
{
"id": -1,
"name": "default",
"type": "root",
"type_id": 11,
"items": [
{
"id": -4,
"name": "storage01",
"type": "host",
"type_id": 2,
"items": [
{
"id": 14,
"name": "osd.14",
"type": "osd",
"type_id": 0,
"crush_weight": 0.899994,
"depth": 2
},
{
"id": 15,
"name": "osd.15",
"type": "osd",
"type_id": 0,
"crush_weight": 0.899994,
"depth": 2
},
{
"id": 16,
"name": "osd.16",
"type": "osd",
"type_id": 0,
"crush_weight": 0.899994,
"depth": 2
},
{
"id": 17,
"name": "osd.17",
"type": "osd",
"type_id": 0,
"crush_weight": 0.899994,
"depth": 2
},
{
"id": 18,
"name": "osd.18",
"type": "osd",
"type_id": 0,
"crush_weight": 0.899994,
"depth": 2
},
{
"id": 19,
"name": "osd.19",
"type": "osd",
"type_id": 0,
"crush_weight": 0.899994,
"depth": 2
},
{
"id": 20,
"name": "osd.20",
"type": "osd",
"type_id": 0,
"crush_weight": 0.899994,
"depth": 2
}
]
},
{
"id": -5,
"name": "virt02",
"type": "host",
"type_id": 2,
"items": [
{
"id": 0,
"name": "osd.0",
"type": "osd",
"type_id": 0,
"crush_weight": 0.907990,
"depth": 2
},
{
"id": 2,
"name": "osd.2",
"type": "osd",
"type_id": 0,
"crush_weight": 0.907990,
"depth": 2
},
{
"id": 1,
"name": "osd.1",
"type": "osd",
"type_id": 0,
"crush_weight": 0.907990,
"depth": 2
},
{
"id": 3,
"name": "osd.3",
"type": "osd",
"type_id": 0,
"crush_weight": 0.907990,
"depth": 2
},
{
"id": 4,
"name": "osd.4",
"type": "osd",
"type_id": 0,
"crush_weight": 0.907990,
"depth": 2
}
]
},
{
"id": -6,
"name": "virt01",
"type": "host",
"type_id": 2,
"items": [
{
"id": 5,
"name": "osd.5",
"type": "osd",
"type_id": 0,
"crush_weight": 0.907990,
"depth": 2
},
{
"id": 6,
"name": "osd.6",
"type": "osd",
"type_id": 0,
"crush_weight": 0.907990,
"depth": 2
},
{
"id": 7,
"name": "osd.7",
"type": "osd",
"type_id": 0,
"crush_weight": 0.907990,
"depth": 2
},
{
"id": 8,
"name": "osd.8",
"type": "osd",
"type_id": 0,
"crush_weight": 0.907990,
"depth": 2
},
{
"id": 9,
"name": "osd.9",
"type": "osd",
"type_id": 0,
"crush_weight": 0.907990,
"depth": 2
}
]
}
]
}
]
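For reference, separate roots like the ones in the tree above can be built with commands along these lines (only a sketch: the bucket names are taken from my tree, while the weight and the pool name "my-ssd-pool" are placeholders):
Code:
# create a separate root and a host bucket underneath it
ceph osd crush add-bucket ssd-root root
ceph osd crush add-bucket virt01-ssd host
ceph osd crush move virt01-ssd root=ssd-root
# place the SSD OSD under that host
ceph osd crush set osd.11 1.0 root=ssd-root host=virt01-ssd
# create a rule that only selects from ssd-root, then assign it to the SSD pool
ceph osd crush rule create-simple ssd-rule ssd-root host
ceph osd pool set my-ssd-pool crush_rule ssd-rule
# (older releases use "crush_ruleset" with the rule's numeric id instead of "crush_rule")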
" osd crush update on start = false"
to /etc/pve/ceph.conf
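The option can live in the [global] or [osd] section, e.g.:
Code:
[osd]
    # keep OSDs where the custom crushmap puts them instead of
    # re-creating them under root=default when the daemon starts
    osd crush update on start = false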
This worked fine.
Now, after the latest updates, I found osd.11 and osd.12 (the SSDs from "ssd-root") back under the default root.
So it seems that
"osd crush update on start = false"
is being ignored now.
I suspect a start script has changed.
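As a workaround, the running value can be checked via the admin socket and the OSDs moved back by hand, e.g. (run on the node that hosts the OSD; weights as in the tree above):
Code:
# show what the running daemon actually uses
ceph daemon osd.11 config get osd_crush_update_on_start
# move the SSD OSDs back to their intended location
ceph osd crush set osd.11 1.0 root=ssd-root host=virt01-ssd
ceph osd crush set osd.12 1.0 root=ssd-root host=virt02-ssd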
Am I missing something, or is this worth a bug report?
Thank you for your thoughts!
Markus