Ceph crush retention after reboot

Spiros Pap

Aug 1, 2017
Hi all,

I have a problem with my Ceph installation: my CRUSH map changes after a reboot.
I have configured my CRUSH map like this (only the relevant part shown):
host px1 {
    item osd.1 weight 1.637
    item osd.2 weight 1.637
    item osd.3 weight 1.637
    item osd.4 weight 1.637
}
host px1-ssd {
    item osd.5 weight 0.830
    item osd.6 weight 0.830
    item osd.7 weight 0.830
}

which means that I have created two host buckets: px1 contains the SAS disks and px1-ssd contains the SSD disks. As you can guess, all disks in px1 and px1-ssd belong to the same physical host. This config works for me in general. The problem is that when I reboot the host, all OSDs in px1-ssd move under px1, like this:
host px1 {
    item osd.1 weight 1.637
    item osd.2 weight 1.637
    item osd.3 weight 1.637
    item osd.4 weight 1.637
    item osd.5 weight 0.830
    item osd.6 weight 0.830
    item osd.7 weight 0.830
}
host px1-ssd {
}

'px1' is the name of the host.

Is that normal behaviour? I want my CRUSH map to stay as it is and not change on every reboot.
How can I fix this?

Thanks,
Sp
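
For reference, a likely explanation (not stated in this thread): by default, each OSD re-registers its CRUSH location when it starts (the `osd_crush_update_on_start` option defaults to `true`), which moves it back under the bucket matching its hostname. A common workaround, sketched here assuming the standard ceph.conf layout, is to disable that behaviour so hand-edited placements survive reboots:

```ini
# /etc/ceph/ceph.conf -- sketch, assuming you maintain the CRUSH map by hand.
# With this set, OSDs no longer update their own CRUSH location on startup,
# so manually placed OSDs (e.g. under px1-ssd) stay where you put them.
[osd]
osd crush update on start = false
```

The trade-off is that newly created OSDs then also need to be placed in the CRUSH map manually; an alternative is a custom `crush location hook` script that reports the intended location per OSD.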
 
Hi,

Yes, all nodes report the same CRUSH map. When I make a change, it is updated on all hosts (as far as I can tell by fetching the crushmap from each host). The nodes are in a cluster, and I guess the CRUSH map is distributed across all nodes.

I'll look into device classes.
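
Device classes (available since Ceph Luminous) address this more cleanly: the SSD OSDs stay under the real host bucket px1 and are selected by class rather than by a separate fake host. Each OSD is tagged with a class (e.g. `ceph osd crush set-device-class ssd osd.5`, after `ceph osd crush rm-device-class osd.5` if a class was auto-detected), and a rule restricts placement to that class. A sketch of such a rule, in the same decompiled-crushmap syntax as above (the rule name and id are illustrative):

```
rule replicated_ssd {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}
```

The same rule can also be created without editing the map, via `ceph osd crush rule create-replicated replicated_ssd default host ssd`.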

Sp
 
