CRUSH map configuration not persistent

Bruggefan

New Member
Feb 13, 2024
Dear all,

Yesterday I was installing a new Ceph cluster in Proxmox with 3 nodes.
While doing this I wanted to separate the SSD disks from the HDD disks.

- Therefore I created 2 new roots (1 for SSD OSDs, 1 for HDD OSDs) with 3 nodes in every root.

- I added my SSD OSDs to the appropriate root and created a pool of SSD storage targeting those buckets with the appropriate CRUSH rules.

- Did the same with all HDD disks (see the CLI sketch right after this list).
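
For reference, a rough sketch of what those steps look like on the CLI; the bucket names (ssd-root, node1-ssd), the OSD id and the weight 1.0 are placeholders of mine, not values from this setup:

Code:
# new root and a per-node host bucket for the SSD tier:
ceph osd crush add-bucket ssd-root root
ceph osd crush add-bucket node1-ssd host
ceph osd crush move node1-ssd root=ssd-root

# place an OSD under the new host bucket:
ceph osd crush set osd.0 1.0 root=ssd-root host=node1-ssd

# replicated rule that targets the new root:
ceph osd crush rule create-replicated ssd-rule ssd-root host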


Afterwards, ceph osd crush tree showed my configuration perfectly, as did the GUI.
I installed some VMs on it with Windows Server and pfSense and all went well.

As this cluster is still being configured and I was planning to put it in production tomorrow, I shut down all VMs and then all nodes of the cluster.
(yeah, I try to do some power saving as well ;) )
Today I started all nodes again and wanted to continue my configuration, but I saw my VMs wouldn't start.
In the end I can see all my OSDs are back in the default root, and therefore I get trouble with my pools, of course...


You can see my current CRUSH tree in the attached screenshot (1707834750542.png).
OSD.0 should be in node1-ssd, OSD.1 in node2-ssd, ...
OSD.3 & .4 should be in node1-hdd, OSD.5 & .6 should be in node2-hdd, ...

Can someone help me out with why the configuration of my OSDs got lost and how to prevent it in the future?
How can I save the progress I made?

Thanks in advance!
 
I don't know why it happened, but I'm not sure if PVE Ceph allows multiple "roots". In your case, just use the default root, use device classes in the OSD creation (via the web UI), but make sure you create a custom CRUSH rule that applies to the device class used. After that, create pools and assign the correct CRUSH rule. This post will help you out:

https://forum.proxmox.com/threads/ceph-classes-rules-and-pools.60013/
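
For illustration, a hedged sketch of that device-class approach on the CLI; the rule and pool names and the PG count are placeholders of mine (on Proxmox the pool step is normally done in the web UI or with pveceph):

Code:
# one replicated rule per device class, all on the single default root:
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd crush rule create-replicated replicated_hdd default host hdd

# create pools bound to the matching rule:
ceph osd pool create ssd-pool 128 128 replicated replicated_ssd
ceph osd pool create hdd-pool 128 128 replicated replicated_hdd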
 
Thank you for your answers.

I did try to use it the way you both mentioned. The result of that approach was an SSD pool of approximately 1 TB and an HDD pool of also approximately 1 TB.
That was obviously not what I was trying to accomplish, which is why I was googling around and came to the solution I have now...
If Proxmox is overriding settings, is it perhaps possible to change a config file to make this a permanent configuration?

Thanks in advance!
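
One mechanism that would explain OSDs jumping back under the default root on restart is Ceph's osd_crush_update_on_start option (enabled by default), which makes every OSD re-register its CRUSH location under its host bucket at startup. Assuming that is what is happening here, disabling it would look like the sketch below; whether the Proxmox tooling copes well with custom roots is a separate question:

Code:
# at runtime, via the cluster configuration database:
ceph config set osd osd_crush_update_on_start false

# or persistently in ceph.conf (on Proxmox: /etc/pve/ceph.conf):
[osd]
osd crush update on start = false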
 
I'm not sure what you actually did, but I can confirm that the way I posted is absolutely correct, and also that you only have 1 TB of SSD then. How do you get to the assumption it should NOT be 1 TB? You only have 1 SSD per host, which is 1 TB, and Ceph uses 3-way replicas. So you can only use a third of the sum of all the disks you have in a device class (like ssd).
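
To spell out the arithmetic under that assumption (3 hosts, one 1 TB SSD each, pool size = 3):

Code:
raw SSD capacity           = 3 hosts x 1 TB = 3 TB
usable with 3-way replicas = 3 TB / 3       = 1 TB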

So whatever you have configured, I'm sure there's no official (Proxmox) way to make this permanent, and I'm 99% sure that your setup has some big flaws which might cause trouble in the future.
 
