Dear all,
Yesterday I was installing a new Ceph cluster on Proxmox with 3 nodes.
While doing this, I wanted to separate the SSD disks from the HDD disks.
- Therefore I created 2 new roots (1 for SSD OSDs, 1 for HDD OSDs) with 3 nodes in every root.
- I added my SSD OSDs to the appropriate root and created an SSD storage pool targeting those buckets with the appropriate CRUSH rules (see the command sketch after this list).
- I did the same with all the HDD disks.
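Roughly the commands I used, reconstructed from memory (bucket, rule and pool names are from my setup; the weight and PG counts are placeholders):

  # create the two new roots
  ceph osd crush add-bucket ssds root
  ceph osd crush add-bucket hdds root
  # one host bucket per node under each root (same again for node2/node3)
  ceph osd crush add-bucket node1-ssd host
  ceph osd crush move node1-ssd root=ssds
  ceph osd crush add-bucket node1-hdd host
  ceph osd crush move node1-hdd root=hdds
  # place each OSD in its host bucket (1.0 is a placeholder weight)
  ceph osd crush set osd.0 1.0 host=node1-ssd
  # one rule + pool per root (create-simple instead of create-replicated on older releases)
  ceph osd crush rule create-replicated ssd-rule ssds host
  ceph osd pool create ssd-pool 128 128 replicated ssd-rule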
Afterwards, ceph osd crush tree showed my configuration perfectly, as did the GUI.
I installed some VMs on it with Windows Server and pfSense, and all went well.
As this cluster is still being configured and I was planning to put it into production tomorrow, I shut down all VMs and then shut down all nodes of the cluster.
(Yeah, I try to do some power saving as well.)
Today I started all nodes again and wanted to continue my configuration, but I saw that my VMs wouldn't start.
In the end I can see that all my OSDs are back in the default root, and therefore I'm having trouble with my pools, of course...
You can see my current crush tree here:
osd.0 should be in node1-ssd, osd.1 in node2-ssd, ...
osd.3 & osd.4 should be in node1-hdd, osd.5 & osd.6 in node2-hdd, ...
Can someone help me out: why did the CRUSH placement of my OSDs get lost, and how can I prevent it in the future?
And how can I save the progress I made?
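While searching, I stumbled over the osd crush update on start option, which, if I understand correctly, makes each OSD re-register itself under its host's default location at boot. Would adding this to ceph.conf (/etc/pve/ceph.conf on Proxmox, if I'm not mistaken) on all nodes be the right fix?

  [osd]
  osd crush update on start = false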
Thanks in advance!