Currently I have 3 nodes (16x18TB HDDs each) in a Ceph cluster running normally. Today I went to add 2 more nodes with 12x12TB drives each.
All was fine until I went to add the OSDs into Ceph. I set the norecover flag but forgot to set the noout and norebalance flags.
The cluster failed to serve any data, and all VMs running on the cluster nodes stopped. The OSDs on the new node showed as down and would not start.
I turned off the new node that had the OSDs created on it, and the cluster went back to normal.
Did not setting the noout and norebalance flags really have that much of a performance hit on the cluster?
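For reference, these are the standard cluster-wide flags, set from an admin node with the plain ceph CLI:

ceph osd set norecover      # this is the one I had set
ceph osd set noout          # forgot this one
ceph osd set norebalance    # and this one
ceph osd unset norecover    # how I'd clear a flag again afterwards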
Now my question is: the node with the OSDs created on it is currently off. If I set
osd max backfills = 1
osd recovery max active = 1
and turn the node back on, do you think it will be fine?
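If it helps, the way I'd plan to apply those settings at runtime before powering the node back on is something like this (assuming a Ceph release new enough to have the ceph config interface, Mimic or later; on older releases injectargs would be the fallback):

ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
# fallback on older releases:
ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

Then I'd watch ceph -s while the backfill runs.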