Ceph Cluster Down (Sorta)

jpalcic

New Member
Mar 15, 2023
So currently I have 3 nodes (16x18TB HDDs each, 3x16x18TB total) in a Ceph cluster running normally. Today I went to add 2 more nodes with 12x12TB drives each.

All was fine until I went to add the OSDs into Ceph. I set the norecover flag but forgot to set the noout and norebalance flags.
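(For reference, these are the standard Ceph CLI commands for those flags, run from any node with the admin keyring; nothing here is specific to my setup:

# set the flags before maintenance
ceph osd set norecover
ceph osd set noout
ceph osd set norebalance

# clear them again afterwards
ceph osd unset norecover
ceph osd unset noout
ceph osd unset norebalance
)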

The cluster failed to serve any data, and all VMs currently running on the cluster nodes stopped. The OSDs on the new node showed as offline and would not start.

I turned off the new node that had the OSDs created on it, and the cluster went back to normal.

Did not setting the noout and norebalance flags really have that much of a performance impact on the cluster?

Now my question is: the node with the new OSDs is currently off. If I set
osd max backfills = 1
osd recovery max active = 1

and turn the node back on, do you think it will be fine?
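In case it helps anyone else, this is how I'd apply those two settings cluster-wide before powering the node back on. I'm assuming a Ceph release with the central config database (Mimic or later, which should cover anything current Proxmox VE ships); the value of 1 is just the conservative setting people usually suggest, not something I've benchmarked on this cluster:

# persist the throttles in the cluster config database
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1

# or push them into already-running OSD daemons without a restart
ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'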
