Hello everyone,
I've set up a highly available, hyper-converged Proxmox VE 7.4-3 cluster with Ceph Quincy (17.2.5). It has ten nodes: the first three run monitors, and only the first node runs a Ceph Manager. Each node has two OSDs, and there are two Ceph pools, each backed by one OSD on every node. Virtual machines run on these nodes and use the pools. The cluster had been running smoothly until today, when the first node went down due to a boot disk failure. As a result, the cluster is currently without a Ceph Manager, at least temporarily.
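For context, this is roughly how I've been checking the manager state from one of the surviving monitor nodes (happy to hear if there's a better way):

```
ceph -s        # overall cluster health; the "mgr:" line shows whether any manager is active
ceph mgr stat  # just the manager map (active name and number of standbys)
```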
Is it possible to install the Ceph Manager on another node? Should I do so? If so, what would be the best way to proceed, considering the cluster is in production?
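My rough plan, assuming node2 (one of the remaining monitor nodes) is a sensible place for it, would be something like the following; please tell me if this is the wrong approach for a production cluster:

```
# Run on node2 (or whichever healthy node should host the new manager)
pveceph mgr create    # creates and starts a ceph-mgr daemon named after the local node
ceph -s               # verify the new manager is now listed as active
```

I'd probably also create a second manager on node3 afterwards so there is a standby the next time a node fails, but I'm not sure whether that's recommended here. Thank you!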