We have a 3-node Proxmox cluster running Ceph.
One node went bad, and we reinstalled Proxmox on it, keeping the same IP address.
The old cluster still shows a monitor entry for this bad node.
We now want to delete the stale monitor config and add the monitor back fresh.
How can we do this? Thanks.

These are the errors we see:
monitor filesystem '/var/lib/ceph/mon/ceph-pve21' does not exist on this node (500)
1/3 mons down, quorum pve20,pve22
mon.pve21 (rank 1) addr 192.168.100.21:6789/0 is down (out of quorum)
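A rough sketch of the usual procedure, assuming the reinstalled node is pve21 and the cluster still has quorum with the two healthy monitors (run the removal from a healthy node; verify each step against your own setup before running anything):

```shell
# On a healthy node (pve20 or pve22): check current monitor state
ceph mon stat

# Remove the dead monitor from the cluster's monitor map
ceph mon remove pve21

# Edit /etc/pve/ceph.conf and delete the stale [mon.pve21] section
# (it is a shared file, so one edit applies cluster-wide)

# On the reinstalled pve21: clear any leftover monitor data directory
rm -rf /var/lib/ceph/mon/ceph-pve21

# On pve21: re-create the monitor with the Proxmox tooling
# (on older Proxmox VE releases the command is "pveceph createmon")
pveceph mon create

# Confirm all three monitors are back in quorum
ceph quorum_status --format json-pretty
```

The "monitor filesystem does not exist" (500) error typically means Proxmox's destroy action cannot find `/var/lib/ceph/mon/ceph-pve21` on the reinstalled node, which is why removing the monitor directly via `ceph mon remove` and then cleaning up `ceph.conf` by hand is the common workaround.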
Code:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
bluestore_block_db_size = 161061273600
bluestore_block_wal_size = 161061273600
cluster network = 192.168.100.0/24
fsid = 3d4c74e0-07ac-4bbb-b270-973a7beaa1c9
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
public network = 192.168.100.0/24
[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring
[mds.pve22]
host = pve22
mds standby for name = pve
[mds.pve20]
host = pve20
mds standby for name = pve
[mds.pve21]
host = pve21
mds standby for name = pve
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.pve20]
host = pve20
mon addr = 192.168.100.20:6789
[mon.pve22]
host = pve22
mon addr = 192.168.100.22:6789
[mon.pve21]
host = pve21
mon addr = 192.168.100.21:6789