I run a 3-node cluster with Ceph. Over the weekend, node 3's system disk (an SSD, no RAID) failed. I replaced the disk, removed the node from the cluster, re-added it per the instructions, and all is well - the cluster is complete again.
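For reference, the removal/re-add followed the usual pvecm procedure; the commands below are the generic form rather than a transcript of exactly what I ran, with only my node name filled in.
Code:
# on a surviving node: drop the failed node from the Proxmox cluster
root@smiles1:~# pvecm delnode smiles3
# on the freshly reinstalled node: join it back, pointing at an existing member
root@smiles3:~# pvecm add <IP-of-an-existing-node>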
Now I'm having trouble with Ceph. I removed all the OSDs and the monitor (from the command line) and am now trying to add them back. The problem is that I get:
Code:
root@smiles3:~# pveceph createmon --mon-address 10.15.15.52
monitor address '10.15.15.52:6789' already in use by 'mon.2'
root@smiles3:~# ceph mon remove mon.2
mon.mon.2 does not exist or has already been removed
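If I understand the Ceph tooling correctly, ceph mon remove expects the bare monitor ID (here presumably just 2) and adds the mon. prefix itself, which would explain the doubled mon.mon.2 in the error above. So my guess (untested) is that checking and removing would look more like this:
Code:
# list the monitors the cluster actually has in its monmap
root@smiles3:~# ceph mon dump
# if the old monitor still shows up there, remove it by its bare ID
root@smiles3:~# ceph mon remove 2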
My ceph.conf file looks like this:
Code:
root@smiles3:~# cat /etc/ceph/ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.15.15.0/24
filestore xattr use omap = true
fsid = ab9b66eb-4363-4fca-85dd-e67e47aef05f
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd journal size = 5120
osd pool default min size = 1
public network = 10.15.15.0/24
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.1]
host = smiles2
mon addr = 10.15.15.51:6789
[mon.2]
host = smiles3
mon addr = 10.15.15.52:6789
[mon.0]
host = smiles1
mon addr = 10.15.15.50:6789
What's the best approach for resolving this? I could just remove the [mon.2] section from the config file, but I don't know what repercussions that would have.
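Concretely, the only plan I can come up with is the following (untested); I'd appreciate confirmation that it won't affect the other two monitors before I try it.
Code:
# edit /etc/ceph/ceph.conf and delete the three [mon.2] lines by hand, then:
root@smiles3:~# pveceph createmon --mon-address 10.15.15.52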