Well, after the steps I've taken in this thread, I've noticed that new nodes can't add OSDs.
At first the new nodes could add OSDs, but they showed up as "ghost" OSDs: the creation went through, yet they wouldn't appear in the "OSD" tab.
I've made some changes to ceph.conf:
Code:
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster network = 192.168.0.100/24
# cluster_network = 192.168.0.100/24
fsid = 7fbb4fea-aa7c-4908-122b-3a84aabz8322
keyring = /etc/pve/priv/$cluster.$name.keyring
mon_allow_pool_delete = true
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public network = 192.168.0.100/24
# public_network = 192.168.0.100/24
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mds.pve]
host = pve
mds_standby_for_name = pve
[mds.pve02]
host = pve02
[mon.pve]
host = pve
public_addr = 192.168.0.100
mon_addr = 192.168.0.100:6789
[mon.pve01]
host = pve01
public_addr = 192.168.0.101
mon_addr = 192.168.0.102:6789
[mon.pve02]
host = pve02
public_addr = 192.168.0.102
mon_addr = 192.168.0.102:6789
#[mon.pve03]
# host = pve03
# public_addr = 192.168.0.103
# mon_addr = 192.168.0.103:6789
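The keyring lines above point at /etc/pve/priv/ and /var/lib/ceph/. To see which of those keyring files actually survived my cleanup, I've been checking on each node with something like this (plain listings, paths taken straight from the config above):
Code:
# cluster-wide keyrings stored on the shared pmxcfs filesystem
ls -l /etc/pve/priv/*.keyring
# per-daemon keyrings local to this node
ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/osd/ceph-*/keyring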
And adding an OSD on the new nodes fails with the following error:
Code:
create OSD on /dev/sdd (bluestore)
wiping block device /dev/sdd
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.91948 s, 109 MB/s
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a7953e25-0d6a-48be-a99f-6b5c12eacd5a
stderr: 2023-07-30T19:08:03.956+0300 7f17313b06c0 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
stderr: 2023-07-30T19:08:03.956+0300 7f17313b06c0 -1 AuthRegistry(0x7f172c060610) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
stderr: Error EEXIST: entity osd.13 exists but key does not match
--> RuntimeError: Unable to create a new OSD id
TASK ERROR: command 'ceph-volume lvm create --cluster-fsid 7fbb4fea-aa7c-4908-980b-3a84aabf8321 --data /dev/sdd' failed: exit code 1
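If I read the log correctly, there are two separate problems: the bootstrap keyring is missing at /etc/pve/priv/ceph.client.bootstrap-osd.keyring (so cephx gets disabled for that call), and there is a stale auth entry for osd.13 whose key no longer matches. On one of the working nodes, I believe the current state can be inspected with read-only commands like these:
Code:
# show the bootstrap-osd key that the monitors still hold
ceph auth get client.bootstrap-osd
# show the leftover auth entry that blocks reusing the OSD id
ceph auth get osd.13
# check whether osd.13 is still present in the CRUSH map
ceph osd tree | grep 'osd.13'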
Adding new OSDs on the existing nodes still works fine, though.
I guess that during the partial removal of Ceph I deleted some keyrings, and I have no idea how to fix this without losing all the data. Do you have any ideas how to fix it?
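From what I understand, something along these lines might restore the missing pieces, but I'd rather get confirmation before running it; the keyring path is the one from the task log, and I'm assuming osd.13 really is gone from the cluster:
Code:
# re-export the bootstrap-osd key from the monitors to the path
# Proxmox expects on the shared /etc/pve filesystem
ceph auth get client.bootstrap-osd -o /etc/pve/priv/ceph.client.bootstrap-osd.keyring
# drop the stale auth entry for the removed osd.13 so a new OSD
# can take that id (only if osd.13 is really gone!)
ceph auth del osd.13
Would that be safe to run on a cluster that still holds data?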