While removing a node from the cluster and adding a new one, I stupidly created a new Ceph cluster, which somehow affected all nodes.
rbd and ceph.conf were wiped out in /etc/ceph. I have the FSID. The OSDs are encrypted, but the OSDs themselves are intact.
/var/lib/ceph is still there, but a lot of it was overwritten.
I have the original admin key, the original keyring for the monitors and the kv_backend, and the original MDS keyring. The OSD lockbox keys seem to be gone or overwritten. I also have the original radosgw keyring.
Is there any way to recover from this? I have backups, but not of recent data. If not, it's no big loss, just a huge waste of my time to manually add the data again.
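Assuming the saved admin key is still valid, my thinking is that the keyring file itself could be rebuilt with ceph-authtool (the key value below is a placeholder for my saved key, and the caps are just the usual admin defaults as far as I know):

Code:
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
    --name client.admin --add-key '<original admin key>' \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'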
It will not let me create new monitors, and when I run ceph -s I get the following:
Code:
unable to get monitor info from DNS SRV with service name: ceph-mon
[errno 2] error connecting to the cluster
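From what I understand, that SRV error just means the client couldn't find a usable monitor address in ceph.conf and fell back to a DNS lookup. If that's right, pointing the client at the monitor directly should at least tell me whether anything is still listening (address taken from my config below):

Code:
ceph -m 172.16.2.1:6789 -s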
It's likely because the newly generated Ceph setup wasn't finished before I hard restarted the node to prevent further data loss. I did try adding the monitor address to the new ceph.conf, though:
Code:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 172.16.3.1/24
fsid = lolnothere
mon allow pool delete = true
osd pool default min size = 1
osd pool default size = 1
public network = 172.16.2.1/24
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mon]
host = cl-01
mon addr = 172.16.2.1:6789
[ceph-mon]
host = cl-01
mon addr = 172.16.2.1:6789
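For reference, my understanding is that a stock ceph.conf names the per-monitor section [mon.<id>] rather than [ceph-mon], roughly like this (the fsid placeholder stands in for my original FSID):

Code:
[global]
fsid = <original fsid>
mon host = 172.16.2.1

[mon.cl-01]
host = cl-01
mon addr = 172.16.2.1:6789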