FSID CLUSTER CEPH

May 16, 2025
Hi,

Big issue here: I've broken my Ceph cluster, so I tried to rebuild it, but that gave me a new FSID (7f541....).
Now my OSDs don't work, and neither do my VMs.
Can I build a new cluster with the OLD FSID (2a1b2d..) and keep all the data?
Proxmox VE 8.3.3
3 nodes: PVE1-2-3
18 SSDs

some data:

root@inf-esx-pve1:~# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME               STATUS  REWEIGHT  PRI-AFF
-1         22.35599  root default
-2         22.35599      host inf-esx-pve1
 0    ssd   3.72600          osd.0            down    1.00000  1.00000
 4    ssd   3.72600          osd.4            down    1.00000  1.00000
 5    ssd   3.72600          osd.5            down    1.00000  1.00000
 6    ssd   3.72600          osd.6            down    1.00000  1.00000
 7    ssd   3.72600          osd.7            down    1.00000  1.00000
14    ssd   3.72600          osd.14           down          0  1.00000
root@inf-esx-pve1:~# ceph-volume lvm list


====== osd.0 =======

[block] /dev/ceph-c47b4197-b326-44a8-b61a-2909c0f5afeb/osd-block-a2b1d2a1-5a30-4675-83ef-1bdcd9a98f72

block device /dev/ceph-c47b4197-b326-44a8-b61a-2909c0f5afeb/osd-block-a2b1d2a1-5a30-4675-83ef-1bdcd9a98f72
block uuid 5wsMl3-3C2h-znwe-EKUe-47of-mV2Z-8yU55K
cephx lockbox secret
cluster fsid 7f541196-8b08-4ea9-bad9-d8fd01494dfe
cluster name ceph
crush device class
encrypted 0
osd fsid a2b1d2a1-5a30-4675-83ef-1bdcd9a98f72
osd id 0
osdspec affinity
type block
vdo 0
devices /dev/sdh

====== osd.14 ======

[block] /dev/ceph-54872c8a-53ed-46cb-998f-b77fd68f0e9f/osd-block-b68cadc7-070a-4151-8553-0aa82b6991c7

block device /dev/ceph-54872c8a-53ed-46cb-998f-b77fd68f0e9f/osd-block-b68cadc7-070a-4151-8553-0aa82b6991c7
block uuid P7uVIK-VDZh-IKh4-UtnL-7iEk-MY6F-ANXpYb
cephx lockbox secret
cluster fsid 7f541196-8b08-4ea9-bad9-d8fd01494dfe
cluster name ceph
crush device class
encrypted 0
osd fsid b68cadc7-070a-4151-8553-0aa82b6991c7
osd id 14
osdspec affinity
type block
vdo 0
devices /dev/sdl

====== osd.4 =======

[block] /dev/ceph-e4d51ba7-54f5-4db0-b9d2-2f02de289520/osd-block-799f428b-02bb-4d33-8e53-c0cb5b9c8ccc

block device /dev/ceph-e4d51ba7-54f5-4db0-b9d2-2f02de289520/osd-block-799f428b-02bb-4d33-8e53-c0cb5b9c8ccc
block uuid HYBKFh-AeIr-T6ls-suMi-TJUh-cX9g-nASp4Q
cephx lockbox secret
cluster fsid 7f541196-8b08-4ea9-bad9-d8fd01494dfe
cluster name ceph
crush device class
encrypted 0
osd fsid 799f428b-02bb-4d33-8e53-c0cb5b9c8ccc
osd id 4
osdspec affinity
type block
vdo 0
devices /dev/sdc

====== osd.5 =======

[block] /dev/ceph-f1c4d362-3fd7-4577-a134-30720b9364b9/osd-block-94409030-b9ca-491b-902f-04a2f3f115e0

block device /dev/ceph-f1c4d362-3fd7-4577-a134-30720b9364b9/osd-block-94409030-b9ca-491b-902f-04a2f3f115e0
block uuid RNevoZ-r2KP-AITr-Wpr9-nCEr-WjoS-TI5VnZ
cephx lockbox secret
cluster fsid 7f541196-8b08-4ea9-bad9-d8fd01494dfe
cluster name ceph
crush device class
encrypted 0
osd fsid 94409030-b9ca-491b-902f-04a2f3f115e0
osd id 5
osdspec affinity
type block
vdo 0
devices /dev/sdd

====== osd.6 =======

[block] /dev/ceph-b34c51af-ba5e-4e44-b9b0-a256ec1bd5f8/osd-block-b5b7bb79-99ae-45b9-baa6-907248875f3b

block device /dev/ceph-b34c51af-ba5e-4e44-b9b0-a256ec1bd5f8/osd-block-b5b7bb79-99ae-45b9-baa6-907248875f3b
block uuid D2QXaw-z1Gq-b6sj-M5Lq-XiBl-CDCq-5fhIHT
cephx lockbox secret
cluster fsid 7f541196-8b08-4ea9-bad9-d8fd01494dfe
cluster name ceph
crush device class
encrypted 0
osd fsid b5b7bb79-99ae-45b9-baa6-907248875f3b
osd id 6
osdspec affinity
type block
vdo 0
devices /dev/sdj

====== osd.7 =======

[block] /dev/ceph-2ff7ce69-11d4-4bf4-8c57-5a8742e968bd/osd-block-a0cf147b-c742-480b-b635-9c0efe04c948

block device /dev/ceph-2ff7ce69-11d4-4bf4-8c57-5a8742e968bd/osd-block-a0cf147b-c742-480b-b635-9c0efe04c948
block uuid GQtxPV-rjbk-QP4m-1uIP-uxsy-y6DD-maIxnj
cephx lockbox secret
cluster fsid 7f541196-8b08-4ea9-bad9-d8fd01494dfe
cluster name ceph
crush device class
encrypted 0
osd fsid a0cf147b-c742-480b-b635-9c0efe04c948
osd id 7
osdspec affinity
type block
vdo 0
devices /dev/sdk
root@inf-esx-pve1:~# ceph -s
  cluster:
    id:     7f541196-8b08-4ea9-bad9-d8fd01494dfe
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            1 monitors have not enabled msgr2
            2 slow ops, oldest one blocked for 18572 sec, mon.inf-esx-pve1 has slow ops

  services:
    mon: 1 daemons, quorum inf-esx-pve1 (age 5h)
    mgr: inf-esx-pve1 (active, since 6h)
    osd: 6 osds: 0 up, 5 in (since 5h)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
 
Can I build a new cluster with the OLD FSID (2a1b2d..) and keep all the data?
This is not possible as the CephX keys for the OSDs and all the other internal config will be missing.
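If you want to confirm what the disks themselves still belong to, the BlueStore label on each OSD records the cluster fsid it was created for and can be read while the OSD is down. A minimal check, reusing the osd.0 device path from your ceph-volume listing above (run as root on that node; the output should contain a ceph_fsid field):

# Read the on-disk BlueStore label of osd.0 (device path taken from the listing above).
ceph-bluestore-tool show-label \
    --dev /dev/ceph-c47b4197-b326-44a8-b61a-2909c0f5afeb/osd-block-a2b1d2a1-5a30-4675-83ef-1bdcd9a98f72

# Compare it with the fsid the currently running cluster uses.
ceph fsid
grep fsid /etc/pve/ceph.conf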

What happened to your cluster?

If you still have all the OSDs, you could try to restore the cluster map from the copies stored on them and start a new MON with that.
Read the Ceph documentation for more details.
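For reference, the procedure is described in the Ceph documentation under Troubleshooting Monitors, "Recovery using OSDs": the mon store is rebuilt from the cluster map copies every OSD keeps. A rough, condensed sketch follows; it is not something to paste blindly. The keyring path is a placeholder, the mon name inf-esx-pve1 is taken from your ceph -s output, all OSDs and the MON must be stopped first, and the collection step has to be run on every host that holds OSDs with the results merged into one directory (e.g. via rsync). Back up the existing mon store before replacing anything.

ms=/root/mon-store
mkdir -p "$ms"

# 1. On each host, pull the cluster map copies out of every (stopped) OSD.
for osd in /var/lib/ceph/osd/ceph-*; do
    ceph-objectstore-tool --data-path "$osd" --no-mon-config \
        --op update-mon-db --mon-store-path "$ms"
done
# (repeat on the other nodes and merge everything into the same $ms directory)

# 2. The keyring used for the rebuild must hold the mon. and client.admin keys
#    with full caps (path is a placeholder; on Proxmox the keys live under /etc/pve/priv/).
ceph-authtool /path/to/admin.keyring -n mon. --cap mon 'allow *'
ceph-authtool /path/to/admin.keyring -n client.admin \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'

# 3. Rebuild a monitor store from the collected maps.
ceph-monstore-tool "$ms" rebuild -- --keyring /path/to/admin.keyring --mon-ids inf-esx-pve1

# 4. Keep the broken store, then move the rebuilt one into place and fix ownership.
mv /var/lib/ceph/mon/ceph-inf-esx-pve1/store.db /var/lib/ceph/mon/ceph-inf-esx-pve1/store.db.corrupted
cp -r "$ms/store.db" /var/lib/ceph/mon/ceph-inf-esx-pve1/store.db
chown -R ceph:ceph /var/lib/ceph/mon/ceph-inf-esx-pve1/store.db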