Unable to add Ceph monitor after a host crash

SlimTom

New Member
Sep 7, 2023
Hi,
I had a configuration of 3 Proxmox PVE hosts running in an HA cluster with Ceph. It was working fine until one of the PVE hosts died. I removed it from the cluster and from Ceph.
I reinstalled PVE with the same name and IP and added it to the cluster and to Ceph, but I have a problem adding it as a Ceph monitor. It does not show up in the list of monitors, yet when I try to add it I get an error that the monitor already exists. See pictures. (I already checked /etc/ceph/ceph.conf.)


Thanks in advance
 

Attachments

  • Screenshot_2.png (55.7 KB)
Please post the output of the following commands here:

Code:
ceph -s
ceph mon dump
cat /etc/pve/ceph.conf

You already have a Mgr on the node again; have you already set that up again?
 
Code:
root@prox3:~# ceph -s
  cluster:
    id:     253c191e-c4f9-40e8-a35d-997c565bf106
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum prox1,prox2 (age 22h)
    mgr: prox2(active, since 22h), standbys: prox1, prox3
    osd: 3 osds: 3 up (since 107m), 3 in (since 42h)

  data:
    pools:   2 pools, 33 pgs
    objects: 8.21k objects, 32 GiB
    usage:   98 GiB used, 756 GiB / 854 GiB avail
    pgs:     33 active+clean


root@prox3:~# ceph mon dump
epoch 4
fsid 253c191e-c4f9-40e8-a35d-997c565bf106
last_changed 2024-01-03T23:17:59.344150+0100
created 2023-12-28T20:22:14.868041+0100
min_mon_release 18 (reef)
election_strategy: 1
0: [v2:192.168.12.31:3300/0,v1:192.168.12.31:6789/0] mon.prox1
1: [v2:192.168.12.32:3300/0,v1:192.168.12.32:6789/0] mon.prox2
dumped monmap epoch 4


root@prox3:~# cat /etc/pve/ceph.conf
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 192.168.12.31/24
         fsid = 253c191e-c4f9-40e8-a35d-997c565bf106
         mon_allow_pool_delete = true
         mon_host = 192.168.12.31 192.168.12.32 192.168.12.33
         ms_bind_ipv4 = true
         ms_bind_ipv6 = false
         osd_pool_default_min_size = 2
         osd_pool_default_size = 3
         public_network = 192.168.12.31/24

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.prox1]
         public_addr = 192.168.12.31

[mon.prox2]
         public_addr = 192.168.12.32
 
..oh, I see it now!

mon_host = 192.168.12.31 192.168.12.32 192.168.12.33

The old monitor address for prox3 (192.168.12.33) is still listed in mon_host, which is why creating the monitor fails with "monitor exists".
Thank you!
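
For anyone finding this thread later, a minimal sketch of the fix, assuming the stale 192.168.12.33 entry in mon_host is the only leftover from the removed host: edit /etc/pve/ceph.conf, delete the old address from the mon_host line, and then create the monitor again from the reinstalled node (prox3 in this thread).

Code:
# Assumption: the stale 192.168.12.33 entry in mon_host is the only leftover.
# Remove 192.168.12.33 from the mon_host line in the cluster-wide config:
nano /etc/pve/ceph.conf
# Recreate the monitor on the reinstalled node (run on prox3):
pveceph mon create
# Verify that the new monitor joined the quorum:
ceph mon dump
ceph -s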
 
