Can't Remove or Add Mons on Ceph

josch12

Well-Known Member
Oct 20, 2017
Hi Guys,

Yesterday I added 2 new Ceph nodes and now want to delete the mons on the old nodes. I deleted all mons except one, and now I have the problem that I can't access Ceph via the GUI anymore.

In the terminal, when I enter "ceph status", I don't get an answer.
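
From what I understand, when the monitors have lost quorum the ceph CLI just blocks forever, which is probably why I get no answer. A couple of commands that avoid the hang, assuming the surviving mon's id is prox20 (see the ceph.conf below):

# give the CLI a timeout instead of letting it hang
ceph --connect-timeout 10 status

# query the surviving mon directly over its local admin socket (works without quorum)
ceph daemon mon.prox20 mon_status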

When I try to add a mon via the web GUI, I get this error: "Could not connect to ceph cluster despite configured monitors (500)"

So I tried to add a mon manually via the terminal, following this manual: https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/#adding-a-monitor-manual

At the step "ceph auth get mon. -o {tmp}/{key-filename}" I don't get an answer either, and canceled it after around 5 minutes.


I've spent the past few hours reading and I'm already pretty desperate. Is it possible to delete and re-create all mons without losing the VPS data?
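
One recovery path I found in the Ceph docs, for the case where the last mon can't form quorum because the monmap still lists the deleted mons, is to edit the monmap on the surviving node directly. A sketch of what I understand the procedure to be, assuming the surviving mon is prox20; OLD_MON is just a placeholder for whatever deleted monitor still shows up in the map:

# stop the surviving monitor before touching its store
systemctl stop ceph-mon@prox20

# extract the current monmap from the mon's store
ceph-mon -i prox20 --extract-monmap /tmp/monmap

# list the monitors the map still contains
monmaptool /tmp/monmap --print

# remove every monitor that no longer exists (OLD_MON is a placeholder name)
monmaptool /tmp/monmap --rm OLD_MON

# inject the cleaned map back and restart the mon
ceph-mon -i prox20 --inject-monmap /tmp/monmap
systemctl start ceph-mon@prox20

I haven't run this yet because I'm not sure it applies to my situation.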

--

cat /etc/ceph/ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.0.0.0/24
fsid = c2d06e78-58d7-4759-96c1-5b5c05080f73
mon_allow_pool_delete = true
mon_host = 91.192.10.10
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 91.192.10.10/24

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.prox20]
public_addr = 91.192.10.10


--

pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 14.2.16-pve1
ceph-fuse: 14.2.16-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.6-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-2
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 

Attachments

  • Screenshot_2021-03-28 prox20 - Proxmox Virtual Environment.png
You had 3 mons in the cluster but wanted to remove the older nodes on which they were running?

You did add 2 more mons so had a total of 5 mons at some point? Did they show up in the GUI or the ceph -s output?

Or did you delete the 2 old mons before you added the new ones?

Whatever you do, do not delete the last mon you have now!


Anything like authentication problems or such in the logs of the still-running monitor? /var/log/ceph/ceph-mon...
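
For example, something along these lines, assuming the remaining mon is prox20:

# follow the monitor's log while reproducing the problem
tail -f /var/log/ceph/ceph-mon.prox20.log

# or search recent entries for auth/quorum related messages
grep -iE 'auth|quorum|probing' /var/log/ceph/ceph-mon.prox20.log | tail -n 50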
 
