Upgraded Ceph Monitor (from Luminous to Nautilus) not starting

mfa2004 · Renowned Member · Joined Feb 20, 2009
I upgraded a Proxmox 5.4 cluster with Ceph 12.2 to Nautilus using the instructions provided. It was basically uneventful.

However, after restarting the nodes, I found that the monitor process would not start. I even tried running it manually:

/usr/bin/ceph-mon --debug_mon 10 -f --cluster ceph --id backup2 --setuser ceph --setgroup ceph

on the node "backup2", but it simply will not start. I get the following error:

global_init: error reading config file

The ceph.conf file is as follows:

[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.10.0/24
debug_asok = 0/0
debug_auth = 0/0
debug_buffer = 0/0
debug_context = 0/0
debug_crush = 0/0
debug_filestore = 0/0
debug_finisher = 0/0
debug_heartbeatmap = 0/0
debug_journal = 0/0
debug_journaler = 0/0
debug_lockdep = 0/0
debug_monc = 0/0
debug_ms = 0/0
debug_objclass = 0/0
debug_optracker = 0/0
debug_osd = 0/0
debug_perfcounter = 0/0
debug_throttle = 0/0
debug_timer = 0/0
debug_tp = 0/0
fsid = 16ca36fa-fd7f-4c25-b91e-a6cbb5521579
leveldb_block_size = 65536
leveldb_cache_size = 536870912
mon allow pool delete = true
ms crc data = true
ms crc header = true
ms type = simple
ms_dispatch_throttle_bytes = 0
ms_tcp_nodelay = true
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
osd_disk_thread_ioprio_class = idle
osd_disk_thread_ioprio_priority = 7
osd_max_trimming_pgs = 1
osd_op_num_shards = 10
osd_op_num_threads_per_shard = 2
osd_op_threads = 5
osd_pg_max_concurrent_snap_trims = 1
osd_snap_trim_cost = 4194304
osd_snap_trim_priority = 1
osd_snap_trim_sleep = 0.1
public network = 10.10.10.0/24
mon_host = [10.10.10.132,10.10.10.133,10.10.10.134]

[client]
rbd cache max dirty = 134217728
rbd cache max dirty age = 2
rbd cache size = 268435456
rbd cache target dirty = 134217728
rbd cache writethrough until flush = true
keyring = /etc/pve/priv/$cluster.$name.keyring

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
osd_enable_op_tracker = false
throttler perf counter = false

[mon.backup2]
host = backup2
mon addr = 10.10.10.132

[mon.backup3]
host = backup3
mon addr = 10.10.10.133

[mon.backup4]
host = backup4
mon addr = 10.10.10.134

I am hoping that the problem is just a minor configuration issue.

Thanks for the assistance in advance!
 
mon_host = [10.10.10.132,10.10.10.133,10.10.10.134]
That looks odd to me, try:
Code:
mon_host = 10.10.10.132 10.10.10.133 10.10.10.134
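For reference, ceph.conf accepts the monitor addresses as a plain list; Nautilus additionally understands per-monitor msgr2/msgr1 address pairs in brackets. A sketch using the addresses from this thread (ports are the Ceph defaults):

```ini
mon_host = 10.10.10.132 10.10.10.133 10.10.10.134
# or, spelled out with messenger-v2 and v1 addresses (Nautilus syntax):
mon_host = [v2:10.10.10.132:3300,v1:10.10.10.132:6789] [v2:10.10.10.133:3300,v1:10.10.10.133:6789] [v2:10.10.10.134:3300,v1:10.10.10.134:6789]
```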
 
Thanks, Alwin. I tried that, but I still could not get the monitor to start. I then went through the configuration line by line, commenting out entries until the monitors started. Ultimately, this is the ONLY line I needed to comment out to make it work:

ms type = simple

I am documenting it here to help others who might encounter the same issue.
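For context: the "simple" messenger was deprecated and eventually removed in later Ceph releases, so forcing `ms type = simple` can keep Nautilus daemons from starting at all. A minimal sketch of the fix, demonstrated on a stand-in file (the real config is /etc/ceph/ceph.conf, which on Proxmox is a symlink to /etc/pve/ceph.conf; back it up first):

```shell
# Stand-in for /etc/ceph/ceph.conf so the sketch is safe to run anywhere.
CONF=/tmp/ceph.conf.demo
printf '[global]\nms type = simple\n' > "$CONF"

# Keep a backup, then comment out the obsolete messenger setting.
cp "$CONF" "$CONF.bak"
sed -i 's/^[[:space:]]*ms type = simple/#&/' "$CONF"

grep 'ms type' "$CONF"    # the line is now commented out
```

After editing the real file, restart the monitor (e.g. `systemctl restart ceph-mon@backup2`) and check that it stays up.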

Thanks everyone!
 
Hi, I get the same error message

"global_init: error reading config file."

when I run the "ceph-mon ... " command as user ceph (as recommended in the manual installation instructions).
Could this be related to the ownership of the ceph.conf file?

181729 1 -rw-r----- 1 root www-data 517 Jun 11 14:56 ceph.conf

Running the same command as root doesn't really help, as some files then get created with the wrong ownership.
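To make the check reproducible, here is a sketch on a stand-in file (the real config lives at /etc/pve/ceph.conf). With mode 0640 and owner root:www-data, the ceph user can only read the file through the group or an ACL:

```shell
# Stand-in for /etc/pve/ceph.conf so the sketch is safe to run anywhere.
CONF=/tmp/ceph.conf.permtest
printf '[global]\n' > "$CONF"
chmod 640 "$CONF"

stat -c '%U:%G %a' "$CONF"    # prints owner, group, and octal mode
```

On the live system, `sudo -u ceph test -r /etc/pve/ceph.conf` would tell you directly whether the ceph user can read the file.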

Thanks for any suggestions
 
181729 1 -rw-r----- 1 root www-data 517 Jun 11 14:56 ceph.conf
It works here with the same permissions - the config is read by root before the daemon drops privileges to the ceph user.

Check the logs for the reason the startup is failing.
 
