Ceph: systemctl start ceph-mon fails after moving to Jewel

gdi2k

I followed the tutorial to move Ceph from Hammer to Jewel here:
https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel

All the steps went OK aside from starting the monitor daemon:
Code:
root@smiles2:~# systemctl start ceph-mon@ceph-mon.1.1500178214.095217502.service
root@smiles2:~# systemctl status ceph-mon@ceph-mon.1.1500178214.095217502.service
● ceph-mon@ceph-mon.1.1500178214.095217502.service - Ceph cluster monitor daemon
   Loaded: loaded (/lib/systemd/system/ceph-mon@.service; enabled)
  Drop-In: /lib/systemd/system/ceph-mon@.service.d
           └─ceph-after-pve-cluster.conf
   Active: failed (Result: start-limit) since Sun 2017-07-16 18:52:18 +08; 30s ago
  Process: 3057 ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
 Main PID: 3057 (code=exited, status=1/FAILURE)

Jul 16 18:52:08 smiles2 systemd[1]: Unit ceph-mon@ceph-mon.1.1500178214.095217502.service entered failed state.
Jul 16 18:52:18 smiles2 systemd[1]: ceph-mon@ceph-mon.1.1500178214.095217502.service holdoff time over, sched...start.
Jul 16 18:52:18 smiles2 systemd[1]: Stopping Ceph cluster monitor daemon...
Jul 16 18:52:18 smiles2 systemd[1]: Starting Ceph cluster monitor daemon...
Jul 16 18:52:18 smiles2 systemd[1]: ceph-mon@ceph-mon.1.1500178214.095217502.service start request repeated t...start.
Jul 16 18:52:18 smiles2 systemd[1]: Failed to start Ceph cluster monitor daemon.
Jul 16 18:52:18 smiles2 systemd[1]: Unit ceph-mon@ceph-mon.1.1500178214.095217502.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.

I was able to start it from the Web GUI and continue, and eventually complete the upgrade (including the ceph osd crush tunables step). But when a node is rebooted, the monitor must be started from the GUI manually each time. This affects all 3 nodes that I upgraded.
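(For reference, the actual failure reason behind the start-limit state can be pulled from the journal for that unit; nothing beyond the unit name shown above is assumed here.)

Code:
# show the last log lines for the failing unit, untruncated
journalctl -u ceph-mon@ceph-mon.1.1500178214.095217502.service -n 50 --no-pager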

Any ideas?
 
Hi,

please do what the tutorial tells you.
You are starting the mon with the old naming scheme.

From the upgrade docs:
Code:
systemctl start ceph-mon@<MON-ID>.service
systemctl enable ceph-mon@<MON-ID>.service


"ceph-mon.1.1500178214.095217502" is not the mon id
"1" is the mon id.
 
Ah OK, I had misunderstood the docs (I had tab-completed the unit name and got the full unique instance name instead of the <MON-ID>). It works perfectly with:

Code:
root@smiles1:~# systemctl start ceph-mon@0.service
root@smiles1:~# systemctl status ceph-mon@0.service
● ceph-mon@0.service - Ceph cluster monitor daemon
   Loaded: loaded (/lib/systemd/system/ceph-mon@.service; enabled)
  Drop-In: /lib/systemd/system/ceph-mon@.service.d
           └─ceph-after-pve-cluster.conf
   Active: active (running) since Mon 2017-07-17 15:59:54 +08; 7s ago
 Main PID: 5951 (ceph-mon)
   CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@0.service
           └─5951 /usr/bin/ceph-mon -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

Jul 17 15:59:54 smiles1 systemd[1]: Started Ceph cluster monitor daemon.
Jul 17 15:59:54 smiles1 ceph-mon[5951]: starting mon.0 rank 0 at 10.15.15.50:6789/0 mon_data /var/lib/ceph/mon...ef05f
Hint: Some lines were ellipsized, use -l to show in full.
root@smiles1:~# systemctl enable ceph-mon@0.service
Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@0.service to /lib/systemd/system/ceph-mon@.service.

Many thanks for your help!
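
For completeness, the same two commands have to be run on each of the three nodes with that node's own mon id (only smiles1's id 0 is shown above; the other ids below are placeholders):

Code:
# on smiles1 (done above)
systemctl start ceph-mon@0.service && systemctl enable ceph-mon@0.service
# on each remaining node, substitute that node's own mon id
systemctl start ceph-mon@<MON-ID>.service && systemctl enable ceph-mon@<MON-ID>.service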
 