Ceph config broken

joshbgosh10592

New Member
Jan 29, 2019
25
0
1
Hi! So, I started to install and configure Ceph before I fully knew how I wanted to have it configured (mistake #1).
Now, after following https://forum.proxmox.com/threads/changing-ceph-public-network.33083/ I've made some changes, but they're now incorrect to the point where "monmaptool --print tmpfile" returns "couldn't open tmpfile: (2) No such file or directory". Attempting to edit in the WebUI returns "(500) got timeout", and starting the monitor reports success, but on webpage reload quorum shows "no".
I have the IP addresses I want now and that config will not change, but how do I get this back up and running? I have nothing in the pools yet, as this is the first time I've finally had the separate subnet working...

Code:
root@PVE-1:~# monmaptool --print tmpfile
monmaptool: monmap file tmpfile
epoch 9
fsid 7770d4e7-3305-4ca8-b780-508825023a70
last_changed 2019-05-07 21:52:22.105900
created 2019-04-24 22:05:40.902816
0: 10.9.220.1:6789/0 mon.PVE-1
1: 10.9.220.2:6789/0 mon.PVE-2
2: 10.9.220.49:6789/0 mon.PVE-Witness
3: 172.16.0.1:6789/0 mon.0
4: 172.16.0.2:6789/0 mon.1
5: 172.16.0.254:6789/0 mon.2
/etc/pve/ceph.conf and /etc/ceph/ceph.conf show:

Code:
[global]
         auth client required = cephx
         auth cluster required = cephx
         auth service required = cephx
         cluster network = 172.16.0.0/24
         fsid = 7770d4e7-3305-4ca8-b780-508825023a70
         keyring = /etc/pve/priv/$cluster.$name.keyring
         mon allow pool delete = true
         osd journal size = 5120
         osd pool default min size = 2
         osd pool default size = 2
         public network = 172.16.0.0/24

[osd]
         keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.PVE-Witness]
         host = PVE-Witness
         mon addr = 172.16.0.254:6789

[mon.PVE-1]
         host = PVE-1
         mon addr = 172.16.0.1:6789
Edit: I just did the most recent Proxmox update on all three nodes, and "ceph mon getmap -o tmpfile" now hangs for a while again and eventually returns "error (110) connection timed out. [errno 110] error connecting to the cluster".
 

Alwin

Proxmox Staff Member
Staff member
Aug 1, 2017
3,954
369
88
That's old, and it's not needed for changing the IP of existing MONs.

0: 10.9.220.1:6789/0 mon.PVE-1
1: 10.9.220.2:6789/0 mon.PVE-2
2: 10.9.220.49:6789/0 mon.PVE-Witness
3: 172.16.0.1:6789/0 mon.0
4: 172.16.0.2:6789/0 mon.1
5: 172.16.0.254:6789/0 mon.2
You ended up with 6x MONs in the DB, and MONs 3-5 are the ones with the new IPs (but those daemons don't exist).

Stop all Ceph services and remove the non-existing MONs. Then it may be possible to restart the existing MONs with the new IPs.
http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#removing-monitors
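Following the linked procedure, cleaning the ghost entries out of the monmap might look like the sketch below. This is an assumption-laden outline, not the exact commands from the docs: the entry names `0`, `1`, `2` come from the monmap printout above, all ceph-mon daemons must be stopped first, and which entries to remove depends on which monitor daemons actually exist on disk.

```shell
# Stop every monitor on every node before touching the map
systemctl stop ceph-mon.target

# Extract the current monmap from a (stopped) monitor's store
ceph-mon -i PVE-1 --extract-monmap /tmp/monmap

# Remove the monitors that have no backing daemon
# (names "0", "1", "2" are from the printout above)
monmaptool --rm 0 --rm 1 --rm 2 /tmp/monmap

# Verify that only the real MONs remain
monmaptool --print /tmp/monmap

# Inject the cleaned map back into each surviving monitor,
# then start the monitors again
ceph-mon -i PVE-1 --inject-monmap /tmp/monmap
```

The same inject step has to be repeated on each node that still runs a monitor before any of them are started.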
 

joshbgosh10592

Thank you for that. I've tried following the directions, but I don't know where my "monmap" file is. Any time I've looked at the config, it's just been "tmpfile", and "find / -name monfile" returns only: find: ‘/var/lib/lxcfs/cgroup/blkio/system.slice/pvesr.service’: No such file or directory

Any advice?
 

joshbgosh10592

I don't see anything in there showing where Proxmox originally created it (I used the WebUI to install Ceph).
Trying to remove a monitor (manual) fails with a timeout, and "removing monitors for an unhealthy cluster" fails at step 3 on PVE-1 with:
Code:
root@PVE-1:~# ceph-mon -i PVE-2 --extract-monmap /tmp/monmap
2019-05-14 15:20:54.657026 7fa9d500c100 -1 monitor data directory at '/var/lib/ceph/mon/ceph-PVE-2' does not exist: have you run 'mkfs'?
root@PVE-1:~# ceph-mon -i PVE-Witness --extract-monmap /tmp/monmap
2019-05-14 15:21:01.446710 7fa04daf2100 -1 monitor data directory at '/var/lib/ceph/mon/ceph-PVE-Witness' does not exist: have you run 'mkfs'?
root@PVE-1:~# ceph-mon -i PVE-1 --extract-monmap /tmp/monmap
2019-05-14 15:21:04.620638 7f076c601100 -1 rocksdb: IO error: lock /var/lib/ceph/mon/ceph-PVE-1/store.db/LOCK: Resource temporarily unavailable
2019-05-14 15:21:04.620656 7f076c601100 -1 error opening mon data directory at '/var/lib/ceph/mon/ceph-PVE-1': (22) Invalid argument
root@PVE-1:~#
From PVE-2:
Code:
root@PVE-2:~# ceph-mon -i PVE-2 --extract-monmap tmpfile
2019-05-14 15:22:02.020778 7fc98bbd5100 -1 monitor data directory at '/var/lib/ceph/mon/ceph-PVE-2' does not exist: have you run 'mkfs'?
root@PVE-2:~# ceph-mon -i PVE-1 --extract-monmap tmpfile
2019-05-14 15:22:08.789850 7ff537d94100 -1 monitor data directory at '/var/lib/ceph/mon/ceph-PVE-1' does not exist: have you run 'mkfs'?
root@PVE-2:~# ceph-mon -i PVE-Witness --extract-monmap tmpfile
2019-05-14 15:22:13.856843 7fe871047100 -1 monitor data directory at '/var/lib/ceph/mon/ceph-PVE-Witness' does not exist: have you run 'mkfs'?
And from PVE-Witness:
Code:
root@PVE-Witness:~# ceph-mon -i PVE-1 --extract-monmap tmpfile
2019-05-14 15:24:28.759430 7fbac43ff100 -1 monitor data directory at '/var/lib/ceph/mon/ceph-PVE-1' does not exist: have you run 'mkfs'?
root@PVE-Witness:~# ceph-mon -i PVE-2 --extract-monmap tmpfile
2019-05-14 15:24:37.987937 7f1ef25e0100 -1 monitor data directory at '/var/lib/ceph/mon/ceph-PVE-2' does not exist: have you run 'mkfs'?
root@PVE-Witness:~# ceph-mon -i PVE-Witness --extract-monmap tmpfile
2019-05-14 15:24:44.132602 7f85785dd100 -1 rocksdb: IO error: lock /var/lib/ceph/mon/ceph-PVE-Witness/store.db/LOCK: Resource temporarily unavailable
2019-05-14 15:24:44.132611 7f85785dd100 -1 error opening mon data directory at '/var/lib/ceph/mon/ceph-PVE-Witness': (22) Invalid argument
I've used both /tmp/monmap and just tmpfile, both return the same results as above. I'm totally fine with completely redoing the config if needed, since I have nothing stored in my Ceph cluster yet. But I do have plenty of VMs that I need to hang onto, so I can't just reinstall Proxmox on the three nodes.
 

joshbgosh10592

So I was finally able to remove the monitors, so now "monmaptool --print tmpfile" returns with no monitors. Even though the WebUI still shows PVE-1 and PVE-Witness...
However, trying "ceph-mon -i PVE-1 --inject-monmap tmpfile" returns "IO error: lock /var/lib/ceph/mon/ceph-PVE-1/store.db/LOCK: Resource temporarily unavailable"...
"ceph mon getmap -o tmpfile" still returns a timeout. I'm assuming the .db file is screwed. Can that be recreated? If so, how?
 

Alwin

So I was finally able to remove the monitors, so now "monmaptool --print tmpfile" returns with no monitors. Even though the WebUI still shows PVE-1 and PVE-Witness...
These are read from the ceph.conf file, irrespective of their state.

However, trying "ceph-mon -i PVE-1 --inject-monmap tmpfile" returns "IO error: lock /var/lib/ceph/mon/ceph-PVE-1/store.db/LOCK: Resource temporarily unavailable"...
Once the service is running, the db is locked.
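Stopping the monitor daemon through systemd before injecting releases the RocksDB lock. A minimal sketch, assuming the standard per-daemon unit naming (`ceph-mon@<node>`):

```shell
# stop the monitor so nothing holds store.db
systemctl stop ceph-mon@PVE-1

# confirm it is "inactive (dead)" before proceeding
systemctl status ceph-mon@PVE-1

# now the inject can open the store
ceph-mon -i PVE-1 --inject-monmap tmpfile

# start the monitor again
systemctl start ceph-mon@PVE-1
```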

ceph mon getmap -o tmpfile still returns with timeout. I'm assuming the .db file is screwed. Can that be recreated? If so, how..?
If the ceph services are running and 'ceph status' returns the state of the cluster, then things work.

Besides this, I suggest learning more about Ceph's workings to make troubleshooting easier. Please see the link below; it hopefully contains useful links.
https://forum.proxmox.com/threads/ceph-raw-usage-grows-by-itself.38395/#post-189842
 

joshbgosh10592

I ran "service ceph stop mon || stop ceph-mon-all" multiple times to make sure the service didn't start. It would make sense for the database to be locked if the service were running, but in this case I told it to stop.
"ceph status" hangs the console on all three nodes and eventually returns:
Code:
root@PVE-1:~# ceph status
2019-05-16 15:23:10.040346 7fc67ccf0700  0 monclient(hunting): authenticate timed out after 300
2019-05-16 15:23:10.040391 7fc67ccf0700  0 librados: client.admin authentication error (110) Connection timed out
[errno 110] error connecting to the cluster
I've been reading up on Ceph, but really haven't seen how to bring it out of such a broken state. I'll definitely be reading more while I'm re-configuring, so I don't have to go through any of this again.
Thank you so far though, I really do appreciate this.
 

Alwin

"service ceph stop mon || stop ceph-mon-all"
These are just helpers for the actual systemd commands. Please check with systemctl whether the service is really down, and ideally also whether any process still exists that holds the DB.
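That check might look like the following sketch. The unit name `ceph-mon@PVE-1` and the store path are assumptions based on this thread; `fuser` comes from the psmisc package:

```shell
# Is the monitor unit actually down?
systemctl status ceph-mon@PVE-1

# Any stray ceph-mon process left behind?
ps aux | grep '[c]eph-mon'

# Does any process still hold the RocksDB lock file?
fuser -v /var/lib/ceph/mon/ceph-PVE-1/store.db/LOCK
```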

Depending on what state it is in, it might be an old leftover lock.
 

joshbgosh10592

Ah, that makes sense then.
"systemctl status ceph" returns
Code:
root@PVE-1:~# systemctl status ceph
● ceph.service - PVE activate Ceph OSD disks
   Loaded: loaded (/etc/systemd/system/ceph.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Mon 2019-05-13 22:17:53 EDT; 3 days ago
  Process: 2809585 ExecStart=/usr/sbin/ceph-disk --log-stdout activate-all (code=exited, status=0/SUCCESS)
 Main PID: 2809585 (code=exited, status=0/SUCCESS)
      CPU: 558ms

May 13 22:17:52 PVE-1 systemd[1]: Starting PVE activate Ceph OSD disks...
May 13 22:17:53 PVE-1 systemd[1]: Started PVE activate Ceph OSD disks.
Since it looks like the old lock was just left behind, am I able to simply delete "/var/lib/ceph/mon/ceph-PVE-1/store.db/LOCK" to unlock the DB, or would that make things even worse somehow?
 

Alwin

● ceph.service - PVE activate Ceph OSD disks
The ceph.service is used during boot, or when you want to start/stop all Ceph services at once.

Since it looks like the old lock was just left behind, am I able to simply delete "/var/lib/ceph/mon/ceph-PVE-1/store.db/LOCK" to unlock the DB, or would that make things even worse somehow?
If no service is holding the lock, then you should be able to remove it.
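Sketched with a safety check before the delete (the path is from this thread; `lsof` is assumed to be installed):

```shell
# lsof prints nothing and exits non-zero if no process holds the file,
# so the lock is only removed when it really is an orphan
lsof /var/lib/ceph/mon/ceph-PVE-1/store.db/LOCK \
    || rm /var/lib/ceph/mon/ceph-PVE-1/store.db/LOCK
```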
 

joshbgosh10592

I was able to delete the LOCK file, and now trying to inject:
Code:
root@PVE-1:/var/lib/ceph/mon/ceph-PVE-1/store.db# ceph-mon -i PVE-1 --inject-monmap tmpfile
2019-05-21 22:38:49.287841 7f9d2c9de100 -1 unable to read monmap from tmpfile: can't open tmpfile: (2) No such file or directory
And:
Code:
root@PVE-1:/var/lib/ceph/mon/ceph-PVE-1/store.db# monmaptool --print tmpfile
monmaptool: monmap file tmpfile
monmaptool: couldn't open tmpfile: (2) No such file or directory
What I don't understand is that the exact command "monmaptool --print tmpfile" was working before I deleted the LOCK. Did deleting the lock free up the DB, and now Ceph doesn't have a tmpfile?
Also, the service now won't start, and I can't seem to find the log file to determine why...
Code:
root@PVE-1:/var/lib/ceph/mon/ceph-PVE-1/store.db# systemctl start ceph.service
root@PVE-1:/var/lib/ceph/mon/ceph-PVE-1/store.db# systemctl status ceph.service
● ceph.service - PVE activate Ceph OSD disks
   Loaded: loaded (/etc/systemd/system/ceph.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Tue 2019-05-21 22:49:29 EDT; 1s ago
  Process: 682662 ExecStart=/usr/sbin/ceph-disk --log-stdout activate-all (code=exited, status=0/SUCCESS)
 Main PID: 682662 (code=exited, status=0/SUCCESS)
      CPU: 553ms

May 21 22:49:28 PVE-1 systemd[1]: Starting PVE activate Ceph OSD disks...
May 21 22:49:29 PVE-1 systemd[1]: Started PVE activate Ceph OSD disks.
root@PVE-1:/var/lib/ceph/mon/ceph-PVE-1/store.db#

How can I just completely blow up my Ceph config and start it fresh? I feel like that's the only proper way to get my Ceph cluster working...
 

Alwin

root@PVE-1:/var/lib/ceph/mon/ceph-PVE-1/store.db# monmaptool --print tmpfile
monmaptool: monmap file tmpfile
monmaptool: couldn't open tmpfile: (2) No such file or directory
Well, the clue here is 'couldn't open tmpfile: (2) No such file or directory': you changed directories, and there is no 'tmpfile' in the current one. I suppose it is still in the home directory of the root user.
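Since the earlier commands were run from root's home directory, that is the likely location. Using an absolute path avoids the working-directory trap entirely (a sketch, assuming the file really is /root/tmpfile):

```shell
# confirm where the file actually lives
ls -l /root/tmpfile

# commands then work from any directory
monmaptool --print /root/tmpfile
ceph-mon -i PVE-1 --inject-monmap /root/tmpfile
```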
 

joshbgosh10592

Well, I'm embarrassed about that part. I even searched for it and didn't find it previously.
So, I'm able to inject now, but
Code:
root@PVE-1:~# ceph mon getmap -o tmpfile
still returns timeout. What else do I need to get this to a healthy state?
Does ceph.service need to be running for that command to actually work? The service still isn't starting..
 

Alwin

Did you start the ceph-mon on this node? Then use '-m' on the ceph command to specify the IP of the MON and talk to it directly, as otherwise it tries to reach the first known one.
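With the addresses from the monmap in this thread, that might look like the following sketch (assuming the monitor on PVE-1 listens on 172.16.0.1:6789):

```shell
# start the monitor daemon on this node
systemctl start ceph-mon@PVE-1

# query that monitor directly by address instead of
# letting the client hunt through ceph.conf entries
ceph -m 172.16.0.1:6789 status
```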
 

joshbgosh10592

By "start the ceph-mon on this node", I'm assuming you mean "service ceph start mon"?
I did that, and then attempted "systemctl start ceph.service"; same thing.
I assume the '-m' to specify the MON's IP needs to be used before I can start ceph.service, but I'm not sure which ceph command you're talking about. I ran "ceph -m PVE-1" and it returned
Code:
root@PVE-1:~# ceph -m PVE-1
2019-05-29 22:15:09.830257 7f1e9af5f700  0 monclient(hunting): authenticate timed out after 300
2019-05-29 22:15:09.830314 7f1e9af5f700  0 librados: client.admin authentication error (110) Connection timed out
[errno 110] error connecting to the cluster
I went through the steps of adding the monitors to the tmpfile from here: http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/ but using the steps for "the messy way", since my cluster is a mess anyway. The tmpfile looks correct,
Code:
root@PVE-1:~# monmaptool --print tmpfile
monmaptool: monmap file tmpfile
epoch 9
fsid 7770d4e7-3305-4ca8-b780-508825023a70
last_changed 2019-05-07 21:52:22.105900
created 2019-04-24 22:05:40.902816
0: 172.16.0.1:6789/0 mon.PVE-1
1: 172.16.0.2:6789/0 mon.PVE-2
2: 172.16.0.254:6789/0 mon.PVE-Witness
and I didn't receive any errors when I ran
Code:
root@PVE-1:~# ceph-mon -i PVE-1 --inject-monmap tmpfile
So I'm assuming it went well, but it still fails to start
Code:
root@PVE-1:~# systemctl start ceph.service
root@PVE-1:~# systemctl status ceph.service
● ceph.service - PVE activate Ceph OSD disks
   Loaded: loaded (/etc/systemd/system/ceph.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Wed 2019-05-29 21:25:36 EDT; 6s ago
  Process: 810528 ExecStart=/usr/sbin/ceph-disk --log-stdout activate-all (code=exited, status=0/SUCCESS)
 Main PID: 810528 (code=exited, status=0/SUCCESS)
      CPU: 555ms
I feel I'm missing something simple now. Like, should this sync to the other nodes once the service starts, or do I need to populate the tmpfile and inject on each node?
 

Alwin

I feel I'm missing something simple now. Like, should this sync to the other nodes once the service starts, or do I need to populate the tmpfile and inject on each node?
Either you have only one MON remaining and do the changes there, or they need to be done on all MONs before they are started.
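A sketch of the second option, run on each of the three nodes before any monitor is started (assumes the same cleaned monmap has been copied to /tmp/monmap on every node, and that the hostname matches the mon ID as in this thread):

```shell
# on each node, with every monitor stopped:
systemctl stop ceph-mon.target
ceph-mon -i "$(hostname)" --inject-monmap /tmp/monmap

# only once all three nodes have been injected:
systemctl start ceph-mon.target
```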
 

joshbgosh10592

I was able to inject the tmpfile on PVE-1 and PVE-Witness (I had to delete /var/lib/ceph/mon/ceph-PVE-Witness/store.db/LOCK), and it completed for those two.
However, for PVE-2, /var/lib/ceph/mon/ceph-PVE-2/store.db doesn't exist, and I didn't delete it...
I created the directories, and it still failed with this:

Code:
root@PVE-2:~# ceph-mon -i PVE-2 --inject-monmap tmpfile
2019-06-26 13:45:58.372444 7fc9707b0100 -1 Invalid argument: /var/lib/ceph/mon/ceph-PVE-2/store.db: does not exist (create_if_missing is false)

2019-06-26 13:45:58.372468 7fc9707b0100 -1 error opening mon data directory at '/var/lib/ceph/mon/ceph-PVE-2': (22) Invalid argument
 

Alwin

Code:
ceph-mon -i <ID> --mkfs
You can recreate the DB with the above command.
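For PVE-2, that could look like the sketch below. The `--monmap` and `--keyring` arguments seed the new store; the keyring path shown is an assumption based on the `/etc/pve/priv/$cluster.$name.keyring` setting in the ceph.conf above and may differ on your setup:

```shell
# recreate the monitor store for PVE-2, seeded with the
# cleaned monmap and the cluster's mon keyring
ceph-mon -i PVE-2 --mkfs \
    --monmap /root/tmpfile \
    --keyring /etc/pve/priv/ceph.mon.keyring

# then start the rebuilt monitor
systemctl start ceph-mon@PVE-2
```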
 
