[SOLVED] ceph config issue blocks terminal login

gowger

Member
Jan 30, 2019
I have a config issue with my Ceph cluster and can only access the node via remote KVM. It's impossible to log in because the console output from Ceph,

"libceph: bad option at conf=/etc/pve/ceph.conf"

kicks in before the password can be typed and somehow messes up the terminal input.

Is there any way to boot into an interactive mode, so that I can correct the Ceph config file and reboot successfully?
 

Attachments

  • lantronix.png (110.9 KB)
These messages are in the foreground. You can still log in, just keep typing the password. ;)

Also, please post your ceph.conf; there is a config error. You can find the config in the GUI as well.
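As an aside, the kernel's console verbosity can be lowered so that messages like the `libceph:` lines don't flood the login prompt. A sketch (needs root; the sysctl file name is just an example, not a Proxmox convention):

```shell
# Lower the console log level to 1 so only emergency messages reach the console.
dmesg -n 1

# Equivalent setting, made persistent across reboots via sysctl
# (the file name 99-quiet-console.conf is a hypothetical choice):
echo 'kernel.printk = 1 4 1 7' > /etc/sysctl.d/99-quiet-console.conf
sysctl -p /etc/sysctl.d/99-quiet-console.conf
```

The messages still land in the kernel ring buffer and journal; they just stop being written over the login prompt.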
 
Well, it was a bad combination of circumstances. As the router is a virtual appliance that depended on a functioning CephFS (I have since removed this dependency), the only access I had was through a KVM device provided by the data centre. Their KVM device uses a Java applet, which appears to be confused by the foreground logs, so you had to type the password very quickly to get past them.

As I use long root passwords, this was impossible. The Java applet also did not accept automated keystrokes sent to it via xdotool. I temporarily set a short password by using the GRUB bootloader to pass a single-user-mode kernel parameter, and eventually got in that way. I didn't manage to edit the config files directly in single-user mode.
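For anyone hitting the same wall, the single-user recovery path looks roughly like this (menu entry names and exact prompts vary with your GRUB setup; this is a sketch, not a guaranteed procedure). Note that config files under /etc/pve live on the pmxcfs FUSE filesystem, which is not mounted in single-user mode, which would explain why they can't be edited there:

```shell
# At the GRUB menu, press 'e' on the Proxmox/Debian entry, then append
# ONE of the following to the line that starts with "linux":
#
#   single            # rescue target; prompts for the root password
#   init=/bin/bash    # drop straight into a root shell, no password asked
#
# Boot the edited entry with Ctrl-x (or F10). From the emergency shell:
mount -o remount,rw /        # root is typically mounted read-only here
passwd root                  # set a temporary short password
sync
# With init=/bin/bash, hand control back to the real init instead of rebooting hard:
exec /sbin/init
```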

The root cause didn't seem to be a bad ceph.conf, but rather a failure of CephFS introduced during a package update. I haven't upgraded Ceph from Luminous to Nautilus yet.


Code:
[global]
         auth client required = cephx
         auth cluster required = cephx
         auth service required = cephx
         cluster network = 192.168.1.0/24
         fsid = 77a03fa2-b35a-4304-98fb-c0f91d5e1fc5
         mon allow pool delete = true
         osd journal size = 5120
         osd pool default min size = 2
         osd pool default size = 3
         public network = 192.168.1.0/24
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
         keyring = /var/lib/ceph/mds/ceph-$id/keyring

[osd]
         keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mds.pve]
         host = pve
         mds standby for name = pve

[mds.pve2]
         host = pve2
         mds standby for name = pve

[mon.pve]
         host = pve
         mon addr = 192.168.1.141:6789

[mon.pve3]
         host = pve3
         mon addr = 192.168.1.103:6789

[mon.pve4]
         host = pve4
         mon addr = 192.168.1.125:6789

[mon.pve2]
         host = pve2
         mon addr = 192.168.1.149:6789
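Since the original error was "libceph: bad option", a quick offline sanity check of a copy of the config can be sketched like this. The awk filter and the file name conf_copy are assumptions for illustration; this only flags lines that are neither section headers, comments, nor key = value pairs, it is not Ceph's actual parser:

```shell
# Make a small sample copy with one deliberately broken line.
cat > conf_copy <<'EOF'
[global]
         auth client required = cephx
bad line without equals
EOF

# Flag any line that isn't blank, a comment, a [section] header,
# or a "key = value" pair -- the kind of entry libceph rejects.
suspects=$(awk '
  /^[[:space:]]*$/ || /^[[:space:]]*[#;]/ { next }   # skip blanks and comments
  /^\[[^]]+\][[:space:]]*$/ { next }                 # section header
  /^[[:space:]]*[^=]+=[[:space:]]*[^=]*$/ { next }   # key = value
  { printf "line %d: suspect entry: %s\n", NR, $0 }
' conf_copy)
echo "$suspects"
```

Running the real file through a check like this (or simply diffing it against a known-good node's copy from the GUI) narrows down which line libceph is choking on.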