Cannot install Ceph on freshly-reinstalled nodes

NdK73

Hello.

I've been banging my head on this for about a week.
I had a 9-node cluster (virtN, N=1..9, 192.168.1.3N/24). I now have to replace all the nodes with "new" hardware, so I started with nodes 4..6.
As described in the docs:
- shut down virtX and start the install on the new HW, so there's no risk of the old virtX coming alive again
- from virt1 (currently not being reinstalled):
- pvecm delnode virtX
- edit /etc/ssh/ssh_known_hosts to remove the two lines pertaining to virtX
- pvecm updatecerts
After the reinstall of the new virtX (from a freshly downloaded 7.2 ISO) is complete, from the new node's web interface I:
- disable pve-enterprise and enable pve-no-subscription repo
- upgrade all packages
- join the cluster (by pasting the join data obtained from virt1 and entering the correct root password; tried both with default values)
- reboot
Repeat for the other nodes (a command-level sketch of the removal part follows).
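In shell terms, the removal from virt1 is roughly the following (virt4 as the example; the sed line is just one way to drop the two known_hosts entries I removed by hand):
Code:
# on virt1, only after the old virt4 is powered off for good
pvecm delnode virt4
# remove the old host's entries (hostname and IP) from ssh_known_hosts
sed -i '/virt4/d; /192\.168\.1\.34 /d' /etc/ssh/ssh_known_hosts
pvecm updatecerts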
Now, from every virt I can see all the nodes, and ssh works without issues between new and old ones.
So I assume everything is OK. If more tests are needed I can do them.

Now the real problem starts: I follow the guide to install Ceph. I first tried Quincy.
On virt4: select 'virt4', select 'Ceph', click 'Install Ceph'. It asks for the release (tried both Quincy and Pacific, no difference) and for the network to use (I select the node address, 192.168.1.34/24). The install seems to proceed and I (often) get mon.virt4 and mgr.virt4 processes listed. Sometimes it died with a timeout, but the mon and mgr processes eventually appeared.
"ceph status" on virt4:
Code:
root@virt4:~# ceph status
  cluster:
    id:     40833458-1c2a-45a0-9216-76710d9f3f7e
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum virt4 (age 9m)
    mgr: virt4(active, since 8m)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
Again, assuming it's OK, I proceed to virt5.

I select "virt5". "select "ceph", click "install ceph", it detects the existing Ceph instance: "Newest ceph version in cluster is Pacific (16.2.9)", I confirm that 16.2 is to be installed on the node, it installs the packages then the timeouts start. No way to configure a monitor on virt5. Every ceph-related operation ends in an error like "Could not connect to ceph cluster despite configured monitors (500)".
Trying "ceph status" from the CLI results in messages like "2022-09-19T11:23:22.837+0200 7f76e125d700 0 monclient(hunting): authenticate timed out after 300".
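(In hindsight, a quick reachability check of virt4's monitor ports from virt5 would have saved me a lot of time; 3300 and 6789 are the Ceph defaults. nc may need installing, any similar tool will do.)
Code:
# from virt5: are virt4's monitor ports reachable? (3300 = msgr2, 6789 = legacy msgr1)
nc -zv 192.168.1.34 3300
nc -zv 192.168.1.34 6789
# plain ping between nodes is another quick sanity check
ping -c3 192.168.1.34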

Any hint before I ditch everything?

Tks.

Diego
 
Hi,

did you do any kind of cleanup for Ceph? Removing OSDs, MONs ... from the cluster? It sounds a bit like there are old keys in the config somehow.
 
Well, I made several attempts at reinstalling, and to avoid trouble I tried to delete everything Ceph-related between tests. That included running "pveceph purge" on virt4 (after turning off virt5 and virt6) and manually cleaning /etc/ceph/* and /etc/pve/priv/ceph* from virt1 after turning off virt4.
But the issues started the first time I tried installing Ceph, so there should have been no previous config around...

Edit: forgot to say that when trying to use "pveceph mon create" on virt5 I get "Could not connect to ceph cluster despite configured monitors".
 
pveceph purge only destroys data on that machine. There is a shared config on the still-existing nodes. When I remove a node that has Ceph installed, I can still see it in my Ceph OSD view in the web GUI. You would need to remove the monitors [1] and OSDs [2] from the old node first (if you didn't do that before).

And after removing them, check ceph auth ls: if there are still keys left from the old node, you can remove those via ceph auth del [keyname]. A rough command-level sketch follows below.



[1] https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/#removing-monitors
[2] https://docs.ceph.com/en/latest/rados/operations/add-or-rm-osds/#removing-the-osd
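Roughly, with hypothetical names (old node virtX, OSD id 7 as examples), that cleanup would be:
Code:
# drop the old node's monitor from the monmap (if it had one)
ceph mon remove virtX
# take an OSD out and remove it completely
ceph osd out 7
ceph osd purge 7 --yes-i-really-mean-it
# then delete any auth entries left over from the old node
ceph auth del osd.7
ceph auth del mgr.virtX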
 
Well, re "cveceph purge" deleting data only from that machine I found many old threads that said otherwise (someone lost all VM data), but since I haven't created OSDs yet, that's not a problem.
Currently I didn't start cleanup and I have:
Code:
root@virt4:~# ceph auth ls| sed 's/key: .*/key: REDACTED/'   
installed auth entries:

client.admin
        key: REDACTED
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: REDACTED
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
        key: REDACTED
        caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
        key: REDACTED
        caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
        key: REDACTED
        caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rbd-mirror
        key: REDACTED
        caps: [mon] allow profile bootstrap-rbd-mirror
client.bootstrap-rgw
        key: REDACTED
        caps: [mon] allow profile bootstrap-rgw
mgr.virt4
        key: REDACTED
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *

So it seems the only keys around are the ones just created...
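A thought: to rule out a stale admin key on the PVE side, I suppose one could also compare the key PVE distributes with the one the Ceph cluster actually knows (just an idea):
Code:
# key Proxmox copies around for clients
grep key /etc/pve/priv/ceph.client.admin.keyring
# key the Ceph cluster itself holds for client.admin
ceph auth get client.admin | grep key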
 
How many monitors did you have configured before reinstalling some of your nodes? Did something change, for example the network used for Ceph?
 
I've been getting errors while installing the second monitor (virt5) since the very first try. The only one that starts is the one on virt4.
The first time, I had the network configured as a bridge over a balance-alb bond including two eno interfaces. After the reinstall I kept the default config of a bridge over a single eno interface, to rule out issues due to networking.
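For completeness, the original setup was roughly the following (interface names and gateway are examples from memory, not exact):
Code:
# /etc/network/interfaces (pre-reinstall, roughly)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode balance-alb
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.34/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0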
 
Trying to avoid a reinstall, I issued "pveceph purge" on virt5 and then on virt4.
On both nodes /etc/ceph/ still exists and contains a dangling symlink:
Code:
root@virt5:~# ls -l /etc/ceph/
total 4
lrwxrwxrwx 1 root root 18 Sep 19 11:14 ceph.conf -> /etc/pve/ceph.conf
-rw-r--r-- 1 root root 92 Mar  8  2022 rbdmap
and
Code:
root@virt4:~# ls -l /etc/ceph/
total 8
-rw------- 1 ceph ceph 151 Sep 19 11:08 ceph.client.admin.keyring
lrwxrwxrwx 1 root root  18 Sep 19 11:08 ceph.conf -> /etc/pve/ceph.conf
-rw-r--r-- 1 root root  92 Mar  8  2022 rbdmap
Now going the "rm -rf /etc/ceph" route... let's see... Reinstalling from CLI after cleanup:
Code:
root@virt4:~# pveceph install
update available package list
start installation
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
gdisk is already the newest version (1.0.6-1.1).
nvme-cli is already the newest version (1.12-5).
ceph-common is already the newest version (16.2.9-pve1).
ceph-fuse is already the newest version (16.2.9-pve1).
ceph-mds is already the newest version (16.2.9-pve1).
ceph is already the newest version (16.2.9-pve1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

installed ceph pacific successfully!

reloading API to load new Ceph RADOS library...
root@virt4:~# pveceph init --network 192.168.1.0/24
creating /etc/pve/priv/ceph.client.admin.keyring
cp: cannot create regular file '/etc/ceph/ceph.client.admin.keyring': No such file or directory
command 'cp /etc/pve/priv/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring' failed: exit code 1
root@virt4:~# pveceph init --network 192.168.1.0/24
root@virt4:~#
Uhm... already something strange: I had to repeat the init. I did nothing between the two invocations, just let some time (about 10s) pass. Could it be something replication-related?
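Or, more simply: I had just removed /etc/ceph entirely by hand, and the failing step is a plain cp into that directory, so recreating the directory before the init should avoid the first error (just a guess):
Code:
mkdir -p /etc/ceph
pveceph init --network 192.168.1.0/24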

Let's continue anyway:
Code:
root@virt4:~# ls -l /etc/ceph/ceph.conf 
lrwxrwxrwx 1 root root 18 Sep 19 15:51 /etc/ceph/ceph.conf -> /etc/pve/ceph.conf
root@virt4:~# ls -l /etc/pve/ceph.conf 
-rw-r----- 1 root www-data 439 Sep 19 15:51 /etc/pve/ceph.conf
root@virt4:~# ls -l /etc/pve/priv/ceph.client.admin.keyring 
-rw------- 1 root www-data 151 Sep 19 15:51 /etc/pve/priv/ceph.client.admin.keyring
root@virt4:~# pveceph mon create
unable to get monitor info from DNS SRV with service name: ceph-mon
creating new monitor keyring
creating /etc/pve/priv/ceph.mon.keyring
importing contents of /etc/pve/priv/ceph.client.admin.keyring into /etc/pve/priv/ceph.mon.keyring
monmaptool: monmap file /tmp/monmap
monmaptool: generated fsid 78fa4b7e-f409-45e2-bb88-aa8ac91140b6
epoch 0
fsid 78fa4b7e-f409-45e2-bb88-aa8ac91140b6
last_changed 2022-09-19T15:55:38.563141+0200
created 2022-09-19T15:55:38.563141+0200
min_mon_release 0 (unknown)
election_strategy: 1
0: [v2:192.168.1.34:3300/0,v1:192.168.1.34:6789/0] mon.virt4
monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
created the first monitor, assume it's safe to disable insecure global ID reclaim for new setup
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@virt4.service -> /lib/systemd/system/ceph-mon@.service.
creating manager directory '/var/lib/ceph/mgr/ceph-virt4'
creating keys for 'mgr.virt4'
setting owner for directory
enabling service 'ceph-mgr@virt4.service'
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@virt4.service -> /lib/systemd/system/ceph-mgr@.service.
starting service 'ceph-mgr@virt4.service'
root@virt4:~#
This seems OK: it created the first monitor and a manager, as expected.
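Quick sanity checks at this point (nothing fancy, just confirming the daemons run and listen):
Code:
root@virt4:~# systemctl is-active ceph-mon@virt4 ceph-mgr@virt4
root@virt4:~# ss -tlnp | grep ceph-mon    # should show 3300 and 6789 listening
root@virt4:~# ceph -s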
Now going on virt5:
Code:
root@virt5:~# pveceph install
update available package list
start installation
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
gdisk is already the newest version (1.0.6-1.1).
nvme-cli is already the newest version (1.12-5).
ceph-common is already the newest version (16.2.9-pve1).
ceph-fuse is already the newest version (16.2.9-pve1).
ceph-mds is already the newest version (16.2.9-pve1).
ceph is already the newest version (16.2.9-pve1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

installed ceph pacific successfully!

reloading API to load new Ceph RADOS library...
root@virt5:~# pveceph mon create
Could not connect to ceph cluster despite configured monitors
BANG! :( What's wrong?
 
I might have found the issue: the FIREWALL!
It seems Proxmox does not add rules to allow Ceph connections between cluster nodes (nor pings, it seems... that's what rang a bell: "why can't I ping virtX from virtY even though ssh works?"). Just disabling the firewall "automagically" lets the Ceph monitors come alive. Going to add rules manually. IMHO it should at least be documented; even better if web-GUI-managed nodes were automatically added to the rules.
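Roughly what I'm adding now at the datacenter level (/etc/pve/firewall/cluster.fw); the ports are the Ceph defaults, the source network is mine, and if I remember correctly recent PVE versions also ship a ready-made "Ceph" firewall macro that covers them:
Code:
[RULES]
# Ceph monitors (msgr2 + legacy msgr1) and the OSD/MGR port range
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 3300,6789,6800:7300
# allow ping between the nodes too
IN ACCEPT -source 192.168.1.0/24 -p icmp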
 