Ceph cluster network configuration

Subbeh

Aug 4, 2022
I'm trying to configure Ceph on a 3-node cluster, with 3 monitor nodes and 2 OSD nodes:

Code:
pve-nuc     - 10.0.10.11 (public network)
pve-opti-01 - 10.0.10.12 - 172.20.20.12 (cluster network)
pve-opti-02 - 10.0.10.13 - 172.20.20.13


I installed Ceph through the GUI and created the cluster network as described here. Both nodes can communicate with each other on the cluster network.
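For reference, I think the CLI equivalent of what the GUI does is roughly this (just a sketch based on my reading of the docs, with the subnets filled in as whatever you choose; pveceph init accepts --network and --cluster-network options):

Code:
pveceph init --network 10.0.10.0/24 --cluster-network 172.20.20.0/24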

Now when I try to create the OSDs, it works fine on pve-opti-01 but I'm getting the following error on pve-opti-02:

Code:
No address from ceph cluster network (172.20.20.12/32) found on node 'pve-opti-02'. Check your network config. (500)

How am I supposed to configure the cluster network?

My ceph.conf file currently looks like this:

Code:
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 172.20.20.12/32
fsid = xxxxxx
mon_allow_pool_delete = true
mon_host = 10.0.10.12 10.0.10.11 10.0.10.13
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 10.0.10.12/24

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.pve-nuc]
public_addr = 10.0.10.11

[mon.pve-opti-01]
public_addr = 10.0.10.12

[mon.pve-opti-02]
public_addr = 10.0.10.13
 
Why does the pve-nuc not have an IP address in the cluster network?
Can you post your network config? (/etc/network/interfaces)
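For comparison, on a node that is supposed to carry the Ceph cluster network, the relevant part of /etc/network/interfaces usually looks roughly like this (the interface name eno2 is only a placeholder, yours may differ):

Code:
auto eno2
iface eno2 inet static
        address 172.20.20.13/24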
 
I solved the issue by changing the subnet to /24 instead of /32 - with a /32, the cluster network only contained the single address 172.20.20.12, so no matching address could be found on pve-opti-02.
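The relevant ceph.conf line now looks something like this (mirroring the /24 style already used for public_network above):

Code:
cluster_network = 172.20.20.12/24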

As for why pve-nuc is not part of the cluster network: only the pve-opti-* machines have a second NIC and additional storage for Ceph, so those are the only ones I added to the cluster network.

I'm new to Ceph, and this is mostly just a learning experience, but if there's anything I could be doing better I'd love to hear it.
 
Well, if you want a somewhat realistic setup, having at least 3 nodes with OSDs and a size/min_size of 3/2 is much closer to what you would run in production. Overall with Ceph, the more resources you give it (nodes, OSDs), the better it will perform and the easier it is to recover from single failures in the cluster.
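Your posted ceph.conf already has osd_pool_default_size = 3 and osd_pool_default_min_size = 2, so new pools get 3/2 by default. For an existing pool you can check or adjust the values with something like this (replace <pool> with your pool name):

Code:
ceph osd pool get <pool> size
ceph osd pool get <pool> min_size
ceph osd pool set <pool> size 3
ceph osd pool set <pool> min_size 2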
 
I'd love to do that, but unfortunately I only have 3 tiny PCs, of which only 2 have room for an OSD disk and an additional network adapter.
Is there any downside in terms of functionality with this setup, besides the redundancy and reliability concerns?
 
