CEPH installation requires the mon-address

Kato98

Member
Feb 11, 2021
I was trying to install Ceph on a new 3-node Proxmox cluster. For this, I created two extra network bridges for my two 10G ports. The installation of Ceph itself was successful. However, when I wanted to configure the public and cluster networks, I encountered the following error:


Multiple IPs for ceph public network '10.100.233.12/24' detected on pvehps3: 10.100.233.11 10.100.233.12 10.100.233.13 use 'mon-address' to specify one of them. (500)


I read in this forum that you need to configure the Mon Address in your Ceph configuration.

Does anyone know how I can do this?
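From what I have found so far, I think the address has to be passed when the monitor is created, something like this (not sure if the syntax is right, and the IP is just the one I would pick for Ceph):

Code:
# my guess based on what I read here: pick the IP the monitor should use
pveceph mon create --mon-address 10.100.233.12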
 
Yes, could you please post the ceph.conf `cat /etc/pve/ceph.conf`?
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.100.233.12/24
fsid = b2219c5f-c26b-4183-b142-c01e6aaa6576
mon_allow_pool_delete = true
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 10.100.233.12/24

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
 
Thank you for the ceph.conf!

You have to edit the /etc/pve/ceph.conf file using nano/vim or any editor you prefer (first make a backup to avoid any typos): `cp /etc/pve/ceph.conf /root/ceph.conf-backup`

Code:
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = <X.X.X.X>/24 # <== put the subnet you want to use for the cluster network here
         fsid = b2219c5f-c26b-4183-b142-c01e6aaa6576
         mon_allow_pool_delete = true
         osd_pool_default_min_size = 2
         osd_pool_default_size = 3
         public_network = 10.100.233.12/24 # <== put the subnet you want to use for the public network here.

If that doesn't help, please post the output of `grep _network /etc/ceph/ceph.conf`, the output of `ip a`, and your network config (`cat /etc/network/interfaces`).
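Roughly, the steps would look like this (just a sketch; put in whatever subnets you want Ceph to use):

Code:
# make a backup of the current config first
cp /etc/pve/ceph.conf /root/ceph.conf-backup
# edit public_network / cluster_network in the [global] section
nano /etc/pve/ceph.conf
# check what is configured now
grep _network /etc/pve/ceph.conf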
 
Why do you have 3 addresses in the same network on the same host?
What does your /etc/network/interfaces look like?
I just created two Linux bridges for Ceph and Backup. Is this incorrect?


Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

iface eno49 inet manual

iface eno50 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.100.233.11/24
        gateway 10.100.233.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.100.233.12/24
        bridge-ports eno49
        bridge-stp off
        bridge-fd 0
#For CEPH

auto vmbr2
iface vmbr2 inet static
        address 10.100.233.13/24
        bridge-ports eno50
        bridge-stp off
        bridge-fd 0
#For Backup



I honestly don't know what I need to change. I would like to have both my public and cluster networks use the IP address 10.100.233.12. Is it already configured correctly?
 
You have three separate interfaces on the same subnet. Don't do that. Also, you really don't want Ceph private traffic over a bridge.

Here's what I would suggest (without knowing what your use case is):

Code:
auto vmbr0
iface vmbr0 inet static
        address 10.100.233.11/24
        gateway 10.100.233.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto bond0
iface bond0 inet static
        address 10.100.234.11/24
        bond-slaves eno49 eno50
        bond-mode balance-alb
        bond-miimon 100

and your ceph conf

public_network = 10.100.234.11/24
cluster_network = 10.100.234.11/24
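To apply and verify the new interface setup, something along these lines should work (assuming ifupdown2, which current Proxmox VE installations ship with):

Code:
# apply the changed /etc/network/interfaces without a reboot
ifreload -a
# check that bond0 is up with the expected address
ip -br addr show bond0
# check the bond members and mode
cat /proc/net/bonding/bond0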
 

Thanks for your reply. Unfortunately, I only have the 10.10.233.0/24 subnet available :(

And I want to use only eno49.

So I think


Code:
auto bond0
iface bond0 inet static
        address 10.10.233.11/24
        bond-slaves eno49
        bond-mode balance-alb
        bond-miimon 100

should be okay, right?

Or can I also do this via the GUI?
 
Thanks for your reply. Unfortunately, I only have the 10.10.233.0/24 subnet available
...

If this is a serious comment, you are going to have a hard time with Ceph. I'd wholeheartedly recommend you read up on how TCP/IP networks work, specifically https://www.rfc-editor.org/rfc/rfc1918.html

should be okay, right?
A bond is meant for when you use multiple interfaces as your uplink. If you only intend to use a single interface, you don't need to create a bond. See https://wiki.debian.org/NetworkConfiguration. More broadly, if this is sufficiently important for you to pursue beyond "for fun", a book like https://www.oreilly.com/library/view/linux-for-networking/9781800202399/ or https://www.amazon.com/Understanding-Linux-Network-Internals-Networking would go a long way to help. (edit: wrong book, that one is Linux for network engineers ;)
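If you really only want eno49, a plain static stanza is enough; roughly something like this (a sketch, use whatever address/subnet you end up choosing):

Code:
auto eno49
iface eno49 inet static
        address 10.100.234.11/24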
 
Thank you for the information. After some research, I found out that I can simply use a local network IP; it doesn't need to be configured on the router. So I changed the IP in my Ceph configs to the new local IP.

But now, I think because I couldn't finish the setup earlier with the old network settings, it shows: "rados_connect failed - No such file or directory (500)".

My CEPH Config is as follows:
Code:
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 192.168.1.134/24
         fsid = b2219c5f-c26b-4183-b142-c01e6aaa6576
         mon_allow_pool_delete = true
         osd_pool_default_min_size = 2
         osd_pool_default_size = 3
         public_network = 192.168.1.134/24


[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring



Running `systemctl restart ceph*` also didn't help :/
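Is there anything else I should check? I was thinking of something like this, but I'm not sure these are the right commands:

Code:
# check whether a monitor is actually running on this node
systemctl status ceph-mon@$(hostname)
# check the overall cluster status
ceph -s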
 
The easiest way to get there, assuming the configuration was never in production: delete all Ceph daemons and issue

pveceph purge

on all nodes.

Then verify connectivity on all relevant networks: VM, Corosync, and Ceph. Once that's verified, rerun the Ceph wizard and pick the relevant interfaces.
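In rough terms, per node, something like this (a sketch using the pveceph CLI; adjust the network to the one you actually want to use):

Code:
# remove the Ceph configuration from this node
pveceph purge
# after connectivity is verified, reinitialize Ceph on one node
pveceph init --network 192.168.1.0/24
# and create the monitor, optionally pinning its address
pveceph mon create --mon-address 192.168.1.134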
 
