ceph network config

mike2

New Member
Feb 18, 2025
Hi,

I have installed Proxmox for testing and configured a ceph cluster on four nodes.

Each node has two network cards:
eno1, bridged into vmbr0 - 10.40.1.0/24
enx5c857e3ccc6f, no bridge, with the IP configured directly on the interface - 10.41.41.0/24

I use vmbr0 to manage Proxmox and as a bridge to the internet for VMs and LXC.

enx5c857e3ccc6f is used only for the Ceph cluster.

During the Proxmox cluster configuration I selected the 10.41.41.0/24 network as link 0, so this is the Corosync cluster network.

Then during the ceph configuration I selected the networks:
- public - 10.40.1.0/24 i.e. vmbr0
- cluster - 10.41.41.0/24 i.e. enx5c857e3ccc6f

When the Ceph monitors were created, the IP for each node was taken from the 10.40.1.0/24 network, i.e. vmbr0.

As a test, I disconnected the cable from vmbr0 on node1, and after a few seconds pings to the VMs on node2 stopped responding.

I am just getting to know Ceph and am still testing it, but it seems to me that I did something wrong.

Shouldn't the monitor IP be from the 10.41.41.0/24 network?
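To check which network the monitors are actually bound to, `ceph mon dump` lists each monitor's addresses. The sketch below parses made-up sample output matching the addresses in this post (an assumption for illustration); on a live node you would pipe the real `ceph mon dump` output into the same `sed` instead:

```shell
# Hypothetical sample of `ceph mon dump` output; the addresses are made up
# to match this cluster. On a real node, run `ceph mon dump` directly.
sample_mon_dump='0: [v2:10.40.1.51:3300/0,v1:10.40.1.51:6789/0] mon.pve01
1: [v2:10.40.1.52:3300/0,v1:10.40.1.52:6789/0] mon.pve02
2: [v2:10.40.1.53:3300/0,v1:10.40.1.53:6789/0] mon.pve03
3: [v2:10.40.1.54:3300/0,v1:10.40.1.54:6789/0] mon.pve04'

# Extract each monitor's name and its v2 address, so you can see at a
# glance which network every monitor is bound to.
printf '%s\n' "$sample_mon_dump" | \
    sed -n 's/.*v2:\([0-9.]*\):[0-9]*\/0.*mon\.\(.*\)/\2 \1/p'
```

If the addresses printed here are all 10.40.1.x, the monitors live on vmbr0 rather than on the dedicated Ceph NIC.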

Below is my ceph.conf file from node1
Code:
cat /etc/ceph/ceph.conf
[global]
        auth_client_required = cephx
        auth_cluster_required = cephx
        auth_service_required = cephx
        cluster_network = 10.41.41.51/24
        fsid = 30e8e94e-ffbb-437c-93da-d0af522160e0
        mon_allow_pool_delete = true
        mon_host = 10.40.1.51 10.40.1.52 10.40.1.53 10.40.1.54
        ms_bind_ipv4 = true
        ms_bind_ipv6 = false
        osd_pool_default_min_size = 2
        osd_pool_default_size = 3
        public_network = 10.40.1.51/24

[client]
        keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
        keyring = /etc/pve/ceph/$cluster.$name.keyring

[mon.pve01]
        public_addr = 10.40.1.51

[mon.pve02]
        public_addr = 10.40.1.52

[mon.pve03]
        public_addr = 10.40.1.53

[mon.pve04]
        public_addr = 10.40.1.54
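For comparison, this is roughly what the relevant [global] lines would look like with both Ceph networks on the dedicated NIC. This is a sketch, not a drop-in file: only node1's 10.41.41.51 address is confirmed above, so the .52-.54 monitor addresses are assumed to mirror it.

```ini
[global]
        cluster_network = 10.41.41.0/24
        public_network = 10.41.41.0/24
        mon_host = 10.41.41.51 10.41.41.52 10.41.41.53 10.41.41.54
```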

and interfaces file from node1:
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
        post-up ethtool -K eno1 tso off gso off

auto enp2s0
iface enp2s0 inet static
        address 10.47.1.140/24

auto enx5c857e3ccc6f
iface enx5c857e3ccc6f inet static
        address 10.41.41.51/24

auto vmbr0
iface vmbr0 inet static
        address 10.40.1.51/24
        gateway 10.40.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
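Before moving Ceph onto the dedicated NIC, it is worth confirming that the peers actually answer on it. A small sketch, assuming the other nodes use 10.41.41.52-54 (again, only node1's 10.41.41.51 is confirmed in the config above); it is guarded so it does nothing on a machine without that interface:

```shell
# Assumption: peer addresses mirror node1's 10.41.41.51; adjust as needed.
checked="no"
if command -v ip >/dev/null 2>&1 && ip link show enx5c857e3ccc6f >/dev/null 2>&1; then
    for peer in 10.41.41.52 10.41.41.53 10.41.41.54; do
        # -I forces the probe out of the Ceph NIC, not the default route
        if ping -c 1 -W 1 -I enx5c857e3ccc6f "$peer" >/dev/null 2>&1; then
            echo "$peer reachable"
        else
            echo "$peer UNREACHABLE"
        fi
    done
    checked="yes"
fi
echo "check finished (ran: $checked)"
```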

Thanks for any suggestions!
 
I watched two more videos about Ceph configuration and realized I had made an error during the Ceph setup.

Both the public network and the cluster network should have been set to 10.41.41.0/24 - enx5c857e3ccc6f.

Instead, I had set:
- public - 10.40.1.0/24 - vmbr0
- cluster - 10.41.41.0/24 - enx5c857e3ccc6f

Because of this, the monitors were created in the public network on vmbr0.

So after disconnecting vmbr0 from the network, communication with VMs on the other nodes broke.

After the change I noticed a slight decrease in IO delay, and now after disconnecting node1, the VMs on nodes 2, 3 and 4 have no communication problems.
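For anyone with the same mistake, the fix above can be sketched as commands. This is illustrative only: it assumes the Proxmox `pveceph mon destroy` / `pveceph mon create` helpers and reuses the monitor ID `pve01` from the config earlier; repeat it for one node at a time, waiting for quorum (`ceph -s`) in between. It is guarded so it is a harmless no-op on a machine without Proxmox:

```shell
# Sketch: move the monitors onto the dedicated 10.41.41.0/24 NIC.
# Do NOT run this blindly on a production cluster - one node at a time.
status="skipped: pveceph not found"
if command -v pveceph >/dev/null 2>&1; then
    # 1. In /etc/pve/ceph.conf, set both networks to the dedicated NIC:
    #      public_network  = 10.41.41.0/24
    #      cluster_network = 10.41.41.0/24
    # 2. Recreate this node's monitor so it binds inside the new
    #    public_network (monitor ID pve01 taken from the config above).
    pveceph mon destroy pve01
    pveceph mon create
    status="monitor recreated"
fi
echo "$status"
```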