[SOLVED] Ceph Public/Private and What Goes Over the Network

Donovan Hoare

Good day all.

I'm new to Ceph. I have set up a 6-node cluster.
It's a mixture of SSD and SAS drives.
With the SAS drives I use an SSD partition for the DB.

Now what I'm experiencing is that my VMs are slow.
Boot is slow, opening programs is slow, etc.

The 10.0.45.0/24 network is 10 Gbit.
The public network, 192.168.14.0/24, is on a bonded 1 Gbit network.
This also happens to be the IP range I connect to the Proxmox GUI on.

I assumed the public network would not carry much traffic
and that all Ceph traffic went over the cluster network.
However, I read a post saying the public network is traffic-heavy, and when the author moved it to a separate 40G network it got much faster.

So my questions are:
a) What flows over the public network?
b) Would my VM boot speed and program loading be faster if the public network were on a different physical network?
c) If b = yes, how would I change the public network safely?

Regards

Here is my config:
Code:
[global]
    auth_client_required = cephx
    auth_cluster_required = cephx
    auth_service_required = cephx
    cluster_network = 10.0.45.21/24
    fsid = 32e62262-67a6-4129-9464-773375643266
    mon_allow_pool_delete = true
    mon_host = 192.168.14.21 192.168.14.22 192.168.14.23
    ms_bind_ipv4 = true
    ms_bind_ipv6 = false
    osd_pool_default_min_size = 2
    osd_pool_default_size = 3
    public_network = 192.168.14.21/23
 
Have a look at the diagram at https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/.
The clients would be the VMs in your case.

The Ceph public network is mandatory and a lot of traffic goes over it. The optional Ceph cluster network can be used to move the inter-OSD replication traffic to a different network to spread the load.

You can move the Ceph public network over to a different subnet, for example the current Ceph cluster one, so that it uses 10G instead of 1G.
For this, change the public_network to the same value as the cluster_network line, then restart the OSD services.

E.g. per node with systemctl restart ceph-osd.target. The MONs and MGRs can be destroyed and recreated one by one.

Between each restart and destroy/recreate step, wait for the cluster to be healthy before you continue.
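
As a rough sketch of the MON/MGR part, assuming the standard pveceph tooling (the node name pve1 is just an example, not from your cluster):
Code:
# on the node whose monitor/manager you want to recreate (example node: pve1)
pveceph mon destroy pve1    # remove the monitor still bound to the old subnet
pveceph mon create          # recreate it; it will bind to the new public_network
pveceph mgr destroy pve1    # same for the manager
pveceph mgr create
ceph -s                     # check that the cluster is healthy before the next node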
 
So, to confirm how to do this:

On EACH node I edit /etc/pve/ceph.conf
and change
Code:
public_network = 192.168.14.21/23
to
public_network = 10.0.45.21/24

Save the file, then run
Code:
systemctl restart ceph-osd.target

Then do I restart the 1st node, then destroy and recreate its MON, then restart the second node and destroy and recreate its MON?

Or do I restart all the ceph-osd services first, then destroy and recreate the monitors and managers, waiting for the status to become healthy before I do the next monitor and manager?
 
OK, so I went ahead.
So, the exact order: you edit the file on any node.

Then restart the services on each node one by one, each time waiting for the cluster to become healthy (no yellow/degraded items).

Then you do the same with the monitors, and after the monitors, the managers.
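
Between each step I just waited until the cluster reported HEALTH_OK again, roughly like this (a quick sketch, not exactly what I ran):
Code:
# poll the cluster health until everything is OK again
until ceph health | grep -q HEALTH_OK; do
    sleep 10
done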
 
I assume it worked? You can also use ss -tulpn | grep ceph to see if there are still Ceph services listening on the old IP addresses.
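
For example, to check specifically for daemons still bound to the old 192.168.14.x subnet (adjust the pattern to your addresses):
Code:
ss -tulpn | grep ceph | grep '192\.168\.14\.'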
 
