Ceph Storage

Jun 25, 2022
Dear sir,

I just installed a Proxmox cluster on three Dell R750s, with Ceph storage on Dell enterprise NVMe drives. Everything is working fine, but I unwittingly made a small mistake in the Ceph configuration. My plan was to segregate the Ceph public and cluster networks onto two different NICs, but for some unknown reason my cluster and public networks are now attached to the same subnet on a single NIC. Could you kindly guide me on how to split the public and cluster networks onto two different subnets (different NICs) after installing the Ceph cluster, or do I need to reinstall everything from scratch? Please help.

Bhaskar p Banerjee
 
If you want to configure the optional cluster network on a different network, it shouldn't be too hard. First, configure the network and make sure that all nodes can ping each other. Ideally, you would also use a large MTU for the Ceph networks to get a better ratio of data to protocol overhead. If you do that, make sure that you ping with larger, non-fragmenting ping packets.
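
For example, with jumbo frames (MTU 9000) configured on both ends, which is just an assumption here, you can verify that large packets get through unfragmented like this (8972 bytes = 9000 minus 20 bytes IPv4 header and 8 bytes ICMP header; the address is a placeholder):
Code:
# -M do forbids fragmentation, -s sets the ICMP payload size
ping -M do -s 8972 -c 4 10.10.10.2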

Once that is set up and the nodes can ping each other, you can change the cluster network in the /etc/pve/ceph.conf file (the line should already be there with the same network as the public one). Then restart the OSDs and they should bind to the cluster network.
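
As a rough sketch, with placeholder subnets that you would replace with your own, the relevant lines in /etc/pve/ceph.conf would look like this:
Code:
[global]
     # placeholder subnets -- use your actual public and cluster networks
     public_network = 192.168.1.0/24
     cluster_network = 10.10.10.0/24

The OSD restart mentioned above would typically be systemctl restart ceph-osd.target on each node.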

You can observe the change with ss -tulpn | grep ceph if you run it before and after the OSD restarts.
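
For example (the file names are arbitrary), you could capture and compare the output like this:
Code:
ss -tulpn | grep ceph > /tmp/ceph-ports-before.txt
systemctl restart ceph-osd.target
ss -tulpn | grep ceph > /tmp/ceph-ports-after.txt
diff /tmp/ceph-ports-before.txt /tmp/ceph-ports-after.txt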
 
Thanks for the reply.
Do I need to change the cluster network on one server, or do I need to change the value on all three servers? When I changed the network on server 1, the copy was instantly shared with the other two servers, but with the cluster network value of the first server on all three nodes... this is confusing to me. Also, at present I have not added any OSDs to the Ceph cluster; I just installed it.
Kindly guide.
 
If you edit anything in the /etc/pve directory, the changes are synchronized across the cluster. Some config files outside of it are actually symlinks into the /etc/pve directory. /etc/ceph/ceph.conf, for example, is just such a symlink:
Code:
root@cephtest1:~# ls -l /etc/ceph/
total 1
lrwxrwxrwx 1 root root 18 Oct 14  2021 ceph.conf -> /etc/pve/ceph.conf

If you do not have any OSDs yet, changing the cluster_network setting in the Ceph config should be no problem at all, as the cluster network is only used by the OSDs for replication and heartbeat traffic.
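
If you want to double-check that the change was synchronized, a simple look at the setting on every node is enough (the option names are the real ones, the values will of course be your own subnets):
Code:
# run on each node; the output should be identical everywhere
grep -E 'cluster_network|public_network' /etc/pve/ceph.conf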
 
Thanks for the reply, that resolved the problem. I have another simple question in mind: if I use a Linux bond mesh (broadcast) network for the PVE cluster creation (not the Ceph cluster), will that create some kind of problem in the future? During the PVE cluster creation process there is a way to add an additional network interface as a failsafe. Is that allowed? Kindly guide if possible.
Bhaskar
 
The broadcast full mesh variant expects that a full node fails, not the network itself. The Full Mesh wiki page explains the different failure scenarios quite well with regard to Ceph.
I am not sure how Corosync will react if all nodes are still up, but one of the network connections fails. I will test that out at some point in the future.

You can add additional interfaces (links/rings, the names are interchangeable) to your Corosync config to give it more options to fall back on in case one of the connections is down or otherwise not usable. The admin guide has a chapter on how to edit the /etc/pve/corosync.conf file.
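
As a rough sketch only (node names and addresses are made up, and the file must be edited following the procedure from the admin guide, including increasing config_version), an additional link in /etc/pve/corosync.conf would look roughly like this:
Code:
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    # existing link 0
    ring0_addr: 192.168.10.1
    # additional fallback link 1
    ring1_addr: 10.20.30.1
  }
  # repeat the same pattern for node2 and node3
}

totem {
  # keep your existing settings here
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
}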

Please give Corosync at least one physical network to itself; it can be a 1 Gbit link. A stable Corosync connection is important, especially if you want to use HA.
 
Thanks Aaron. All my network cards in the Dell R750s are 10G. At present I am planning to attach three sets of NICs in an LACP OVS bond for the Proxmox cluster network. The problem is that I do not have enough 10G ports left in my switch to connect these 6 LAN ports (2 x three servers), which is why I am planning to connect them using a mesh topology; with whatever switch ports are available, I will be able to connect two sets of NICs for the Proxmox cluster network. I had better wait for your testing results before adding these ports; at the moment I am keeping them unused. The remaining 6 ports I am using for the Ceph mesh network (Ceph public), and the normal switch side for the Ceph cluster network in a different VLAN. Kindly guide.
Bhaskar
 
