Proxmox Ceph network configuration

zeuxprox

Renowned Member
Dec 10, 2014
Hi,

first of all I'm very sorry for my poor English...

In a few weeks I have to set up a Proxmox/Ceph cluster of 3 nodes. Each node is both a Ceph node and a Proxmox node on which I virtualize some VMs and some LXC (Debian 9) containers.
Each node has:
CPU: 2 x Intel Xeon Gold 6130
RAM: 196 GB
SSD: 2 x enterprise-class SSD
NVMe: 1 x 800 GB
HDD: 6 x 8 TB HGST Ultrastar HE10
RAID controller: Adaptec ASR-8405
HBA: LSI 9305-16i
4 x 10 Gb NICs
4 x 1 Gb NICs

I'll configure the 2 SSDs in RAID 1 (Proxmox installation). The NVMe is for journaling and the HDDs are for the OSDs.

My doubts are about the network configuration of Ceph and of the Proxmox cluster.
I would like to reach this goal:

Ceph cluster network: 10.10.1.0/24 (1 x 10 Gb port)
Ceph public network: 10.10.2.0/24 (1 x 10 Gb port)

Proxmox cluster network: 10.10.3.0/24 (1 x 1 Gb port)

Proxmox public network: 2 x 1 Gb ports (these ports will be bridged to the VMs/LXC containers)
What I want is for the OSDs to use the Ceph cluster network (10.10.1.0/24) for replication and for all the other work they have to do, while the Ceph public network is used by Proxmox and possibly other clients to read and write to the Ceph storage.
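
To make it concrete, on each node I would assign the ports roughly like this in /etc/network/interfaces (interface names and host addresses are only examples):
Code:
# Ceph cluster network (OSD replication) - 1 x 10 Gb
auto ens1f0
iface ens1f0 inet static
    address 10.10.1.11
    netmask 255.255.255.0

# Ceph public network (Proxmox/clients to Ceph) - 1 x 10 Gb
auto ens1f1
iface ens1f1 inet static
    address 10.10.2.11
    netmask 255.255.255.0

# Proxmox cluster network (corosync) - 1 x 1 Gb
auto eno1
iface eno1 inet static
    address 10.10.3.11
    netmask 255.255.255.0

# the remaining 2 x 1 Gb ports go into the vmbr0 bridge for the VMs/LXC containers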

Is this configuration correct? If yes, how can it be done in Proxmox?

Thank you very much
 
Hi,

the best and easiest way is to change the config /etc/pve/ceph.conf after you call pveceph init.
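
For example, with your subnets the network part of /etc/pve/ceph.conf would look roughly like this after editing (only a sketch, the rest of the file is left out):
Code:
[global]
     public network = 10.10.2.0/24
     cluster network = 10.10.1.0/24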

Your network setup is OK.
But it would be better to use 2 NICs with 2 rings for corosync.
 
So, if I have understood correctly, Proxmox, and possibly other clients, will use the Ceph public network (10.10.2.0/24) to read and write on the Ceph cluster. But if so, I have to add a 10 Gb port to Proxmox and connect it to the Ceph public network, right?

Thank you very much
 
I have to add a 10 Gb port to Proxmox
I do not exactly understand this.
You wrote you would like one 10 GBit port for the Ceph cluster and one 10 GBit port for Ceph public.
 
The Ceph network configuration is:
1 x 10 Gb for the Ceph cluster (communication between OSDs)
1 x 10 Gb for Ceph public (on this interface Ceph listens for client requests)

My doubt is: which interface does Proxmox use to connect to the Ceph public network? I need to use another 10 Gb port connected to the Ceph public network to connect Proxmox to it, right?
 
If your Ceph and Proxmox nodes are the same machines, you need no extra client NIC.

The setup with your requirements works like this.

Create a Proxmox VE cluster with these 3 nodes.

Install ceph with
Code:
pveceph install --version luminous

Then set up the init config with the public network
Code:
pveceph init --network 10.10.2.0/24

After the initial config is done, create a monitor on all three nodes
Code:
pveceph createmon

Now you can add the cluster network to the config:
add this line under the public network entry in /etc/pve/ceph.conf
Code:
cluster network = 10.10.1.0/24

Now you can create your OSDs, which will communicate on the 10.10.1.0/24 network.
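
For the OSD creation itself, with the NVMe as journal device, the command would be something like this (device names are only examples, and the exact journal option can differ between pveceph versions):
Code:
pveceph createosd /dev/sdc --journal_dev /dev/nvme0n1
Repeat this for each of the 6 HDDs on every node.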
 
Corosync with two rings can recover from partial network failures.
Two separate rings are more robust against interference on one network.
It is also one layer of complexity less, because you are using a plain NIC and not a bond.
A bond can add extra latency through the bonding algorithms.
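
For reference, a rough sketch of the two-ring part of /etc/pve/corosync.conf (corosync 2.x), assuming a second subnet like 10.10.4.0/24 for ring 1 (node names and addresses are only examples):
Code:
totem {
  ...
  rrp_mode: passive
  interface {
    ringnumber: 0
    bindnetaddr: 10.10.3.0
  }
  interface {
    ringnumber: 1
    bindnetaddr: 10.10.4.0
  }
}

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.3.1
    ring1_addr: 10.10.4.1
  }
  # same pattern for the other two nodes
}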
 
What can you tell me about a second ring for the Ceph cluster network as well? I was thinking: is there a way to configure a secondary Ceph cluster network that saves me in case the primary Ceph cluster network stops working (switch failure)?

Thank you
 
I am currently testing Proxmox on a configuration with three nodes and Ceph storage on the nodes.
I created a cluster and initialised Ceph with dedicated network equipment (one NIC per node and a switch) for Ceph (public and cluster on the same network).
But this network is a SPOF: if the dedicated switch goes down, Ceph stops.

In a SAN configuration two SAN fabrics are needed, but I know Ceph is not a traditional storage.

So, what is the good practice to avoid a SPOF in our infrastructure?
Maybe bonding? But with this problem:
Bond can produce extra latency through the bonding algorithms.

A better solution?
Thx
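
For example, the bond I have in mind would be an active-backup bond over two switches, roughly like this in /etc/network/interfaces (interface names and address are only examples):
Code:
auto bond0
iface bond0 inet static
    address 10.10.1.21
    netmask 255.255.255.0
    bond-slaves ens1f0 ens1f1
    bond-miimon 100
    bond-mode active-backup
Active-backup needs no LACP on the switches and avoids the hashing algorithms, but only one link carries traffic at a time.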
 
On this config, if I change the networks to:
Ceph public network: 10.10.2.0/24 (1 x 10 Gb port)
Ceph cluster network + Proxmox cluster network: 10.10.1.0/24 (2 x 10 Gb ports, ring 0 and ring 1)
VM public network: 2 x 1 Gb ports (these ports will be bridged to the VMs/LXC containers)
Proxmox management: 1 x 1 Gb port
Is this configuration correct?
 
