Ceph Public and Cluster networks with different configurations

zeuxprox

Renowned Member
Dec 10, 2014
Hi,

within a few weeks I will have to configure a 5-node Proxmox cluster using Ceph as storage. I have 2 Cisco Nexus 3064-X switches (48 ports, 10Gb SFP+) and I would like to configure the Ceph networks on each node as follows:
  • Ceph Public: 2 x 10Gb Linux Bond in Active/Standby mode;
  • Ceph Cluster: 2 x 10Gb Linux Bond in LACP mode;
For the Ceph Cluster network, I will configure a vPC across the two switches (known as MLAG on other vendors' switches). This way I will have 20 Gb/s for the Ceph Cluster network. Consider that each node will have 4 x 6.4 TB Micron 9300 MAX NVMe disks (Proxmox will be installed on 2 x 128 GB SuperDOMs in ZFS RAID 1).
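
For reference, this is a minimal sketch of what I have in mind for the bonds and the Ceph network definitions. The NIC names, IP addresses, and subnets are just placeholders for whatever the hardware actually presents:

    # /etc/network/interfaces (sketch; NIC names and subnets are placeholders)
    auto bond0
    iface bond0 inet static
        address 10.10.10.11/24            # Ceph Public network
        bond-slaves ens1f0 ens1f1
        bond-mode active-backup
        bond-miimon 100

    auto bond1
    iface bond1 inet static
        address 10.10.20.11/24            # Ceph Cluster network
        bond-slaves ens2f0 ens2f1
        bond-mode 802.3ad                 # LACP, switch side in a vPC
        bond-xmit-hash-policy layer3+4
        bond-miimon 100

    # /etc/pve/ceph.conf (excerpt, matching the subnets above)
    [global]
        public_network  = 10.10.10.0/24
        cluster_network = 10.10.20.0/24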
Is this a good configuration or should I change it?

Thank you
 
Hi,

Why only Active/Standby for the Ceph public network?
Ceph works well with LACP because it talks to many different destinations, so the hash spreads the flows across both links.
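
For example, something like this for the public network as well. This is only a sketch: it assumes the public NICs also land on the vPC pair, and the NIC names and subnet are placeholders:

    auto bond0
    iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves ens1f0 ens1f1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4    # hash per TCP connection, so the many
                                          # client/OSD flows use both links
        bond-miimon 100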

(Proxmox will be installed on 2 x 128GB SuperDOM in zfs raid 1).
In the past, these devices have been known for their short lifespans.
I would use enterprise SATA disks instead. They cost the same and last longer.