CEPH : Nice simple ones :)

Apr 18, 2019
After running CEPH for a year, I decided to replace a node, as the performance was not what I expected. (Two days later: insert multiple expletives, one less keyboard and a few grey hairs.)

First thing - double check you have put in the right network card.
-> those 2x 10Gb cards look very similar to those 2x 1Gb cards -> lesson learnt
--> just removing the OSDs from node 3 with the wrong card increased CEPH performance 3x
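A quick way to confirm which card is actually in the box and what speed it negotiated is lspci plus ethtool. A minimal sketch (the ethtool output is canned here for illustration, and the interface name enp3s0 is a placeholder for whatever your node uses):

```shell
# On a real node, list the NICs and check the negotiated speed directly:
#   lspci | grep -i ethernet
#   ethtool enp3s0

# Canned sample of ethtool output for illustration
sample='Settings for enp3s0:
  Speed: 1000Mb/s
  Link detected: yes'

# Extract the negotiated speed and warn if it is not 10Gb
speed=$(printf '%s\n' "$sample" | awk -F': ' '/Speed/ {print $2}')
echo "negotiated: $speed"
if [ "$speed" != "10000Mb/s" ]; then
  echo "WARNING: not running at 10Gb - check the card, port and cable"
fi
```

A healthy 10Gb card should report `Speed: 10000Mb/s`; the 1Gb card in the sample above would have been caught here before it cost a keyboard.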

Second - check the PING -> if it is above 1 ms then something is wrong.
-> turns out one of the 10Gb ports on the switch was dead. Not happy - it took many tests to find it running at about 100Mb but still showing green (10Gb).
--> lesson learnt
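The ping check above can be scripted. A minimal sketch that pulls the average RTT out of a ping summary and flags anything above 1 ms (the summary line is canned here; on real nodes run `ping -c 5 -q <node>` between every pair of hosts):

```shell
# Canned summary line from 'ping -c 5 -q <node>' for illustration
summary='rtt min/avg/max/mdev = 0.118/1.624/5.202/1.891 ms'

# Field 8 (splitting on '/' and spaces) is the average RTT in ms
avg=$(printf '%s\n' "$summary" | awk -F'[/ ]' '{print $8}')
echo "avg rtt: ${avg} ms"

# Anything above 1 ms on a local 10Gb link needs investigating
awk -v a="$avg" 'BEGIN { exit !(a+0 > 1) }' && echo "WARNING: avg > 1 ms - suspect a NIC, switch port or cable"
```

The 1.624 ms average in the sample would trip the warning, which is exactly the symptom the dead switch port produced.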

SO NOW EVERYTHING IS FIXED

This isn't a home network, this is SMB (Small to Medium Business), so flame away.....

Simple question

4 node Proxmox system

2 to 4 OSDs on each node (currently 2x SSD (consumer), and I plan to add 2x ??? (see below))

20-40 Windows VMs in total over all nodes

The nodes' CPUs are Threadripper, Threadripper, i7, and dual E5 (a mixture of AMD and Intel) with lots of RAM.
Running all VMs -> this is the memory usage
[screenshot: memory usage across nodes]

Each node will have 2x 10Gb + multiple 1Gb (once I fix node 3)

Simple question #1

Should I change the CEPH CLUSTER NETWORK to the slower 1Gb network (all the pings now work) rather than use the 10Gb network for both? Apparently it is better.
OR
Given the system isn't very big, should I just leave it? NOTE: the NICs are there and came with the motherboards, doing nothing at the moment.


Simple Question #2

Preamble: With CEPH I understand that enterprise SSDs are the best, and I will only replace the current ones with enterprise SSDs.
-> I had a consumer SSD fail recently and replaced it today - latency was in the 100s and VMs were blue-screening

What are the recommendations for adding an extra (please assume Enterprise or NAS quality) ???
2x 256 GB SSD
or
2x 1 TB HDD

Just to clarify -> in other words, is the network going to throttle the choice of drive regardless?

Will a 10Gb network perform the same with an HDD as with an SSD?

I am running a small test now on the node I have replaced, and it seems to make a huge difference, HOWEVER this is while it is rebalancing.
[screenshots: test results during rebalance]

Simple Question #3
Some of the nodes have 1 or 2 unused 1Gb NICs (maybe more depending on the answer to #1).
Any point in using them as well?
Any hints as to what to do?

thanks
again
Damon
 
Simple Answer 1: NO
Small clusters do not benefit from a separate CEPH cluster network (as opposed to a combined CEPH public/cluster network). See Network Configuration Reference [0]
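For reference, a combined public/cluster setup is simply the absence of a separate `cluster network` line. A minimal sketch of the relevant ceph.conf fragment, assuming the 10Gb subnet is 10.10.10.0/24 (the subnet is a placeholder):

```ini
[global]
    # One fast network for both client (public) and replication traffic.
    # With no separate "cluster network" entry, CEPH uses the public
    # network for both, which is fine for a small cluster like this.
    public network = 10.10.10.0/24
```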

Simple Answer 2: YES
A 10G network will perform the same regardless of the storage device, e.g., the latency and bandwidth don't change. In general, SSD performance will allow fewer devices to saturate that bandwidth compared to HDD, but your decision should be based on your workload and budget. Buy the best performance you can afford for the minimum capacity you need (including immediate growth). However, if you only consider benchmarks, you will build a cluster that doesn't support your business.
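As a rough sanity check on "how many devices saturate a 10Gb link", here is the back-of-envelope arithmetic (the per-device throughput figures are assumptions, typical of spinning disks vs. SATA SSDs; substitute your drives' real numbers):

```shell
# 10GbE is roughly 1250 MB/s of raw bandwidth (ignoring protocol overhead)
link=1250

# Assumed sequential throughput per device, in MB/s
hdd=180   # typical 7200rpm HDD
ssd=500   # typical SATA SSD

# Ceiling division: how many devices it takes to fill the link
echo "HDDs to saturate 10GbE: $(( (link + hdd - 1) / hdd ))"
echo "SSDs to saturate 10GbE: $(( (link + ssd - 1) / ssd ))"
```

With these assumed figures, roughly 7 HDDs or 3 SATA SSDs per node would fill the link, so with only 2-4 OSDs per node the drives, not the 10Gb network, are the likely bottleneck either way.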

Simple Answer 3: YES
For CEPH, use 2x 10Gb in LACP, layer 3+4 hashing, fast rate (assuming your switch supports the same; otherwise, active/backup mode).
For PVE, use 1x 1Gb for Corosync2/Knet dedicated to link 0 -- private (no other traffic) OR 2x 1Gb for Corosync3/Knet dedicated to link 0/link 1, AND 2-4x 1Gb in a bond (LACP or active/backup) for management, VMs, etc. in VLANs.
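The LACP bond for CEPH can be sketched in /etc/network/interfaces like this (a minimal sketch, assuming ifupdown-style config as PVE uses; the interface names and subnet are placeholders):

```ini
# /etc/network/interfaces fragment - the two 10Gb ports bonded for CEPH
auto bond0
iface bond0 inet static
    address 10.10.10.11/24           # CEPH public network (placeholder subnet)
    bond-slaves enp4s0f0 enp4s0f1    # the two 10Gb ports (placeholders)
    bond-mode 802.3ad                # LACP; the switch must be configured to match
    bond-xmit-hash-policy layer3+4
    bond-lacp-rate 1                 # fast rate
    bond-miimon 100
```

If the switch can't do LACP, swap `bond-mode 802.3ad` for `bond-mode active-backup` and drop the hash-policy and lacp-rate lines.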

[0] If that link is down, try the alternative Network Configuration Reference (I liked the original link better).
 
Brilliant, we will look at the best / lowest-latency SSD or HDD we can afford.

Thanks for the answer to #3. I will start to Google what it all means, but I appreciate having a starting point for the next project.

Many Thanks
Damon