Hi Everyone,
I'm about to deploy a sizable CEPH environment for proof-of-concept purposes, and I want to weigh benchmarks against real-world usage. Therefore, I have a few clarifying questions that I don't think my shiny PVE Advanced Certification answered:
- What counts as an RBD client in PVE? (A small illustration of what I mean by "client" follows this list.)
  - Is each resource/guest (VM/LXC) an RBD client?
  - Is each guest virtual disk (or mount point) an RBD client?
- How is the number of clients configured?
  - Are there limits per PVE node or per cluster?
  - Can guests be configured to use more clients (per guest or per virtual disk)?
- What benchmarking guidance would you offer me? (My rough first-pass plan is sketched after this list.)
  - At what point do I need/want to have bare-metal / separate monitor nodes? (as seen in Supermicro's setup, for example)
  - Do monitor nodes really need as much RAM as some of these benchmark setups suggest? My experience tells me Supermicro's approach is perhaps overkill, but NVMe is a different animal to me and we're moving in that direction...
  - Many of the setups use 5 nodes yet only add about 60 OSDs in total. I am looking at approximately 80-90 OSDs across 4 nodes (back-of-envelope RAM math after this list). If the memory is sufficient, does this seem acceptable, or would it be better to go with more nodes?
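To illustrate what I mean by "client" above: even a trivial script like the one below opens its own librados session against the cluster, and as far as I understand it, that session is what the cluster sees as a client. The pool name is a placeholder, and this assumes the python3-rados / python3-rbd packages are installed; it just lists the RBD images backing my guests' disks (PVE names them vm-<vmid>-disk-<n>).

```python
# Minimal sketch using the python3-rados / python3-rbd bindings to list the
# RBD images behind my guests' disks. Pool name "ceph-vm" is a placeholder.
import rados
import rbd

# Connecting like this opens one librados session with the cluster -- the
# kind of "client" I'm asking about above.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('ceph-vm')   # placeholder pool name
    try:
        # Each PVE virtual disk is its own image, e.g. vm-100-disk-0
        for image_name in rbd.RBD().list(ioctx):
            print(image_name)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```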
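On the benchmarking question, this is the rough first pass I had in mind before measuring anything from inside guests: plain rados bench against a throwaway pool. The pool name, runtime, and thread count below are placeholders rather than recommendations, so please tell me if this is the wrong starting point.

```python
# Rough wrapper around "rados bench" for a first pass. Pool name, runtime,
# and thread count are placeholders, not recommendations.
import subprocess

POOL = 'bench-pool'   # placeholder: a throwaway pool dedicated to benchmarking
SECONDS = 60
THREADS = 16

def run(cmd):
    print('>>>', ' '.join(cmd))
    subprocess.run(cmd, check=True)

# Write phase first, keeping the objects around so the read phases have data.
run(['rados', 'bench', '-p', POOL, str(SECONDS), 'write',
     '-t', str(THREADS), '--no-cleanup'])
# Sequential and random read phases against the objects written above.
run(['rados', 'bench', '-p', POOL, str(SECONDS), 'seq', '-t', str(THREADS)])
run(['rados', 'bench', '-p', POOL, str(SECONDS), 'rand', '-t', str(THREADS)])
# Remove the benchmark objects afterwards.
run(['rados', '-p', POOL, 'cleanup'])
```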
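And here is the back-of-envelope RAM math behind the 80-90 OSDs across 4 nodes question, assuming the BlueStore default osd_memory_target of 4 GiB per OSD (which I may well end up tuning):

```python
# Back-of-envelope RAM math for the 80-90 OSDs / 4 nodes question.
# Assumes the BlueStore default osd_memory_target of 4 GiB per OSD.
osds_total = 90            # upper end of my planned OSD count
nodes = 4
osd_memory_target_gib = 4  # default osd_memory_target, in GiB

osds_per_node = osds_total / nodes
osd_ram_per_node_gib = osds_per_node * osd_memory_target_gib

print(f"OSDs per node:    {osds_per_node:.1f}")
print(f"OSD RAM per node: {osd_ram_per_node_gib:.0f} GiB")
```

That works out to roughly 90 GiB per node just for the OSD daemons, before the MON/MGR daemons, the OS, and any guests, which is why I'm second-guessing the node count.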
I've read the following CEPH Benchmarking guides for reference:
- Both PVE CEPH Benchmark PDFs
- Supermicro's CEPH Benchmark PDF
- Dell's CEPH Benchmark PDF
- UTSA's Benchmark PDF (oof)
- CERN's Old 2015 Benchmark PDF
Thanks for any answers, guidance, or help you can offer.
Tmanok