Ceph - question before first setup, one pool or two pools

2*12 core
Watch out for NUMA, best see that the NIC and the SAS/SATA controller sit on the same NUMA node, for lower latency.
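On Linux you can read a PCI device's NUMA affinity straight from sysfs; a minimal sketch (the interface and host paths below are illustrative examples, not from the original post):

```python
# Sketch: find which NUMA node a PCI device (NIC, SAS/SATA HBA) sits on.
# On Linux, <device dir>/numa_node holds the node id; -1 means the
# platform reports no NUMA affinity for that device.
from pathlib import Path

def pci_numa_node(sysfs_device_dir):
    """Return the NUMA node id of a PCI device, or -1 if not reported."""
    node_file = Path(sysfs_device_dir) / "numa_node"
    return int(node_file.read_text().strip())

# Illustrative usage (adjust the interface/host names to your system):
# nic_node = pci_numa_node("/sys/class/net/enp65s0f0/device")
# hba_node = pci_numa_node("/sys/class/scsi_host/host0/device")
# Matching nic_node == hba_node avoids cross-node hops on the I/O path.
```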

each node should give approx 10TB usable redundant data
With a replica of 3, a copy of each object will be on every node, so there is no data redistribution; rebalancing only starts once a fourth node is added. The pool should get ~30 TB; with two disk failures the size would be reduced to ~22 TB.
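The capacity reasoning above can be sketched as a back-of-envelope calculation (my own simplification, not Ceph's actual CRUSH accounting; it ignores nearfull headroom and per-OSD overhead):

```python
# Rough sketch: usable capacity of a replicated pool with failure domain
# "host". With exactly `size` hosts, every host holds one full copy, so
# the smallest host's raw capacity is the ceiling; with more hosts than
# replicas, data spreads out and capacity is roughly sum / size.
def replicated_pool_tb(host_raw_tb, size=3):
    assert len(host_raw_tb) >= size, "need at least `size` hosts"
    if len(host_raw_tb) == size:
        return min(host_raw_tb)        # one replica pinned per host
    return sum(host_raw_tb) / size     # replicas spread across hosts

healthy = replicated_pool_tb([32, 32, 32])   # 8 x 4 TB per node
degraded = replicated_pool_tb([24, 32, 32])  # two 4 TB disks lost on one node
```

With 8 × 4 TB per node this gives 32 TB healthy and 24 TB after two disk failures on one node, which lines up with the ~30 TB / ~22 TB figures above once you subtract some overhead and headroom.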
 
Hence my hint toward AMD, since even on their 64-core (128-thread) flagship there are only two NUMA nodes. I suppose CPUs with a lower core count, e.g. 12-core parts, might even present only one NUMA node. But the latter is just my guess and should be checked beforehand.
 
The Zen 2 architecture has only one NUMA node per socket; NUMA only appears in dual-socket systems. The CPU itself does not have internal NUMA, memory access within the socket is uniform.
 
Wow, all of this fits into a 1U Server?

This setup sounds reasonable. What controller will you be using for the SSDs? Are you going for an Intel or AMD based system? Just asking because AMD made a big leap in performance and seems very good in performance per buck.
The hardware is not the latest (refurbished). This is what we currently plan (waiting for the final quote):
  • Intel® Xeon® Processor E5-2697 v2
  • 12× 32 GB DDR3
  • 2× 40 Gb Mellanox card
  • 2× 128/256 GB SSD in RAID 1 for the OS
  • 8× 4 TB SSD; if prices are too high, we will go for a 2U server with 24× 2 TB drives
I know it is not the newest, but we have already built a fast ZFS server on this hardware and it works quite well. If it doesn't work out, we will get newer hardware and reuse the SSDs (the SSDs cost 3× the hardware :) ).

Watch out for NUMA, best see that the NIC and the SAS/SATA controller sit on the same NUMA node, for lower latency.
We have almost no writes, mostly reads, so latency is not an issue (we are currently on an HDD base, so it should be better even in the worst case).


With a replica of 3, a copy of each object will be on every node, so there is no data redistribution; rebalancing only starts once a fourth node is added. The pool should get ~30 TB; with two disk failures the size would be reduced to ~22 TB.
I know, but if it all works well (stable and passes our testing) we will add more nodes.
 
We have almost no writes, mostly reads, so latency is not an issue (we are currently on an HDD base, so it should be better even in the worst case).
Latency is always there. And it will haunt you in distributed systems. ;)
 
