Hi folks,
Given are 3 nodes:
each node 10 Gb network
each node 8 enterprise spinners, 4 TB each
each node 1 enterprise NVMe, 1 TB
each node 64 GB RAM
each node 4-core CPU -> 8 threads, up to 3.2 GHz
each node 2 more slots available for disks
each node's OS is on a SuperDOM SSD
We want to use BlueStore, 3/2 in the pools.
each node latest Proxmox, of course

pveperf of cpu:
CPU BOGOMIPS: 47999.28
REGEX/SECOND: 2721240
pveperf of superdom:
BUFFERED READS: 247.56 MB/sec
AVERAGE SEEK TIME: 0.11 ms
FSYNCS/SECOND: 322.70
I know - it's a small system, spinners are slow, latencies, etc.
But how can I get the best Ceph performance?
Use the NVMe as a "normal" OSD with the WAL on itself (same as the HDDs)?
Use the NVMe as WAL device for the HDDs? (Problem: if the NVMe breaks, all 8 HDD OSDs are lost?)
Is it worth adding 2 more SSDs per node for the WAL (so 1 SSD per 4 HDDs), or using them as OSDs as well?
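For the split-device layout, a hedged sketch with ceph-volume (device names are placeholders - adjust to your hardware): you point --data at the HDD and --block.db at an NVMe partition; when no separate --block.wal is given, the WAL is placed inside the DB device.

```shell
# Sketch only - device names (/dev/sdb, /dev/nvme0n1p1, ...) are placeholders.
# One NVMe partition per HDD OSD; the WAL lives inside the DB device
# automatically when no separate --block.wal is specified.
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2
# ... repeat for the remaining HDDs / NVMe partitions
```

The failure-domain concern in the question is real: every OSD whose DB/WAL lived on a failed NVMe is lost with it, so 8 HDDs behind 1 NVMe means one NVMe failure takes down all of that node's OSDs.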
Do I need more RAM?
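A rough budget, assuming ~4 GB of RAM per BlueStore OSD as a planning figure (an assumption, not a measured value - actual usage depends on cache settings):

```shell
# Rough planning arithmetic; ~4 GB per BlueStore OSD is a rule of thumb.
OSDS_PER_NODE=8
GB_PER_OSD=4
RAM_GB=64
echo "OSD budget: $((OSDS_PER_NODE * GB_PER_OSD)) GB"
echo "Headroom for OS/MON/VMs: $((RAM_GB - OSDS_PER_NODE * GB_PER_OSD)) GB"
```

By this estimate, half of the 64 GB goes to the OSDs and half remains for everything else on the node.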
1 monitor is running on each node (I could move them to other machines, if recommended).
Would a fourth node give considerably better performance?
Still no jumbo frames - is that really bad?
And still no link aggregation, though it would be possible.
If I start rados bench and watch the node with atop, the NIC reaches up to 50% utilization and several disks vary up to 90% usage - knowing that the WAL is on the same disk.
But the WAL is "small" - and Luminous was made for the WAL on the same disk!?
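For reference, the kind of benchmark run meant above (pool name is a placeholder; --no-cleanup keeps the written objects so a read test can follow):

```shell
# Pool name "bench-test" is a placeholder; create/choose a test pool first.
rados bench -p bench-test 60 write --no-cleanup   # 60 s write benchmark
rados bench -p bench-test 60 seq                  # sequential-read benchmark
rados -p bench-test cleanup                       # remove the benchmark objects
```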
Would CRUSH take the different performance classes (hdd, ssd, nvme) into consideration when optimizing?
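CRUSH does not weigh measured performance on its own - placement follows rules and weights - but since Luminous you can pin pools to a device class via CRUSH rules. A sketch (rule and pool names are placeholders):

```shell
# Rule and pool names are placeholders.
ceph osd crush rule create-replicated rule-hdd  default host hdd
ceph osd crush rule create-replicated rule-nvme default host nvme
ceph osd pool set mypool crush_rule rule-nvme   # pin a pool to NVMe OSDs
```

That way fast and slow pools can coexist on the same cluster instead of mixing classes within one pool.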
Recommendations welcome.