This thread is not about any issue; I am just sharing some benchmark results from one of my Proxmox CEPH setups after recently adding HDDs.
There are 4 benchmarks with the following setups. All benchmarks were run on the same cluster with the same nodes; the only difference is the number of HDDs and SSDs used:
The Setup:
========
Total Nodes : 7; 3 CEPH, 4 Proxmox. All HDDs are in the 3 CEPH nodes, none in the Proxmox nodes.
Network : 1 Gigabit; CEPH and VM traffic separated.
RAM : 32GB in each node.
# of CEPH Replica : 3
# of PG : 768
Type of HDDs for CEPH : Seagate desktop-class drives
Type of SSDs for CEPH : Kingston KC300
CEPH Journal Location : Co-located on each OSD drive
Benchmark command used :
# rados -p test bench -b <block_size> <secondsToRun> <seq/write> -t <numberOfThreads> --no-cleanup
block_size used in command : 4096, 131072, 4194304
secondsToRun used in command : 300
numberOfThreads used in command : 32 (filled-in example commands are shown after the benchmark list below)
First benchmark : 6 OSDs on 6 SSDs, no HDDs
Second benchmark : 6 OSDs on 6 HDDs, no SSDs
Third benchmark : 8 OSDs on 8 HDDs, no SSDs
Fourth benchmark : 26 OSDs on 26 HDDs, no SSDs
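To make the runs easier to reproduce, a full cycle for the 4MB block size looks something like this (pool "test" and the parameters as listed above; the pool creation line just shows one way to get a pool with 768 PGs, not necessarily the exact command used here):
# ceph osd pool create test 768 768
# rados -p test bench -b 4194304 300 write -t 32 --no-cleanup
# rados -p test bench 300 seq -t 32
# rados -p test cleanup
The write pass keeps its objects because of --no-cleanup, the seq pass then reads those objects back (no -b is needed for reads), and the cleanup command removes the benchmark objects afterwards. The same write command was repeated with -b 4096 and -b 131072 for the other block sizes.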
Of course this is not a very exhaustive test, but it does show the general rule that the higher the number of OSDs, the faster CEPH performs. I stuck with desktop-class HDDs to see what performance is possible at the lowest cost, and I also had the HDDs on hand. With 128MB-cache enterprise HDDs the absolute numbers would of course be higher, with the ratios staying about the same. I can already see a big performance jump in all running VMs with 26 OSDs.
Does anybody have benchmark results to share from different hardware platforms they are running?