Not maxing out ceph performance?

Aug 20, 2021

I have a three-node hyper-converged Proxmox cluster running on AMD EPYC 7702 servers in Gigabyte R272-Z31 chassis, with a Ceph storage cluster. The setup is as described in https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster. There is a separate 40 Gbit Ceph storage network, set up as a full mesh as described in https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server. Each node has 20x Micron 5300 960 GB SSDs configured as 20 OSDs, connected through an LSI 9300-8i HBA, so the Ceph cluster has 60 SSDs in total.
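For reference, the layout can be sanity-checked with the standard Ceph status commands (nothing special here, output omitted):

# overall health, monitor/OSD status and current client I/O
ceph -s
# OSD distribution per host and per-OSD utilization
ceph osd df tree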

Looking at the Ceph benchmark 2018 document (https://www.proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark), I don't get roughly the same performance as described there, and I don't understand why, because the SSDs used in that benchmark are comparable to mine.

These are my results (there are a few VMs running on this cluster):

Fio single disk:
root@srv-pve3:~# fio --ioengine=libaio --filename=/dev/sdt --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=fio --output-format=terse,json,normal --output=fio.log --bandwidth-log

Jobs: 1 (f=1): [W(1)][100.0%][w=98.7MiB/s][w=25.3k IOPS][eta 00m:00s]
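As a sketch for comparison (same disk, so equally destructive), the raw sequential bandwidth of a single SSD could be measured by just bumping the block size of the same test:

# same direct sync-write test, but with 4M blocks to measure bandwidth instead of IOPS
fio --ioengine=libaio --filename=/dev/sdt --direct=1 --sync=1 --rw=write --bs=4M --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=fio-bw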

srv-pve:~# iperf3 -c 10.10.10.1
Connecting to host 10.10.10.1, port 5201
[ 5] local 10.10.10.3 port 33248 connected to 10.10.10.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  2.16 GBytes  18.5 Gbits/sec  164   1.19 MBytes
[  5]   1.00-2.00   sec  2.13 GBytes  18.3 Gbits/sec  299   1.23 MBytes
[  5]   2.00-3.00   sec  2.20 GBytes  18.9 Gbits/sec  398   1.39 MBytes
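A single TCP stream usually won't saturate a 40 Gbit link, so ~18.5 Gbit/s on its own doesn't prove a network problem; a quick sketch with iperf3's parallel-streams option gives a better picture of the total usable bandwidth:

# four parallel streams to the same peer; compare the [SUM] line against 40 Gbit
iperf3 -c 10.10.10.1 -P 4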

srv-pve:~# rados bench -p testbench 60 write -b 4M -t 16 --no-cleanup
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
   60      16      2760      2744   182.909         0            -    0.324245
Total time run: 60.6379
Total writes made: 2760
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 182.064
Stddev Bandwidth: 161.395
Max bandwidth (MB/sec): 628
Min bandwidth (MB/sec): 0
Average IOPS: 45
Stddev IOPS: 40.3545
Max IOPS: 157
Min IOPS: 0
Average Latency(s): 0.351396
Stddev Latency(s): 1.03224
Max latency(s): 7.4158
Min latency(s): 0.019684
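Since rados bench with 4M objects mostly measures bandwidth, a small-object variant of the same test (a sketch, reusing the testbench pool) would isolate write IOPS and latency:

# 60s of 4K-object writes, 16 in flight, keeping the objects for read tests
rados bench -p testbench 60 write -b 4096 -t 16 --no-cleanup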


srv-pve:~# rados bench -p testbench 60 seq -t 16
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
    0       0         0         0         0         0            -           0
    1      16       244       228   911.787       912    0.0153362   0.0565442
    2      16       448       432   863.844       816    0.0154637   0.0696878
    3      16       653       637   849.195       820     0.131393   0.0727296
    4      16       798       782   781.877       580    0.0144685   0.0793986
    5      16       964       948   758.286       664    0.0156834   0.0799424
    6      16      1202      1186   790.554       952    0.0132636   0.0775213
    7      16      1427      1411   806.175       900     0.236145   0.0766306
    8      16      1596      1580   789.895       676    0.0134987   0.0786389
    9      16      1778      1762   783.007       728    0.0133519   0.0794539
   10      16      1977      1961   784.279       796     0.200608   0.0791391
   11      16      2195      2179   792.214       872    0.0141996   0.0769084
   12      16      2415      2399   799.501       880    0.0141155   0.0789651
   13      16      2589      2573   791.533       696    0.0141638   0.0799015
Total time run: 13.9748
Total reads made: 2760
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 789.993
Average IOPS: 197
Stddev IOPS: 28.4384
Max IOPS: 238
Min IOPS: 145
Average Latency(s): 0.0801533
Max latency(s): 3.35403
Min latency(s): 0.0121277
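For completeness, random reads over the same objects can be measured with the rand mode (a sketch):

# 60s of random 4M-object reads from the objects left by the write test
rados bench -p testbench 60 rand -t 16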

Fio inside VM (cache = default (no cache)):
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=random_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randwrite --numjobs=10
test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64

Jobs: 10 (f=10): [w(10)] [1.7% done] [0KB/53324KB/0KB /s] [0/13.4K/0 iops] [eta 16m:56s]
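To take the VM's virtual-disk and cache layer out of the equation, fio can also benchmark an RBD image directly from a host with its rbd engine; this is a sketch, and the image name (fio_test) is a placeholder that has to be created first:

# create a throwaway image in the test pool, then drive it directly through librbd
rbd create testbench/fio_test --size 10G
fio --ioengine=rbd --clientname=admin --pool=testbench --rbdname=fio_test --direct=1 --rw=randwrite --bs=4k --iodepth=64 --runtime=60 --time_based --name=rbd-test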

Is this what I can expect from this hardware, or do you think there is a bottleneck somewhere? I do think the IOPS and throughput could be better.
 
