Proxmox Ceph Performance

parker0909

Aug 5, 2019
Hello,

I’m running **Proxmox VE 8.4** with Ceph (Squid/Reef) on 3 identical nodes. All OSDs are SATA SSDs using BlueStore and **direct HBA passthrough (no RAID)**. Yet the **`osd_mclock_max_capacity_iops_ssd`** values are different: some are locked at **~3.7k IOPS**, others at **~41k IOPS**.


This is causing **severe Apply/Commit latency spikes (37–70 ms)** on the low-IOPS OSDs, even under light load.

Is there any way to solve the latency spikes and the inconsistent osd_mclock_max_capacity_iops_ssd values? Thank you.

Parker
 

Attachments

  • ssd.png (95.3 KB)
Tell Ceph to benchmark those drives again on OSD start, and restart the OSD services when appropriate:

Code:
ceph config set osd osd_mclock_force_run_benchmark_on_init true
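Once that option is set, each OSD re-runs the benchmark the next time it starts, so the OSD services need a restart. On a Proxmox node that could look like this (a sketch assuming OSD id 0; use your own IDs, and restart one OSD at a time so the cluster stays healthy in between):

Code:
systemctl restart ceph-osd@0.service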

There's also a ceph tell-style command to run a benchmark right now, but I don't remember it offhand, and its result may be skewed by real I/O hitting the drive. I would also remove the OSD from one disk and run fio benchmarks to find out the drive's real performance (it may be slow for some reason, and Ceph just reports whatever performance it can get from that disk).
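If memory serves, the on-demand benchmark being alluded to is the OSD bench command; treat this as a sketch and check the documentation for your Ceph release:

Code:
ceph tell osd.0 bench

It writes test data through the OSD itself, so run it while the cluster is otherwise idle if you want a meaningful number.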

IMHO it's the other way around: you see latency spikes because the disks are slow/misbehaving.
 
Thank you.
I have tried the fio test and the results seem normal for a SATA SSD:
Code:
read: IOPS=30.7k, BW=120MiB/s (126MB/s)(3602MiB/30001msec)
read: IOPS=30.7k, BW=120MiB/s (126MB/s)(3602MiB/30001msec)

The three OSDs use two different drive models:
2 x Intel SSDSC2KB019T8R
1 x HPE MK001920GWCFB

Thank you.
Parker
 
read: IOPS=30.7k, BW=120MiB/s (126MB/s)(3602MiB/30001msec)
read: IOPS=30.7k, BW=120MiB/s (126MB/s)(3602MiB/30001msec)
That data means little if you don't post the exact fio command you ran. AFAIR, the benchmark Ceph does is a 4k write test to find out the IOPS capacity of the drive, so you should benchmark that with fio. Also, I would run the same benchmark on a host/disk that seems to provide proper performance and compare.
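A starting point for that 4k write test could look like the following. This is a sketch, not a one-to-one reproduction of Ceph's internal bench; /dev/sdX is a placeholder, and writing directly to a device destroys its data, so only use a disk whose OSD you have already removed:

Code:
fio --name=4k-write --filename=/dev/sdX --direct=1 --sync=1 --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --runtime=30 --time_based --ioengine=libaio

The --direct=1/--sync=1 flags matter on SATA SSDs: drives without power-loss protection often collapse to a few thousand sync write IOPS, which would fit the ~3.7k figure Ceph recorded for some OSDs.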