Many thanks for your answer. I configured Ceph from the GUI, and the ceph.conf is as shown below.
ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 172.27.111.0/24
fsid = 6a128c72-3400-430e-9240-9b75b0936015
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
public network = 172.27.111.0/24
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.STO1001]
host = STO1001
mon addr = 172.27.111.141:6789
[mon.STO1002]
host = STO1002
mon addr = 172.27.111.142:6789
[mon.STO1003]
host = STO1003
mon addr = 172.27.111.143:6789
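For reference, the values the daemons are actually running with can be checked through the admin socket on each node (a quick sanity check; osd.0 is just an example id, and each daemon has to be queried on the host it runs on):

ceph daemon mon.STO1001 config get public_network   # effective public network on this monitor
ceph daemon osd.0 config get cluster_network        # effective cluster network on this OSD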
The Infiniband is on a separate network, 10.10.111.0/24, and the public network is 172.27.111.0/24, so do I have to put the following?
cluster network = 10.10.111.0/24
public network = 172.27.111.0/24
[mon.STO1001]
host = STO1001
mon addr = 172.27.111.141:6789
[mon.STO1002]
host = STO1002
mon addr = 172.27.111.142:6789
[mon.STO1003]
host = STO1003
mon addr = 172.27.111.143:6789
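As I understand it, the OSDs use the cluster network for replication and heartbeat traffic, while the monitors always bind to the public network, which is why the mon addr lines stay on 172.27.111.x. To check that the OSDs really moved onto the Infiniband network after the change, something like this should work (a sketch; the osd id is just an example):

systemctl restart ceph-osd.target                      # restart the OSDs so they re-bind to the new cluster network
ceph osd metadata 0 | grep -E 'front_addr|back_addr'   # back_addr should now show a 10.10.111.x address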
With this modification, the benchmark results are as follows:
rados bench -p SSDPool 60 write --no-cleanup
Total time run: 60.470899
Total writes made: 2858
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 189.05
Stddev Bandwidth: 24.8311
Max bandwidth (MB/sec): 244
Min bandwidth (MB/sec): 144
Average IOPS: 47
Stddev IOPS: 6
Max IOPS: 61
Min IOPS: 36
Average Latency(s): 0.338518
Stddev Latency(s): 0.418556
Max latency(s): 2.9173
Min latency(s): 0.0226615
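For completeness, since the write test was run with --no-cleanup, the objects it left behind can be reused for the standard read benchmarks and then removed (these are just the usual rados bench invocations):

rados bench -p SSDPool 60 seq    # sequential read test over the objects written above
rados bench -p SSDPool 60 rand   # random read test over the same objects
rados -p SSDPool cleanup         # delete the benchmark objects when done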