I tried the upgrade as spirit described, but unfortunately there is no difference in performance at all. Then again, it is just two OSDs on each of the two nodes.
[global]
debug lockdep = 0/0
debug context = 0/0
debug crush = 0/0
debug buffer = 0/0
debug timer = 0/0
debug journaler = 0/0
debug osd = 0/0
debug optracker = 0/0
debug objclass = 0/0
debug filestore = 0/0
debug journal = 0/0
debug ms = 0/0
debug monc = 0/0
debug tp = 0/0
debug auth = 0/0
debug finisher = 0/0
debug heartbeatmap = 0/0
debug perfcounter = 0/0
debug asok = 0/0
debug throttle = 0/0
osd_op_threads = 5
filestore_op_threads = 4
osd_op_num_threads_per_shard = 1
osd_op_num_shards = 25
filestore_fd_cache_size = 64
filestore_fd_cache_shards = 32
ms_nocrc = true
ms_dispatch_throttle_bytes = 0
cephx sign messages = false
cephx require signatures = false
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
osd_client_message_size_cap = 0
osd_client_message_cap = 0
osd_enable_op_tracker = false
osd_op_threads = 5
filestore_op_threads = 4
osd_op_num_threads_per_shard = 1
osd_op_num_shards = 25
filestore_fd_cache_size = 64
filestore_fd_cache_shards = 32
Yes, I put it in my configs as soon as you posted it here.
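For what it's worth, you can confirm a running OSD actually picked these values up via its admin socket (a quick sketch; osd.0 stands in for whatever OSD ids you have):

# query the live value of a tuned option on one OSD
ceph daemon osd.0 config get osd_op_num_shards
ceph daemon osd.0 config get filestore_op_threads
# or change an option at runtime without restarting the daemons
ceph tell osd.* injectargs '--osd_enable_op_tracker=false'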
Maybe there is something bad in my setup. I tried this command on one of the storage nodes: rados -p ssd bench 300 write -b 4194304 -t 1 --no-cleanup, with results around 112 MB/s. iperf shows around 9.4 Gbit/s.
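In case someone wants to reproduce it, the full sequence looks like this (a sketch; ssd is just my pool name, and the seq test reads back the objects that --no-cleanup left behind):

# 300 s of 4 MB writes with a single outstanding op, keeping the objects
rados -p ssd bench 300 write -b 4194304 -t 1 --no-cleanup
# sequential reads of the objects written above, same pool
rados -p ssd bench 300 seq -t 1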
Which I/O scheduler do you use on your hosts?
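For comparison, this is how it can be checked and switched (a sketch; sdb is a placeholder for the SSD device, and the echo needs root):

# the scheduler shown in brackets is the active one
cat /sys/block/sdb/queue/scheduler
# switch to noop for an SSD (lasts until reboot)
echo noop > /sys/block/sdb/queue/scheduler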
I tried -t 5 and now it is around 160 MB/s, and about 130 MB/s for -t 10.
That was with a 4 MB block size; with 128 KB it is 70 MB/s. I used the command: rados -p ssd bench 300 write -b 131072 -t 5 --no-cleanup
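Back-of-the-envelope, that drop looks latency-bound rather than bandwidth-bound (a rough sketch of the math, taking the reported averages at face value):

# 70 MB/s at 128 KB objects:
echo "70*1024/128" | bc            # ~560 write ops/s in total
# spread over -t 5 outstanding ops:
echo "scale=1; 5*1000/560" | bc    # roughly 8.9 ms per 128 KB write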