Tuning performance in VM with scheduler

I tried to upgrade as spirit has written, but unfortunately there is no difference in performance at all. But it is just two OSDs on each of the two nodes.
 

Have you tried with my config?

Code:
        debug lockdep = 0/0
        debug context = 0/0
        debug crush = 0/0
        debug buffer = 0/0
        debug timer = 0/0
        debug journaler = 0/0
        debug osd = 0/0
        debug optracker = 0/0
        debug objclass = 0/0
        debug filestore = 0/0
        debug journal = 0/0
        debug ms = 0/0
        debug monc = 0/0
        debug tp = 0/0
        debug auth = 0/0
        debug finisher = 0/0
        debug heartbeatmap = 0/0
        debug perfcounter = 0/0
        debug asok = 0/0
        debug throttle = 0/0

        osd_op_threads = 5
        filestore_op_threads = 4

        osd_op_num_threads_per_shard = 1
        osd_op_num_shards = 25
        filestore_fd_cache_size = 64
        filestore_fd_cache_shards = 32

        ms_nocrc = true
        ms_dispatch_throttle_bytes = 0

        cephx sign messages = false
        cephx require signatures = false

[osd]
        keyring = /var/lib/ceph/osd/ceph-$id/keyring
        osd_client_message_size_cap = 0
        osd_client_message_cap = 0
        osd_enable_op_tracker = false

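If it helps, you can confirm which values an OSD is actually running with by querying its admin socket (just a sketch; osd.0 and the grep pattern are only examples taken from the options above):

Code:
        # run on the node hosting osd.0: show its running configuration via the admin socket
        ceph daemon osd.0 config show | grep -E 'osd_op_num_shards|osd_op_threads|filestore_op_threads'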

The Giant improvement is here:

Code:
        osd_op_threads = 5
        filestore_op_threads = 4
        osd_op_num_threads_per_shard = 1
        osd_op_num_shards = 25
        filestore_fd_cache_size = 64
        filestore_fd_cache_shards = 32
 
Yes, I put it in my configs as soon as you posted it here.

Maybe there is something bad in my setup. I tried this command on one of the storage nodes: rados -p ssd bench 300 write -b 4194304 -t 1 --no-cleanup, with results around 112MB/s. Iperf is showing around 9.4Gbit/s.
Which I/O scheduler do you use on the hosts?
 
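For reference, the active I/O scheduler can be checked and changed per block device like this (just a sketch; sdb is only an example device name):

Code:
        # the scheduler shown in brackets is the active one
        cat /sys/block/sdb/queue/scheduler
        # switch to noop, often used for SSD-backed OSDs (assumption: the OSDs are on SSDs)
        echo noop > /sys/block/sdb/queue/scheduler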

Try to increase the -t value (number of concurrent threads).
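For example, the same benchmark with more concurrent operations (16 is only an example value):

Code:
        rados -p ssd bench 300 write -b 4194304 -t 16 --no-cleanup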
 
That was for a 4M block size; for 128K it is 70MB/s. I used the command rados -p ssd bench 300 write -b 131072 -t 5 --no-cleanup
For the 4M block size the CPUs are at around 18%, with 128K around 50%, and around 100% with 8K.
 

Oh sorry, I misread. It is strange that you can't write more than 160MB/s; I think you could reach a little more.

Keep in mind that you need to write the data twice, journal + data. So, if one disk can reach 500MB/s, you can write 250MB/s max per disk.

What is the replication level of your pool? (You have only 2 disks to test, so maybe try with replication x1 to see how much bandwidth you can reach by writing to the two disks.)
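A quick way to check and temporarily change that (just a sketch, assuming the pool is really called "ssd"; remember to set it back after the test):

Code:
        # show the current replication level (size) of the pool
        ceph osd pool get ssd size
        # set it to 1 for the bandwidth test only
        ceph osd pool set ssd size 1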


About the CPU usage: that seems to be normal, the more IOs, the more CPU.
 
