Performance decrease after Octopus upgrade

alyarb

Renowned Member
Feb 11, 2020
Just throwing this out there to see if anyone has experienced anything similar.

Under Nautilus, our Windows VMs were able to do about 1.5 GB/sec sequential read, and 1.0 GB/sec sequential write.

Under Nautilus, our rados bench was showing us 2.0 GB/s sequential read and write, and this was sustainable no matter how long I ran the test. The difference between Windows and rados bench performance always struck me as odd, but nothing came of it.

After upgrading to Octopus, our rados bench numbers are about the same, but Windows sequential performance has dropped to about 900 MB/sec on read and 400 MB/sec on writes.

Are there new tunables relating to the RBD client? What should I be looking for?

Thanks for any hints or anecdotes
 
Maybe try adding "bluefs_buffered_io = true" to ceph.conf.

This was true by default on Nautilus but was changed to false by default on Octopus (because of a potential bug with swap), and that can impact some workloads. The Ceph devs are talking about switching it back to true by default soon.
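For anyone who wants something concrete, here is a minimal sketch of what that could look like; putting it in an [osd] section is my assumption, so check it against how your own ceph.conf is laid out:

Code:
[osd]
    bluefs_buffered_io = true

On Octopus you could alternatively push it through the centralized config database with "ceph config set osd bluefs_buffered_io true"; either way, note the restart advice further down the thread.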
 
Thanks. Can I just edit the file directly and let corosync take care of the rest, or is there a command I need to run to reload the conf on all nodes?

Do you think this explains the difference between Windows / RBD and rados bench?
 
You should restart the OSD services.
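A sketch of how that restart might look on each node; restarting one OSD at a time and letting the cluster settle in between is my own cautious assumption, not something required above:

Code:
# restart a single OSD (replace 0 with the OSD id), then check cluster health
systemctl restart ceph-osd@0.service
ceph -s

# or restart every OSD on this node at once
systemctl restart ceph-osd.target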
 
Very nice, back above 1 GB/s on write.

Removing the SSD emulation gave a small boost as well.
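For anyone hunting for the same setting: the SSD emulation in question is the ssd flag on the VM disk in Proxmox. A hypothetical example of turning it off (the VM ID, bus and volume name are placeholders, and re-specifying a disk like this replaces its other options, so carry over whatever your existing line has, e.g. discard or cache settings):

Code:
# hypothetical -- 100, scsi0 and the volume name are placeholders for your VM
qm set 100 --scsi0 cephpool:vm-100-disk-0,ssd=0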