Performance decrease after Octopus upgrade

alyarb

Just throwing this out there to see if anyone has experienced anything similar.

Under Nautilus, our Windows VMs were able to do about 1.5 GB/sec sequential read, and 1.0 GB/sec sequential write.

Under Nautilus, our rados bench numbers were about 2.0 GB/s sequential read and write, and this was sustainable no matter how long I ran the test. The difference between Windows and rados bench performance always struck me as odd, but nothing came of it.
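(For reference, these figures come from the usual rados bench sequence; something along these lines, with the pool name as a placeholder rather than the exact invocation used here:)

    rados bench -p testpool 60 write --no-cleanup   # 60s sequential writes, keep the objects for the read pass
    rados bench -p testpool 60 seq                  # 60s sequential reads of the objects just written
    rados -p testpool cleanup                       # remove the benchmark objects afterwards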

After upgrading to Octopus, our rados bench numbers are about the same, but Windows sequential performance has dropped to about 900 MB/sec on reads and 400 MB/sec on writes.

Are there new tunables relating to the RBD client? What should I be looking for?

Thanks for any hints or anecdotes
 
Maybe try adding "bluefs_buffered_io = true" to ceph.conf.

This defaulted to true on Nautilus and was changed to false by default on Octopus (because of a potential bug with swap), but that can hurt some workloads. The Ceph devs are talking about changing the default back to true soon.
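A minimal sketch of what that looks like in ceph.conf (exact section placement is up to you):

    [osd]
    bluefs_buffered_io = true

Depending on the Octopus point release it may also be settable live with "ceph config set osd bluefs_buffered_io true"; otherwise the OSDs need a restart to pick it up.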
 
Thanks. Can I just edit the file directly and let corosync take care of the rest, or is there a command I need to run to reload the config on all nodes?

Do you think this explains the difference between Windows/RBD and rados bench performance?
 
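For what it's worth, on a stock Proxmox VE install /etc/ceph/ceph.conf is a symlink to the copy on the shared pmxcfs, so editing it once is enough for all nodes; the OSDs just need a restart to pick the value up. Roughly (the OSD id below is only a placeholder):

    nano /etc/pve/ceph.conf                 # the cluster-wide copy, synced via pmxcfs
    systemctl restart ceph-osd@0.service    # repeat per OSD, one at a time, on each node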
Very nice, back above 1 GB/s on write.

Removing the SSD emulation gave a small boost as well.
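For anyone wanting to try the same: SSD emulation is the ssd=1 flag on the disk line in the VM config under /etc/pve/qemu-server/. The lines below are only an illustration (storage name and VM id are placeholders, not the actual config from this thread), and the change takes effect after the VM is fully stopped and started again:

    # before
    scsi0: ceph-vm:vm-101-disk-0,discard=on,ssd=1,size=100G
    # after
    scsi0: ceph-vm:vm-101-disk-0,discard=on,size=100G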
 
