POSIX AIO Ceph performance

Xavier Trilla

Mar 10, 2017
Hi,

I'm running some tests against a Ceph RBD full-SSD pool from inside a VM, and I'm getting rather strange results.

I'm using the fio tool, running the tests with the following command lines:

fio --name=randread-posix --output ./test --runtime 60 --ioengine=posixaio
--buffered=0 --direct=1 --rw=randread --bs=4k --size=1024m --iodepth=32

fio --name=randread-libaio --output ./test --runtime 60 --ioengine=libaio
--buffered=0 --direct=1 --rw=randread --bs=4k --size=1024m --iodepth=32

The first one uses POSIX AIO, and I'm only getting 1k IOPS; with the second one, using libaio, I'm getting up to 32k IOPS.

I'm seeing a huge difference in performance between POSIX AIO and libaio, and I can't really find an explanation. It looks like there is some bottleneck in QEMU or librbd that I can't identify.

I'm using Proxmox 4 to run the VM (Ceph Hammer librbd and KVM 2.7).

Any help or ideas are welcome :)

Thanks!