Hi,
I'm running some tests against a Ceph RBD full-SSD pool from inside a VM, and I'm getting quite strange results.
I'm using the fio tool to perform the tests, with the following command lines:
fio --name=randread-posix --output ./test --runtime 60 --ioengine=posixaio \
    --buffered=0 --direct=1 --rw=randread --bs=4k --size=1024m --iodepth=32
fio --name=randread-libaio --output ./test --runtime 60 --ioengine=libaio \
    --buffered=0 --direct=1 --rw=randread --bs=4k --size=1024m --iodepth=32
The first one uses POSIX AIO, and I'm only getting about 1k IOPS; with the second one, using libaio, I'm getting up to 32k IOPS.
That's a huge difference in performance between POSIX AIO and libaio, and I cannot really find an explanation. It looks like there is some bottleneck in QEMU or librbd that I cannot identify.
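One thing I'm thinking of trying next (just a hypothesis on my part, not something I've confirmed): glibc implements POSIX AIO with user-space threads rather than true kernel async I/O, so the posixaio run may be effectively serialized regardless of the requested iodepth. A libaio run forced to queue depth 1 should make for a close comparison:

```shell
# Hypothesis check (unverified): if the posixaio path is effectively
# serialized, a libaio run pinned to iodepth=1 should land near the
# same ~1k IOPS as the posixaio run above.
fio --name=randread-libaio-qd1 --output ./test-qd1 --runtime 60 \
    --ioengine=libaio --direct=1 --rw=randread --bs=4k --size=1024m \
    --iodepth=1
```

If the numbers match, the gap would be about effective queue depth rather than QEMU or librbd.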
I'm using Proxmox 4 to run the VM (Ceph Hammer librbd and KVM 2.7).
Any help or ideas are welcome.
Thanks!