QEMU/KVM + Ceph librbd performance tuning

Hmm, I gave it a quick glance, and a lot of these things are either the default already or can be done easily: setting the disk drive's cache mode to none, for example, or using a dedicated IO thread for each disk image of the VM (see the example below).
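
As a minimal sketch of how those two settings could look on Proxmox VE (the VM ID 100, the storage name ceph-pool, and the disk volume name are placeholders; iothread=1 only takes effect with a per-disk controller such as virtio-scsi-single):
Code:
root@cephtest1:~# qm set 100 --scsihw virtio-scsi-single
root@cephtest1:~# qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=none,iothread=1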

The librbd linked by qemu:
Code:
root@cephtest1:~# ldd /usr/bin/kvm  | grep librbd
    librbd.so.1 => /lib/librbd.so.1 (0x00007f89397a3000)
depends on the installed version. On my test system, which is currently running Pacific (16):
Code:
root@cephtest1:~# ls -la /lib/librbd.so.1
lrwxrwxrwx 1 root root 16 Oct 19 07:27 /lib/librbd.so.1 -> librbd.so.1.16.0
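
To confirm which librbd a running VM has actually mapped (rather than what ldd resolves for a fresh process), one could also inspect the kvm process's memory maps; the VM ID 100 and the PID file location are assumptions based on where Proxmox VE keeps them:
Code:
root@cephtest1:~# grep librbd /proc/$(cat /var/run/qemu-server/100.pid)/maps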

The memory allocator is one thing that could improve the performance, but I cannot answer that right away.
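
As a quick, hedged check: which allocator a QEMU process is using shows up in its memory maps, since tcmalloc or jemalloc appear as mapped libraries while glibc malloc leaves no extra entry (again assuming VM ID 100):
Code:
root@cephtest1:~# grep -E 'tcmalloc|jemalloc' /proc/$(cat /var/run/qemu-server/100.pid)/maps || echo "glibc malloc (default)"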
 

Just wondering whether changing the allocator would have an impact on QEMU performance or not?
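
For anyone who wants to experiment, one possible way to test this would be to dump the VM's start command with qm showcmd and launch it manually with tcmalloc preloaded; the library path below is the one Debian's libtcmalloc-minimal4 package installs, VM ID 100 is a placeholder, and the VM must be stopped first:
Code:
root@cephtest1:~# apt install libtcmalloc-minimal4
root@cephtest1:~# qm showcmd 100 > /tmp/start-vm-100.sh
root@cephtest1:~# LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4 bash /tmp/start-vm-100.sh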
 
