[SOLVED] kvm read ceph single thread limit ?

gz_jax

New Member
Mar 18, 2020
Hello:
I recently ran into a problem with random read IOPS while doing performance tests.
The setup I built is: PVE 5.4 + Ceph 12.2.12
3 hosts, each with 1 SSD + 3 HDDs (the SSD holds the DB and WAL, the HDDs are the OSDs)
The KVM disk is set to cache=writeback, with VirtIO SCSI single selected as the controller.
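For reference, the relevant part of the VM config looks roughly like this (pool name, VM ID and disk name are placeholders, not taken from the actual setup):

    scsihw: virtio-scsi-single
    scsi0: <ceph-pool>:vm-<vmid>-disk-0,cache=writeback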
Testing random read latency with iodepth=1, numjobs=1: the average read IOPS is 200+.
As shown below:
1.png

Testing random read latency with iodepth=2, numjobs=1: the average read IOPS is 500+.
As shown below:
2.png

Testing random read latency with iodepth=1, numjobs=2: the average read IOPS is 500+.
As shown below:
3.png
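The tests above were run with fio; a minimal sketch of the kind of command used (block size, runtime and target device are assumptions, not shown in the screenshots):

    # 4k random read, single queue depth, single job, direct I/O against the VM disk
    fio --name=randread --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --iodepth=1 --numjobs=1 --runtime=60 --time_based --filename=/dev/sdb

Only --iodepth and --numjobs were varied between the three runs above.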

After testing, I found that single-I/O, single-threaded IOPS is capped at a fixed value. Is this normal? If not, how can I tune it to improve things, e.g. by adjusting the KVM configuration, the guest operating system, or Ceph itself?

Thanks a lot!
 
The setup I built is: PVE 5.4 + Ceph 12.2.12
Please upgrade; Proxmox VE 5.4 and Ceph Luminous will go EoL soon.
https://pve.proxmox.com/wiki/FAQ

After testing, I found that single-I/O, single-threaded IOPS is capped at a fixed value. Is this normal? If not, how can I tune it to improve things, e.g. by adjusting the KVM configuration, the guest operating system, or Ceph itself?
Please describe your setup in detail. What do the complete test results look like? And did you test Ceph itself, e.g. with rados bench?
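A typical rados bench sequence looks like this (pool name and runtime are placeholders):

    rados bench -p <pool> 60 write --no-cleanup   # write test objects first
    rados bench -p <pool> 60 seq                  # sequential read
    rados bench -p <pool> 60 rand                 # random read
    rados -p <pool> cleanup                       # remove the benchmark objects afterwards

That way you can compare raw cluster performance against what the VM sees.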
 
