rados bench uses 16 threads by default.
KVM provides a single IO thread.
So I ran rados bench with one thread, and its performance is nearly identical to what I see inside the VM.
It drastically improves if the data is already in the cache on the Ceph server.
Read first time = slow
Re-read = really fast
The slow initial read is caused by the latency of reading the data from the disk.
This latency can be masked by using multiple IO threads.
I've thought of using multiple disks in the VM and doing software RAID0. rados bench with three threads gives me about 100MB/sec read, so three disks in the VM should be adequate to get an acceptable level of performance.
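For reference, this is roughly how the thread-count comparison and the in-guest RAID0 could be done. The pool name and the guest device names are just examples, adjust them for your setup:

```shell
# Compare single-threaded vs. multi-threaded sequential read throughput.
# "rbd" is an example pool name; seq needs data left behind by a prior
# write run, hence --no-cleanup.
rados bench -p rbd 60 write --no-cleanup
rados bench -p rbd 60 seq -t 1    # one thread, like the VM sees
rados bench -p rbd 60 seq -t 3    # three threads

# Inside the VM: stripe three virtual disks with software RAID0.
# /dev/vdb../dev/vdd are example device names.
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/vdb /dev/vdc /dev/vdd
mkfs.ext4 /dev/md0
```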
But Proxmox backup is still limited to one thread and is slow.
Is there anything I could tune on the Ceph server that would help improve single-threaded IO?
SSD Cache Tier to mask the problem?
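In case it helps, a cache tier in front of the data pool would be set up roughly like this. Pool names are hypothetical, and a cache tier needs careful sizing and eviction limits before it's safe to rely on:

```shell
# Put an SSD-backed pool "cache" in front of the data pool "rbd"
# (both names are examples).
ceph osd tier add rbd cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay rbd cache

# A hit-set and a size limit are needed so the tier knows what to
# promote and when to flush/evict (values are examples only).
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cache target_max_bytes 100000000000
```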
@spirit
That's from another thread: https://forum.proxmox.com/threads/vm-lockups-with-ceph.20348/#post-103774
"one thing possible currently: assign the same disk multiple times, and do some multipathing inside the guest. Like this I have been able to reach 90000 IOPS with 1 disk (3x virtio disks at 30000 IOPS each + iothreads + krbd)."
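For anyone else reading, a rough sketch of what spirit's approach might look like. The VMID, storage name, and disk name are hypothetical, and you should verify that multipath really groups the duplicate paths into one map before trusting it with data:

```shell
# On the Proxmox host: attach the same RBD image as additional
# virtio disks, each with its own IO thread.
# (VMID 100 and "ceph-storage:vm-100-disk-0" are examples.)
qm set 100 --virtio1 ceph-storage:vm-100-disk-0,iothread=1
qm set 100 --virtio2 ceph-storage:vm-100-disk-0,iothread=1

# Inside the guest: install multipath tools and check that the
# duplicate paths were detected and grouped.
apt-get install multipath-tools
multipath -ll
```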
I've never done anything with multipath, could you share some additional details?
Is this safe to do?
That seems like a much better idea than software RAID0 in the VM, but it still leaves Proxmox backup with only a single slow thread.
I was getting ready to post the above when you said this:
"this is because the VM uses one thread only - if you start rados bench with one thread only (default is 16) the result looks perhaps similar."