[SOLVED] VM disk over Ceph is very slow, but rbd bench is good

jic5760

Member
Nov 10, 2020
RBD Bench:
```
# rbd bench --io-total=1G --io-size=1M --io-type read --io-threads 1 --io-pattern seq .../vm-1004-disk-0
bench type read io_size 1048576 io_threads 1 bytes 1073741824 pattern sequential
SEC       OPS   OPS/SEC   BYTES/SEC
  1       138    144.19   151191864.43
  2       355    184.64   193612603.07
  3       632    218.27   228873951.46
  4       838    217.13   227675080.51
elapsed: 4 ops: 1024 ops/sec: 223.97 bytes/sec: 234846661.28
```
That seems good (~234 MB/s).
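One thing to keep in mind: this bench issues 1 MB sequential reads with a single thread, which may not match what the guest actually sends. Re-running it with a smaller request size should give a number closer to the per-request latency limit; the 64K size below is just an example, and the image path is elided as above:

```
# Same image, smaller requests, still a single in-flight request,
# to approximate the guest's I/O pattern rather than a best-case stream.
rbd bench --io-total=1G --io-size=64K --io-type read --io-threads 1 --io-pattern seq .../vm-1004-disk-0
```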

But when the same disk is attached to a VM, read performance is poor.

```
# dd if=/dev/sdb of=/dev/null bs=(BLOCK_SIZE) status=progress
```
I tested block sizes from 512 bytes up to 4 MB, but throughput tops out at about 22 MB/s.
The same is true on Windows.
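For reference, a block-size sweep can be scripted like this; the block sizes and the 1 GiB total per pass are examples, and /dev/sdb is assumed to be the RBD-backed disk inside the guest:

```
# Read 1 GiB at each block size so the results are comparable.
# Works with both GNU dd (Linux) and BSD dd (FreeBSD); dd prints its
# summary to stderr, so redirect it and keep only the throughput line.
for bs in 512 4096 65536 1048576 4194304; do
    echo "== bs=$bs =="
    dd if=/dev/sdb of=/dev/null bs=$bs count=$((1073741824 / bs)) 2>&1 | tail -n 1
done
```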

I am using VirtIO SCSI.

The guest OS is FreeBSD, so I can't find a read-ahead cache option.
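For what it's worth, FreeBSD does expose a cluster read-ahead limit via sysctl. The value below is just an example, and because it applies to filesystem cluster reads it may not change a raw dd against /dev/sdb at all:

```
# Show and raise FreeBSD's cluster read-ahead limit (in blocks).
sysctl vfs.read_max
sysctl vfs.read_max=128
# Make it persistent across reboots:
echo 'vfs.read_max=128' >> /etc/sysctl.conf
```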

---

With the "cache = Write through, Discard = 1" option,
On a Windows guest, I get 200MB/s or more with the command "dd ... iflag=direct bs=4194304".
In FreeBSD, it is difficult to exceed 20MB/s because increasing bs doesn't increase the speed.
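Since FreeBSD itself does not issue much read-ahead, another knob worth looking at is librbd's own readahead on the Proxmox host. This is only a sketch; the values are examples rather than tuned recommendations, and the VM has to be powered off and on again so QEMU reopens the image with the new settings:

```
# librbd can prefetch ahead of sequential reads on behalf of the guest.
ceph config set client rbd_readahead_trigger_requests 4
ceph config set client rbd_readahead_max_bytes 4194304
# By default readahead disables itself after ~50 MB read; 0 keeps it on.
ceph config set client rbd_readahead_disable_after_bytes 0
```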
 
