Ceph Performance inside Client

udo

Hi,
I have a 3-node Ceph cluster with SSD journals and a 10GbE connection.
Inside a VM (wheezy) I get > 100 MB/s throughput with dd - but if I cp (or dd) a big file on the same rbd disk, the performance drops to 12 MB/s.
The network performance between host and Ceph nodes is 9.7 Gbit/s (iperf).
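To give an idea, the tests look roughly like this (device names, paths and sizes are only examples, not the exact commands):

dd if=/dev/zero of=/mnt/testfile bs=1M count=4096 conv=fdatasync   # sequential write inside the VM, > 100 MB/s
dd if=/mnt/bigfile of=/mnt/bigfile.copy bs=1M                      # copy on the same rbd disk, drops to ~12 MB/s
iperf -c <ceph-node-ip>                                            # network check between host and a Ceph node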

I have tried different cache options, different OSes, and different filesystems inside the VM...

Any hint?

Udo
 

Maybe it's related to the filesystem? (But the overhead seems huge.)
Try a simple filesystem without a journal, ext2 for example, to compare.
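For example (vdb1 is only a placeholder for a spare test disk inside the VM):

mkfs.ext2 /dev/vdb1            # plain ext2, no journal
mount /dev/vdb1 /mnt/ext2test  # then repeat the dd/cp test on /mnt/ext2test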

Also, are your partitions correctly aligned? (Alignment at sector 2048 (1 MB) is best for SSDs.)
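You can check with something like this (sdb is only an example device):

parted /dev/sdb unit s print           # start sector should be a multiple of 2048
parted /dev/sdb align-check optimal 1  # reports whether partition 1 is optimally aligned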

Maybe Stefan Priebe on the pve-devel mailing list can help you; he's using rbd in production and has done a lot of tests.
 

Hi Spirit,
thanks for the answer.
The alignment of the SSD is OK (created with parted, 2048s as start) - I guess the LVM on top doesn't break the alignment?!

But anyway - I have done more tests, and if I read with one VM from one rbd disk and write with another VM to another rbd disk, the performance doesn't stall.

The same happens inside one VM if I change the I/O scheduler from cfq to deadline! With this scheduler I get transfer rates around 60 MB/s (reading and writing). Not perfect, but much better than 12 MB/s.
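For reference, switching the scheduler inside the VM works roughly like this (vda is only an example device name):

cat /sys/block/vda/queue/scheduler              # shows e.g. noop deadline [cfq]
echo deadline > /sys/block/vda/queue/scheduler  # switch at runtime, not persistent across reboots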

I will do further tests and try to speed up the rbd storage.

Udo
 
For the scheduler, cfq is not recommended when using SSDs; noop or deadline is recommended. Read a lot more here: https://wiki.archlinux.org/index.php/Solid_State_Drives
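If you want that to persist across reboots, one common approach (only a sketch, adjust the device match to your setup) is a udev rule:

# /etc/udev/rules.d/60-ssd-scheduler.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"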

As regards LVM, there is no problem, since LVM will auto-detect the alignment of the underlying disk. See
- md_chunk_alignment
- default_data_alignment
- data_alignment_detection
- data_alignment
- data_alignment_offset_detection

in /etc/lvm/lvm.conf
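These live in the devices section of lvm.conf; the defaults look roughly like this (values may differ between versions, so check your own file):

devices {
    md_chunk_alignment = 1
    default_data_alignment = 1
    data_alignment_detection = 1
    data_alignment = 0
    data_alignment_offset_detection = 1
}

You can verify the resulting data offset of a PV with: pvs -o +pe_start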
 
Hi mir,
thanks for the info - I will try deadline tomorrow on the three Ceph nodes.

Udo
 
