cephfs speed

coolcat1975

Hello everyone!

I have a 3-node cluster running the current 5.2 with Ceph.

I installed CephFS afterwards and so far it runs fine.

I then mounted the CephFS pool locally on the nodes.

When I now run a backup of a VM onto this pool, I get data rates of around 120 MB/s.

The cluster network, however, is a 10 Gb network.

That seems slow to me with 10 Gb underneath; I would have expected 500-700 MB/s.

The disks are SAS spinners, and the 3 nodes are identical.

The Ceph settings are at their defaults.

Best regards,

Karl
 
500 MB/s is probably not possible with so few spinners. What write rates does a rados benchmark show?

> rados -p POOL bench 60 write
 
Total time run: 60.131758
Total writes made: 14312
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 952.043
Stddev Bandwidth: 138.221
Max bandwidth (MB/sec): 1124
Min bandwidth (MB/sec): 596
Average IOPS: 238
Stddev IOPS: 34
Max IOPS: 281
Min IOPS: 149
Average Latency(s): 0.06721
Stddev Latency(s): 0.0662791
Max latency(s): 2.31694
Min latency(s): 0.011011
Cleaning up (deleting benchmark objects)
Removed 14312 objects
Clean up completed and total clean up time :3.696276

With RBD everything is OK; CephFS is slow.

There are 10 SAS spinners = 10 OSDs per PVE node, 30 OSDs in total.
 
The write bandwidth is only half the battle. I suppose the CephFS is on the same cluster as your RBD images, so the backup does a read from Ceph and a write onto CephFS, which in turn writes to Ceph again. On top of that, the online backup needs to read blocks from the VM before they are overwritten, so you end up with 1x read and 2x writes [1]. And AFAIR, the block size is 4 KB, written sequentially.

To find out what CephFS is capable of, use fio [2]. That way you can measure the CephFS write bandwidth without the extra steps described above. You can find the parameters to use in our Ceph Benchmark paper [3]. As vzdump writes with sync, a follow-up test should reflect this to get a better estimate of what the write speed could look like; see the sketch below.
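A minimal sketch, assuming the CephFS is mounted at /mnt/pve/cephfs (path and test file name are placeholders, adjust to your mount point); the parameters are in the spirit of the benchmark paper, with a second run using --sync=1 to mimic vzdump's sync writes:

> fio --ioengine=libaio --filename=/mnt/pve/cephfs/fio.test --size=8G --direct=1 --rw=write --bs=4M --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=cephfs-write
> fio --ioengine=libaio --filename=/mnt/pve/cephfs/fio.test --size=8G --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=cephfs-sync-write

Remember to delete the test file afterwards, it counts against the pool.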

Further, compression (if not already set) could increase the backup speed.
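A hedged example, assuming a QEMU guest with VMID 100 and a backup storage named cephfs (both placeholders); lzo is usually fast enough not to become the bottleneck itself:

> vzdump 100 --storage cephfs --compress lzo

The same option can be set permanently for the backup job in the GUI or in /etc/vzdump.conf.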

Another test for comparison would be to back up the same VM onto a local storage and then copy the backup from local to the CephFS storage. This way, differences between the storages can be checked.
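For instance (VMID, storage name and dump path are placeholders, assuming the default local directory storage and mount point):

> vzdump 100 --storage local
> cp /var/lib/vz/dump/vzdump-qemu-100-* /mnt/pve/cephfs/

If the first step is fast and the copy is slow, the bottleneck is on the CephFS side rather than in the backup itself.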

[1] https://git.proxmox.com/?p=pve-qemu...d;hb=dcfd9c72bc5bb92f7715f7eb52e6610bc629a1c8
[2] https://packages.debian.org/en/stretch/fio
[3] https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/
 
You are right: CephFS is on the same cluster as RBD.

Could another approach be to create a dedicated CephFS pool by modifying the crush map, so that CephFS has its "own" OSDs?
 
Yes. This might speed up the backup but will probably affect the overall speed of your cluster.
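A minimal sketch of that idea using Luminous device classes instead of hand-editing the crush map (the OSD IDs, the class name and the pool name cephfs_data are placeholders):

> ceph osd crush rm-device-class osd.27 osd.28 osd.29
> ceph osd crush set-device-class cephfs osd.27 osd.28 osd.29
> ceph osd crush rule create-replicated cephfs-rule default host cephfs
> ceph osd pool set cephfs_data crush_rule cephfs-rule

Switching the pool to the new rule moves its data onto those OSDs, so expect a rebalance, and the class has to exist on enough hosts to satisfy the pool's replica count.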
 
