I have a new 3-node dedicated Ceph cluster on PVE 5.1. It serves a 4-node PVE 5.1 cluster running the VMs, connected over the Ceph public network on 10 GbE with 2 bonded interfaces and 9000 MTU enabled on both the Ceph public and private networks.
I have 18 identical 1 TB 10K-RPM spinners, which gives me 18 OSDs.
My 4-node PVE 5.1 cluster also uses bonded 10 Gbps interfaces with 9000 MTU.
I am not sure what performance I should expect from this cluster; if anybody could compare and let me know, I would appreciate it.
I ran rados bench on/from the Ceph cluster, which gives me roughly 650-700 MB/s maximum writes and 1300-1500 MB/s maximum reads, following this page:
http://tracker.ceph.com/projects/ceph/wiki/Benchmark_Ceph_Cluster_Performance
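For reference, what I ran was roughly along these lines (a minimal sketch based on that wiki page; the pool name, PG count and 60-second duration are just examples, not my exact values):

    # create a throwaway test pool (PG count is only an example for 18 OSDs)
    ceph osd pool create testbench 512 512
    # 60s write test, keeping the objects so the read tests have data to work on
    rados bench -p testbench 60 write --no-cleanup
    # sequential and random read tests against those objects
    rados bench -p testbench 60 seq
    rados bench -p testbench 60 rand
    # remove the benchmark objects afterwards
    rados -p testbench cleanup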
...but I am not sure whether that is a good result for the hardware in place.
Also, what is the best way to test from the 4-node Proxmox VM cluster over the network? I added a partition to a Linux VM that resides on the RBD storage and benchmarked it with a disk utility. Is there a better way?
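Would something like fio from inside the VM be more meaningful than the disk utility? A rough sketch of what I have in mind (the test file path, sizes and block sizes below are just placeholders):

    # 4k random write, direct I/O, against a file on the RBD-backed disk
    fio --name=randwrite --filename=/mnt/test/fio.dat --size=4G \
        --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
        --direct=1 --runtime=60 --time_based --group_reporting
    # large-block sequential read to check throughput over the 10 GbE link
    fio --name=seqread --filename=/mnt/test/fio.dat --size=4G \
        --rw=read --bs=4M --iodepth=16 --ioengine=libaio \
        --direct=1 --runtime=60 --time_based --group_reporting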
Thank you