Hi all
I have 3 nodes, each with 4x 256GB SSDs.
I have set them all up as a cluster and installed Ceph. As this is a test setup, everything shares the same 1Gb network connection.
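For reference, the relevant bit of /etc/pve/ceph.conf currently looks something like this (the subnet here is made up for the post, the point is that the public and cluster networks both sit on that one shared link):

[global]
    public_network = 192.168.1.0/24
    cluster_network = 192.168.1.0/24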
When the cluster was set up with hardware RAID 10 this worked very well, and migrating VMs between nodes was fast.
The next step was to test the Ceph setup, so RAID was disabled, Proxmox was installed on one of the drives, and the other 3 drives on each server were added as OSDs, giving 9 OSDs in the cluster.
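The OSDs were added with pveceph, roughly like this on each node (the device names are just how the drives happen to show up on my boxes):

pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
pveceph osd create /dev/sdd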
However, the benchmarks were terrible:
rados -p test bench 10 write --no-cleanup
Bandwidth (MB/sec): 32.9994
rados -p test bench 10 seq
Bandwidth (MB/sec): 83.1497
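For anyone wanting to reproduce, the whole sequence is roughly the following (the PG count of 128 is just what I'd pick for 9 OSDs, and the cleanup at the end removes the objects left behind by --no-cleanup):

ceph osd pool create test 128 128
rados -p test bench 10 write --no-cleanup
rados -p test bench 10 seq
rados -p test cleanup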
Installing a Windows VM took about 2 hours (it is still at "getting ready" as I type).
Just for laughs I created an identical VM on local storage; that one installed in around 5 minutes.
I/O delay hovers around 15%.
Is this about right for a 1Gb link, or have I done something stupid? Am I just missing the 10Gb link between the nodes, and will everything burst back to life once it's in?
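For what it's worth, my rough maths, assuming the pool is on the default 3x replication:

1 Gbit/s ≈ 125 MB/s raw
writes: each object goes to a primary OSD and is then replicated to 2 more over the same shared link, so roughly 125 / 3 ≈ 40 MB/s best case
reads: no replication traffic, so closer to the full ~125 MB/s

That would put the 33 MB/s write / 83 MB/s read numbers in the right ballpark for the link, but I'd appreciate a sanity check.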