Hello,
I've got quite a basic setup:
3 HPE DL380 servers, each with 5 NVMe disks and a 10 GbE card.
I'm building a cluster of these 3 servers with Ceph storage.
The 10 GbE connectivity has been confirmed OK with iperf3.
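For reference, the link test was roughly along these lines (the IP address is a placeholder for one of the nodes):

    # on one node, start the server
    iperf3 -s
    # from another node: 4 parallel streams for 30 seconds
    iperf3 -c 10.10.10.1 -P 4 -t 30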
Ceph is set up on the 10 GbE network, and rados bench results look OK. At first glance, I'd really say connectivity doesn't look like the problem.
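The rados bench run was something like this (pool name is just an example):

    # 60 s of 4 MB writes with 16 concurrent ops, keeping the objects for the read test
    rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
    # sequential reads of the objects written above
    rados bench -p testpool 60 seq -t 16
    # remove the benchmark objects afterwards
    rados -p testpool cleanup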
However, when I benchmark VMs on the Ceph storage, reads/writes don't go much above 220 MB/s, which should be way higher given the NVMe disks.
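To give an idea of the kind of in-VM test I mean, it was a plain sequential run, something like the following (file path, sizes, and exact flags are examples, not necessarily the precise invocation):

    # 60 s sequential write with direct I/O, 1 MB blocks
    fio --name=seqwrite --filename=/root/fio.test --rw=write --bs=1M --size=4G --direct=1 --ioengine=libaio --runtime=60 --time_based
    # same test for reads
    fio --name=seqread --filename=/root/fio.test --rw=read --bs=1M --size=4G --direct=1 --ioengine=libaio --runtime=60 --time_based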
Is there anything to be done to reach full disk performance?
Thanks a lot,
Fab