Hi,
I am testing PVE 6 performance with Ceph on 2 nodes (2/2 replica) and 2 SSD OSDs per node; the network is a single shared 10 Gbps link for all traffic on the PVEs (just for a small test).
HW: 2x HP DL380p G8; each node has 2x E5-2690 2.9 GHz, 96 GB RAM, a SmartArray P430, 1x Intel S4610 2TB, 1x Kingston DC500M, and 10 Gbps optical networking. PVE is installed on dedicated SSDs. The Ceph OSDs are single-disk RAID0 volumes on the P430 - 1x SSD per BlueStore OSD. So 4 OSDs total across the 2 nodes, 2/2 replica.
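The logical drives were created with ssacli, one single-disk RAID0 per SSD, roughly like this (controller slot and bay IDs are from my setup and will differ elsewhere):

  # one RAID0 logical drive per SSD, so each BlueStore OSD gets its own disk
  ssacli ctrl slot=0 create type=ld drives=1I:1:3 raid=0
  ssacli ctrl slot=0 create type=ld drives=1I:1:4 raid=0
  # verify
  ssacli ctrl slot=0 ld all show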
Now, some fio tests on Ceph from a VM (Debian 10 defaults), all VMs on the same host; the fio invocation is sketched after the results:
1x VM: read 28k, write 10k, readwrite 14k/5k IOPS <-- that's acceptable
1x VM (NFS server) + 1x VM (NFS client): read 11.1k, write 0.8k, readwrite 1.8k/0.6k IOPS (measured from the client) <-- this looks very bad
For comparison, DRBD (VMs as raw files):
1x VM: read 45k, write 21k, readwrite 28k/10k <-- that's semi-expected
1x VM (NFS server) + 1x VM (NFS client): read 31k, write 4.8k, readwrite 11.6k/3.8k (measured from the client) <-- that's acceptable
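All of the numbers above are from fio inside the VMs; from memory the invocation was along these lines (4k random I/O against a test file - the iodepth/numjobs/size values here are representative, not the exact ones I ran):

  # 4k random read; --rw=randwrite and --rw=randrw give the write and readwrite numbers
  fio --name=test --filename=/root/fio.bin --size=4G --direct=1 \
      --ioengine=libaio --bs=4k --iodepth=32 --numjobs=1 \
      --rw=randread --runtime=60 --time_based --group_reporting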
I need a shared FS (Samba at minimum, NFS would be the better option), but these tests show very bad write IOPS from the NFS client to the NFS server on Ceph... any idea how to make it better? I have a feeling that even Samba will be very slow on Ceph...
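One thing I still want to rule out: with a sync export, every COMMIT from the NFS client has to wait for a replicated write on Ceph, so the NFS round-trip stacks on top of the RBD write latency. A quick A/B test on the server VM (async is not safe for real data, testing only; the path and subnet are placeholders):

  # /etc/exports on the NFS server VM - baseline
  /srv/share 10.0.0.0/24(rw,sync,no_subtree_check)
  # same export with async (one at a time), just to measure the commit-latency share
  /srv/share 10.0.0.0/24(rw,async,no_subtree_check)

  exportfs -ra   # re-read /etc/exports without restarting nfs-kernel-server

If async recovers most of the gap, the problem is per-commit latency on Ceph rather than raw throughput.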
Thanks.