I have a 3-node cluster with a bunch of drives: 1TB cold spinning rust, 512GB warm SATA SSDs, and three 512GB non-PLP Gen3 NVMes.
(1 Samsung SN730 and 2 Inland TN320)
I know not to expect much - this is pre-prod - the plan is to get PLP drives next year.
The 10Gb Emulex CNA is working very well with FRR OSPF over IPv6, but traffic never gets over 3.6 Gbit/s with these NVMes.
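To separate raw network throughput from Ceph, a quick node-to-node check looks something like this (hostnames are placeholders):

iperf3 -s                      # on node2
iperf3 -c node2 -t 30 -P 4     # on node1, 4 parallel streams for 30 seconds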
Writes are trash - like 10-20 MB/s - on a Win10 VM with aio=threads, cache=none, discard=on, iothread=on. I tried a 2/1 Ceph pool and it wasn't much better.
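For reference, the disk line behind those options looks roughly like this (VMID, storage, and disk names are placeholders; iothread needs the VirtIO SCSI single controller):

qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 cephvm:vm-100-disk-0,cache=none,aio=threads,discard=on,iothread=1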
I wanted to prove the concept and learn more about Ceph so I can speak to my ability to manage it (i.e. justify the PLP costs).
I'd like to get more than 10 MB/s writes and am willing to sacrifice redundancy for the benefit of performance.
The plan would be to just back up weekly to local storage. Nothing is that critical - mostly just learning.
What performance tips are there for this purpose?
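Before touching any knobs I figure I should baseline Ceph itself outside the VM; a rough sketch of what I had in mind, assuming a throwaway pool called 'bench':

ceph osd pool create bench 32
rados bench -p bench 30 write -t 16 --no-cleanup
rados bench -p bench 30 seq -t 16
# cleanup (needs mon_allow_pool_delete=true)
ceph osd pool delete bench bench --yes-i-really-really-mean-it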
I know very little, but of these, where do you think the time is best spent? ...
NUMA node pinning, disabling CPU sleep states, tuning individual OSDs, BlueStore or PG tuning, RocksDB / WAL placement, or dropping 3/2 to 2/1? (Rough sketch of a couple of these below.)
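My understanding of the low-hanging fruit, as a sketch ('vmpool' is a placeholder pool name and the memory target is just an example value):

# drop replication from 3/2 to 2/1 on an existing pool
ceph osd pool set vmpool size 2
ceph osd pool set vmpool min_size 1

# give each OSD a larger cache target (default osd_memory_target is 4 GiB)
ceph config set osd osd_memory_target 6442450944

# lock the CPU governor to performance; deep C-states can also be capped via the
# intel_idle.max_cstate=1 kernel boot parameter
cpupower frequency-set -g performance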
For the sake of argument... is there an easy way to flip the CRUSH map to, say, stripe data over hosts instead of mirroring it?
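From what I've read, the closest thing is an erasure-coded pool with host as the failure domain rather than hand-editing the CRUSH map - something like this sketch (profile and pool names are placeholders; RBD on EC needs overwrites enabled plus a replicated pool for the image metadata):

ceph osd erasure-code-profile set ec21 k=2 m=1 crush-failure-domain=host
ceph osd pool create ecdata erasure ec21
ceph osd pool set ecdata allow_ec_overwrites true
# the RBD image lives in a replicated pool but stores its data in the EC pool
rbd create vmpool/testimg --size 10G --data-pool ecdata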
Since I'm backing up to local ZFS or LVM, I'm not concerned about the Ceph pool's persistence or integrity.
Would glusterfs be better for this hardware?