Hi,
We are running a 5-node Proxmox Ceph cluster. Three of the nodes have SSD drives, which together back a pool called ceph-ssd-pool1. The configuration is as follows:
Ceph Network: 10G
SSD drives: Kingston SEDC500M/1920G (marketed as datacenter-grade SSDs, rated at around 98K read and 70K write IOPS)
My rados bench results show write IOPS of around 3K, whereas read IOPS is around 20K.
During the benchmark, CPU usage on the servers does not rise significantly, and traffic on the 10G Ceph network stays below 4Gbit/s.
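For reference, the benchmarks were run with rados bench along these lines (the 4K block size, 16 threads, and 60-second duration shown here are illustrative, not necessarily the exact flags used):

rados bench -p ceph-ssd-pool1 60 write -b 4096 -t 16 --no-cleanup
rados bench -p ceph-ssd-pool1 60 rand -t 16
rados -p ceph-ssd-pool1 cleanup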
Why is there such a difference between write IOPS and read IOPS? I would be grateful for any suggestions on how to reach 10K write IOPS.
Thanks in advance.