Concern with Ceph IOPS despite having enterprise NVMe drives

300k IOPS at 4k is around 10 Gbit/s. I don't know whether the full-mesh network is able to balance traffic correctly across both NICs?

But anyway, 10 Gbit/s is pretty low; a single NVMe drive can reach 10 Gbit/s on its own, so you need at least ~50 Gbit/s to run the drives at full speed.
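The arithmetic behind those figures, as a quick sanity check (the IOPS number and 4k block size come from the thread; the drive count of five is just an illustrative assumption):

```python
# Back-of-the-envelope: how much network bandwidth a 4k IOPS figure consumes.
def iops_to_gbit(iops, block_bytes=4096):
    """Convert an IOPS rate at a fixed block size to Gbit/s."""
    return iops * block_bytes * 8 / 1e9

print(iops_to_gbit(300_000))      # ~9.83 Gbit/s -- saturates one 10G link
# One NVMe can push ~10 Gbit/s by itself, so a node with e.g. five
# drives (assumed count) needs ~50 Gbit/s of network for full speed:
print(iops_to_gbit(300_000) * 5)  # ~49.2 Gbit/s
```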

Also note that reads use less CPU than writes, so they should be faster too. (And for writes, you can enable the writeback cache in the VM; it helps a lot.)
I have 2 x 10 Gbps links configured with LACP. In the future it will be 2 x 25 Gbps. The NICs will be connected with AOC cables. Will that be sufficient?
 
@bsinha: it all depends on your application, your infrastructure, and other overheads. Most applications don't do 4 kB writes; the Windows SMB server, for example, does everything asynchronously. The question is what your application is "doing", what load you are expecting across your servers (you're probably not running 3 servers for 1 VM), and what that combined access pattern looks like.

You have theoretical maximums baked into your setup: a worst-case minimum of about 8k IOPS, and the potential to go a bit over 100k in the right circumstances, with multiple clients issuing deep queues.
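One way to see where bounds like these come from is the queue-depth/latency relationship. The ~125 µs per-write latency below is an assumed figure chosen to reproduce the 8k worst case quoted above, not a measurement from this cluster:

```python
# Little's law for storage: achievable IOPS ~= queue_depth / latency.
# ASSUMPTION: ~125 us per replicated 4k write, picked purely to
# illustrate how a QD=1 worst case of ~8k IOPS scales with deeper queues.
WRITE_LATENCY_S = 125e-6

def iops(queue_depth, latency_s=WRITE_LATENCY_S):
    return queue_depth / latency_s

print(iops(1))   # ~8,000 IOPS: one client with one outstanding write
print(iops(16))  # ~128,000 IOPS if 16 writes can be kept in flight
```

This is why a single synchronous writer sees the worst-case number while many clients with deep queues approach the cluster's ceiling: the latency per write barely changes, but the number of writes in flight does.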
 
Thanks for the response. We are actually going to run a bunch of database servers, both OLTP and OLAP, so I was a little concerned about the IOPS. However, I ran diskspd in two different Windows Server 2016 VMs at the same time. The command I used: diskspd -t8 -o64 -b4k -r4k -w100 -d120 -Sh -D -L -c5G ./IO.dat. I got 50k write IOPS from each server, so in total the Ceph cluster is giving me 100k IOPS. Can we go higher?
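It is worth cross-checking that measured result against the network budget discussed earlier. The snippet below uses the 100k IOPS figure from the benchmark; the 3x replication factor is an assumption (it is Ceph's common default, but is not stated in this thread):

```python
# Client-side traffic for the measured combined result:
# 100k write IOPS at 4k across the two VMs.
measured_gbit = 100_000 * 4096 * 8 / 1e9
print(f"{measured_gbit:.2f} Gbit/s client traffic")  # ~3.28 Gbit/s

# ASSUMPTION: 3x replication (Ceph's usual default). Each client
# write also travels to the replica OSDs, so cluster-side traffic
# is roughly 3x the client figure:
replicated_gbit = measured_gbit * 3
print(f"{replicated_gbit:.2f} Gbit/s replicated")    # ~9.83 Gbit/s
```

Under that assumption, 100k write IOPS already generates close to 10 Gbit/s of replication traffic, which is near a single 10G link; that would make the planned move to 2 x 25 Gbps the more likely lever for further gains than tuning the benchmark itself.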