Concern with Ceph IOPS despite having enterprise NVMe drives

300k IOPS at 4k is around 10 Gbit/s (300,000 × 4 KiB ≈ 1.2 GB/s). I don't know whether the full-mesh network is able to balance traffic correctly across both NICs.

But anyway, 10 Gbit/s is pretty low: a single NVMe drive can reach 10 Gbit/s on its own, so you need roughly 50 Gbit/s minimum for full speed.
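The arithmetic behind those figures can be sketched like this (a rough payload-only estimate; the 5-drive count is an assumption for illustration, and Ceph messenger/replication and TCP/IP overhead are ignored):

```python
def iops_to_gbit(iops, block_bytes=4096):
    """Convert an IOPS figure at a given block size to Gbit/s of payload
    (ignoring Ceph replication traffic and network protocol overhead)."""
    return iops * block_bytes * 8 / 1e9

# 300k IOPS at 4 KiB is already close to a full 10 Gbit/s link:
print(iops_to_gbit(300_000))        # ~9.8 Gbit/s

# Assumed example: 5 NVMe OSDs per node, each capable of ~10 Gbit/s,
# suggests the ~50 Gbit/s minimum mentioned above.
nvme_per_node = 5
print(nvme_per_node * 10)           # ~50 Gbit/s needed for full speed
```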

Also note that reads use less CPU than writes, so reads should be faster too. (And for writes, enabling the writeback cache on the VM disk helps a lot.)
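On Proxmox the cache mode is set per virtual disk; a minimal sketch, where the VM ID (100), storage name and volume name are placeholders for your own setup:

```shell
# Set the cache mode of an existing VM disk to writeback
# (VM ID, storage and volume name below are placeholders)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback
```

Note that `qm set` re-specifies the whole disk option string, so keep any other options your disk already has on that line.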
I have 2 x 10 Gbps links configured with LACP. In the future it will be 2 x 25 Gbps. The NICs will be connected with AOC cables. Will that be sufficient?
 
@bsinha: it all depends on your application, your infrastructure, and other overheads. Most applications don't do 4 kB writes; the Windows SMB server, for example, does everything asynchronously. The question is what your application is actually "doing", what load you are expecting across your servers (you're probably not running 3 servers for 1 VM), and what that combined access pattern looks like.

You have theoretical maximums baked into your setup: a worst-case minimum of 8k IOPS, and the potential to go a bit over 100k in the right circumstances, with multiple clients keeping deep queues.
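A rough model of why queue depth matters so much: each synchronous op pays one round trip of latency, so IOPS ≈ queue depth / per-op latency. The 125 µs figure below is an assumed per-op latency (network + Ceph + NVMe), chosen purely to illustrate how the worst-case and best-case numbers above can both fall out of the same setup:

```python
def iops_estimate(latency_s, queue_depth=1):
    """Latency-bound IOPS model: ops in flight divided by per-op latency.
    Ignores CPU and bandwidth limits, which cap the real ceiling."""
    return queue_depth / latency_s

assumed_latency = 125e-6  # 125 µs per op -- an assumption, not a measurement

print(iops_estimate(assumed_latency, queue_depth=1))   # ~8k IOPS, single client, QD1
print(iops_estimate(assumed_latency, queue_depth=16))  # ~128k IOPS with deep queues
```

In practice multiple clients with deep queues behave like a large combined queue depth, which is why the same cluster can deliver anywhere between those two figures.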