Hello all,
Can anybody help me understand the strange performance I'm seeing on my Proxmox Ceph cluster?
I'm running a JMeter IOPS benchmark against the cluster:
- 3-node cluster with Ceph (Proxmox 8.3)
- 4 networks; all NICs are 10G SFP, with 2 NICs bonded per network
- the mgmt, VM guest, Ceph cluster, and Ceph public networks each use their own bonded pair (active/backup) per host (a quick latency sanity check for these links is sketched right after this list)
- I created just 2 VMs:
- one is a Windows VM that sends traffic using JMeter
- the other is a Rocky Linux VM that receives the JMeter traffic from the Windows VM
- both VMs' disks are on Ceph storage
- as this is a test environment, there are no other VMs and no other network traffic
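Since every replicated Ceph write has to wait for acknowledgements from the other nodes over these links, the inter-node round-trip time adds directly to write latency. For reference, this is a minimal sketch of the kind of round-trip check I can run between two nodes over the Ceph network (the port is a placeholder, and it assumes Python 3 is available on the hosts):

```python
# Minimal TCP round-trip latency check between two Proxmox nodes.
# Run with no arguments on one node (server), and with the server's
# Ceph-network IP as the argument on another node (client).
# PORT is a placeholder for my setup.
import socket
import sys
import time

PORT = 5201      # placeholder port
ROUNDS = 1000    # number of 1-byte ping/pong round trips

def serve():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    while True:
        b = conn.recv(1)
        if not b:
            break
        conn.sendall(b)   # echo the byte straight back

def ping(host):
    cli = socket.create_connection((host, PORT))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    start = time.time()
    for _ in range(ROUNDS):
        cli.sendall(b"x")
        cli.recv(1)
    rtt_ms = (time.time() - start) / ROUNDS * 1000
    print(f"average TCP round trip: {rtt_ms:.3f} ms")

if __name__ == "__main__":
    ping(sys.argv[1]) if len(sys.argv) > 1 else serve()
```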
I have tried various combinations of parameters (Ceph, Proxmox, VM). Latency and response time are fine, but IOPS is the problem:
- with the VM disk on local storage, I get around 1700 IOPS
- with the VM disk moved to Ceph storage, I get around 750~800 IOPS
(the Ceph pool uses the default 3x replication)
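To take JMeter out of the picture, this is a minimal synchronous-write check I can run inside the Rocky Linux VM against the disk under test (the target path is a placeholder; it issues one 4 KiB write at a time and fsyncs after each one):

```python
# Minimal single-threaded write-IOPS check, independent of JMeter.
# TARGET is a placeholder path on the Ceph-backed (or local) VM disk.
import os
import time

TARGET = "/var/tmp/iops_test.bin"   # placeholder path
BLOCK = b"\0" * 4096                # 4 KiB per write
SECONDS = 10                        # test duration

fd = os.open(TARGET, os.O_CREAT | os.O_WRONLY, 0o600)
count = 0
end = time.time() + SECONDS
while time.time() < end:
    os.pwrite(fd, BLOCK, 0)   # write one 4 KiB block
    os.fsync(fd)              # wait until the storage acknowledges it
    count += 1
os.close(fd)
os.unlink(TARGET)

print(f"{count / SECONDS:.0f} synchronous write IOPS "
      f"(~{SECONDS / count * 1000:.2f} ms per write)")
```

With only one write in flight, the result is essentially 1/latency, so it should make the local-vs-Ceph gap easy to compare.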
My question is: is this simply a limitation of the Ceph architecture, since it replicates every I/O 3 times?
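As a back-of-the-envelope check (assuming the benchmark effectively has one write in flight at a time), my numbers look like roughly 0.7 ms of extra latency per write on the Ceph path rather than a bandwidth limit:

```python
# Back-of-the-envelope: with one outstanding write, IOPS is just 1 / latency.
local_iops = 1700
ceph_iops = 775                        # midpoint of 750~800

local_latency_ms = 1000 / local_iops   # ~0.59 ms per write on local storage
ceph_latency_ms = 1000 / ceph_iops     # ~1.29 ms per write on Ceph
extra_ms = ceph_latency_ms - local_latency_ms

print(f"local: {local_latency_ms:.2f} ms/write")
print(f"ceph:  {ceph_latency_ms:.2f} ms/write")
print(f"extra: {extra_ms:.2f} ms/write on the Ceph path "
      "(network hops and replica acks before the write completes)")
```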
Thanks