I need some input on tuning performance on a new cluster I have set up.
The new cluster has two pools (one for HDDs and one for SSDs). For now it's only three nodes.
I have separate networks: 1 x 1 Gb/s NIC for corosync, 2 x bonded 1 Gb/s NICs for Ceph, and 1 x 1 Gb/s NIC for the Proxmox bridged VMs and LXCs.
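For reference, the Ceph bond is set up along these lines in /etc/network/interfaces (a minimal sketch; the interface names, address, and bond mode below are placeholders, not necessarily my exact config). The bond mode matters here: with 802.3ad (LACP), a single TCP stream only ever travels over one 1 Gb/s link.
Code:
auto bond0
iface bond0 inet static
        address 10.10.10.1/24
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        # 802.3ad (LACP): one TCP stream uses at most one 1 Gb/s link;
        # balance-rr can spread a single stream across both links.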
The SSD pool has the following performance:
Code:
# rados bench -p fast 60 rand -t 1
Total time run: 60.067481
Total reads made: 2506
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 166.879
Average IOPS: 41
Stddev IOPS: 4
Max IOPS: 53
Min IOPS: 33
Average Latency(s): 0.0235738
Max latency(s): 0.0469368
Min latency(s): 0.0011793
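Side note: with -t 1, rados bench issues a single concurrent read, so this result is latency-bound rather than bandwidth-bound (41 IOPS x 4 MB objects ≈ 166 MB/s, which is exactly what the 0.024 s average latency predicts). To see whether the pool can push past one link's worth of bandwidth, I could rerun with more concurrency, e.g.:
Code:
# same pool, 16 concurrent reads (rados bench's default queue depth)
rados bench -p fast 60 rand -t 16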
Now, I also have an old Proxmox 4, Jewel-based Ceph cluster with old SAS HDDs that gives me this performance:
Code:
# rados bench -p IMB-test 60 rand -t 1
Total time run: 60.179736
Total reads made: 788
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 52.3764
Average IOPS: 13
Stddev IOPS: 2
Max IOPS: 19
Min IOPS: 7
Average Latency(s): 0.0756719
Max latency(s): 0.341299
Min latency(s): 0.00517604
What is happening, or am I hitting the 1 Gb/s network bottleneck here?
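For what it's worth, a single 1 Gb/s link tops out around 117 MB/s of TCP payload and the 2 x 1 Gb/s bond around 235 MB/s, so 166.9 MB/s is more than one link can carry but still short of the bond's ceiling. To rule the network in or out, I could measure raw throughput between two nodes with iperf3 (the address is a placeholder for the other node's Ceph-network IP):
Code:
# on node A (server side)
iperf3 -s
# on node B, 4 parallel streams across the Ceph bond
iperf3 -c 10.10.10.1 -P 4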