Hi, we have three servers, running Proxmox 6.
The three nodes are identical.
- 2 x Intel(R) Xeon(R) CPU E5645 (24 threads per node)
- More than 96 GB of RAM per node
- 1 x Samsung SSD 860 EVO 250GB (Proxmox installation)
- 1 x NVMe Samsung SSD 970 EVO 250GB (4 x 48 GB partitions for DB/WAL)
- 4 x 2 TB 7200 rpm Western Digital drives (OSDs)
We want to upgrade the Ceph network to 10G with the following hardware:
1 x Cisco Nexus N3K-C3064PQ-10GE - https://www.cisco.com/c/en/us/produ...00-series-switches/data_sheet_c78-651097.html
3 x Intel X520-DA2 10G Dual Port SFP+ PCI-E 2.0 x8 Server Adapter
3 x Passive Direct Attach Copper Twinax Cable - https://www.fs.com/products/40109.html
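A rough back-of-envelope check (assuming ~150 MB/s of sequential throughput per 7200 rpm HDD, a number not taken from the post) suggests the four OSDs per node can already outrun a 1 GbE link, which is what makes 10G worthwhile here:

```python
# Back-of-envelope: aggregate OSD bandwidth per node vs. network line rate.
# ~150 MB/s per 7200 rpm HDD is an assumption, not a measured figure.
HDD_MB_S = 150
OSDS_PER_NODE = 4

aggregate = HDD_MB_S * OSDS_PER_NODE  # raw disk bandwidth per node, MB/s
one_gbe = 1_000 / 8                   # ~125 MB/s usable on 1 GbE
ten_gbe = 10_000 / 8                  # ~1250 MB/s usable on 10 GbE

print(f"per-node OSD bandwidth: {aggregate} MB/s")   # 600 MB/s
print(f"1 GbE is the bottleneck: {aggregate > one_gbe}")   # True
print(f"10 GbE leaves headroom:  {aggregate < ten_gbe}")   # True
```

Keep in mind that with 3x replication every client write is amplified on the cluster network, so the gap between disk bandwidth and a 1 GbE link is even wider in practice.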
Will this work? Any suggested changes?
Current benchmarks:
rados bench -p ceph-fs 120 write --no-cleanup
Code:
Total time run: 120.518
Total writes made: 2433
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 80.7514
Stddev Bandwidth: 10.4654
Max bandwidth (MB/sec): 104
Min bandwidth (MB/sec): 44
Average IOPS: 20
Stddev IOPS: 2.61635
Max IOPS: 26
Min IOPS: 11
Average Latency(s): 0.792256
Stddev Latency(s): 0.485577
Max latency(s): 3.989
Min latency(s): 0.180162
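On the configuration side, a minimal sketch of what the node-side setup might look like once the X520 cards are in (the interface name ens1f0 and the 10.10.10.0/24 subnet are assumptions; adjust to your environment):

```
# /etc/network/interfaces (one node) -- hypothetical 10G Ceph interface
auto ens1f0
iface ens1f0 inet static
    address 10.10.10.1/24
    mtu 9000

# /etc/pve/ceph.conf -- move Ceph replication traffic onto the new subnet
[global]
    cluster_network = 10.10.10.0/24
```

With the link up, running iperf3 between two nodes (`iperf3 -s` on one, `iperf3 -c 10.10.10.2` on the other) should confirm roughly line-rate throughput before you move Ceph traffic over; note that OSDs need a restart to pick up a changed cluster_network.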