Hello,
I just built a Ceph cluster with 3 nodes and 3 x 5 TB (256 MB cache) hard drives, plus an SSD as journal.
Dual-port 10 Gb NIC.
Juniper switch with 10 Gb ports.
bond0 is 2 x 10 Gb Intel Base-T copper cards in balance-tlb mode.
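For reference, the bond is configured through /etc/network/interfaces; below is roughly what the bond0 stanza looks like (interface names and the address are placeholders, not copied from the real config):

auto bond0
iface bond0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode balance-tlb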
I tested Ceph and it is very slow; I'm not sure why.
root@ceph2:~# rados -p test bench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_ceph2_27590
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 0 0 0 0 0 - 0
1 16 42 26 103.993 104 0.15684 0.337711
2 16 52 36 71.9902 40 0.0943279 0.274127
3 16 52 36 47.9933 0 - 0.274127
4 16 52 36 35.9953 0 - 0.274127
5 16 52 36 28.7962 0 - 0.274127
93 16 52 36 1.5482 0 - 0.274127
94 16 52 36 1.53173 0 - 0.274127
95 16 52 36 1.5156 0 - 0.274127
96 16 52 36 1.49982 0 - 0.274127
97 16 52 36 1.48435 0 - 0.274127
98 16 52 36 1.46921 0 - 0.274127
99 10 53 43 1.73716 0.28866 98.0174 16.1888
2018-05-24 10:58:44.860008 min lat: 0.0784453 max lat: 98.5494 avg lat: 16.1888
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
100 10 53 43 1.71979 0 - 16.1888
Total time run: 100.116822
Total writes made: 53
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 2.11753
Stddev Bandwidth: 11.1046
Max bandwidth (MB/sec): 104
Min bandwidth (MB/sec): 0
Average IOPS: 0
Stddev IOPS: 2
Max IOPS: 26
Min IOPS: 0
Average Latency(s): 30.0458
Stddev Latency(s): 45.6515
Max latency(s): 100.116
Min latency(s): 0.0784453
root@ceph2:~#
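For reference, the bench above was run with --no-cleanup, so the objects are left in the pool and a sequential read pass could be run afterwards for comparison (not run here, just the command):

rados -p test bench 10 seq

Package versions (pveversion -v):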
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph: 12.2.5-pve1
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: not correctly installed
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-9
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3