Ceph very slow performance

mada

Hello,
I just built Ceph with 3 nodes, each with 3 x 5TB (256MB cache) hard drives plus an SSD as journal.

Dual-port 10Gb NIC
Juniper switch with 10Gb ports

bond0 is 2 x 10Gb Intel Base-T copper ports in balance-tlb mode.

I tested Ceph and it is very slow, not sure why.

root@ceph2:~# rados -p test bench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_ceph2_27590
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 0 0 0 0 0 - 0
1 16 42 26 103.993 104 0.15684 0.337711
2 16 52 36 71.9902 40 0.0943279 0.274127
3 16 52 36 47.9933 0 - 0.274127
4 16 52 36 35.9953 0 - 0.274127
5 16 52 36 28.7962 0 - 0.274127

93 16 52 36 1.5482 0 - 0.274127
94 16 52 36 1.53173 0 - 0.274127
95 16 52 36 1.5156 0 - 0.274127
96 16 52 36 1.49982 0 - 0.274127
97 16 52 36 1.48435 0 - 0.274127
98 16 52 36 1.46921 0 - 0.274127
99 10 53 43 1.73716 0.28866 98.0174 16.1888
2018-05-24 10:58:44.860008 min lat: 0.0784453 max lat: 98.5494 avg lat: 16.1888
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
100 10 53 43 1.71979 0 - 16.1888
Total time run: 100.116822
Total writes made: 53
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 2.11753
Stddev Bandwidth: 11.1046
Max bandwidth (MB/sec): 104
Min bandwidth (MB/sec): 0
Average IOPS: 0
Stddev IOPS: 2
Max IOPS: 26
Min IOPS: 0
Average Latency(s): 30.0458
Stddev Latency(s): 45.6515
Max latency(s): 100.116
Min latency(s): 0.0784453
root@ceph2:~#



proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph: 12.2.5-pve1
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: not correctly installed
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-9
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
 
>>Samsung PM853T
The PM853T is the OEM version of the 845DC EVO. They are terrible at sync writes (2 Mbit/s at 4k block size).

I don't know about the BX100, but I think it's like the MX series; they are also terrible at sync writes.


>>My concern is that my 5TB gives 180+MB/s without Ceph!

How do you test that?

If you can, test your different disks with a sync write bench:

fio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

or

dd if=randfile of=/dev/sda bs=4k count=100000 oflag=direct,dsync





Have you tried without the journal cache on the SSD (and using Bluestore for the OSDs)? As I said, some SSDs are really bad at sync writes, sometimes slower than an HDD.

For the journal you can buy a small Intel DC drive (S35xx, S36xx, S37xx), for example.
 
>>For the journal you can buy a small Intel DC drive (S35xx, S36xx, S37xx), for example.

How good can that be?

Check the attachment. Its SSD is a PM953; can it do any better?
 

Attachments

  • 6D2BD469-AE08-4124-9E47-77D0B9ECC7CA.jpeg (177.1 KB)
Now I'm using 1 x Intel P3700 800GB per node for the DB/journal, and performance is still really bad.

root@ceph2:~# rados -p test bench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_ceph2_6687
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 0 0 0 0 0 - 0
48 8 127 119 9.91539 0 - 1.35358
49 8 127 119 9.71303 0 - 1.35358
50 8 127 119 9.51878 0 - 1.35358
Total time run: 50.773652
Total writes made: 127
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 10.0052
Stddev Bandwidth: 24.6703
Max bandwidth (MB/sec): 104
Min bandwidth (MB/sec): 0
Average IOPS: 2
Stddev IOPS: 6
Max IOPS: 26
Min IOPS: 0
Average Latency(s): 3.85386
Stddev Latency(s): 9.88605
Max latency(s): 41.3127
Min latency(s): 0.0662239

It is backed by 2 x dual-port 10Gbps Intel RJ45 NICs bonded in broadcast mode; I also tested balance-tlb, but got the same result. Any idea?

I tested the P3700 at 4k and it does around 250MB/s.

So any idea what the issue is?
 
Now I'm using 1 x Intel P3700 800GB per node for the DB/journal, and performance is still really bad.

root@ceph2:~# rados -p test bench 10 write --no-cleanup
Hi,
Use a longer test time and take a look at the OSD nodes to see where the bottleneck is (watch with atop - it must be installed first).
Code:
rados bench -p test 60 write --no-cleanup
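To install and watch atop on an OSD node during the bench, something along these lines (a minimal sketch; the 2 is just the refresh interval in seconds):
Code:
apt install atop
atop 2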
How does "ceph osd perf" look during the write bench?

How fast is the network connection between the nodes?
Test with iperf in both directions!

Have you enabled jumbo frames?
If yes, do jumbo frames work?
Code:
ping -M do -s 8700 ip.ceph.osd.node

If you read the data back again, it only tests the network read speed, because the data is cached in RAM (unless you have far too little RAM).
Code:
rados bench -p test 60 seq
This test shows no HDD speed, only network!
If you want real data, you must flush the cache on all nodes!
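For example, to drop the Linux page cache on each node before a read bench (a standard kernel interface, nothing Ceph-specific):
Code:
sync
echo 3 > /proc/sys/vm/drop_caches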

Do you have a separate network for the Ceph cluster (OSD sync traffic)?

Udo
 
Hi
Use a longer test time and take a look at the OSD nodes to see where the bottleneck is (watch with atop - it must be installed first).
Code:
rados bench -p test 60 write --no-cleanup
How does "ceph osd perf" look during the write bench?


Here it is:

root@ceph3:~# ceph osd perf
osd commit_latency(ms) apply_latency(ms)
8 0 0
6 0 0
7 0 0
5 0 0
4 0 0
0 0 0
1 0 0
2 0 0
3 0 0


How fast is the network connection between the nodes?
Test with iperf in both directions!


Not sure why I got this:

root@ceph3:~# iperf -c 10.10.1.3
connect failed: Connection refused
root@ceph3:~#

Have you enabled jumbo frames?
If yes, do jumbo frames work?
Code:
ping -M do -s 8700 ip.ceph.osd.node

I don't know what that is, so I can't say yes. Where can I enable it? I don't see that in the Ceph guide.

root@ceph3:~# ping -M do -s 8700 10.10.1.2
PING 10.10.1.2 (10.10.1.2) 8700(8728) bytes of data.
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500
^C
--- 10.10.1.2 ping statistics ---
5 packets transmitted, 0 received, +5 errors, 100% packet loss, time 4102ms

root@ceph3:~#

If you read the data back again, it only tests the network read speed, because the data is cached in RAM (unless you have far too little RAM).
Code:
rados bench -p test 60 seq
This test shows no HDD speed, only network!
If you want real data, you must flush the cache on all nodes!

So I tested the wrong way?

Look at this:

root@ceph2:~# rados bench -p test 60 seq
hints = 1
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 0 0 0 0 0 - 0
Total time run: 0.964826
Total reads made: 127
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 526.519
Average IOPS: 131
Stddev IOPS: 0
Max IOPS: 0
Min IOPS: 2147483647
Average Latency(s): 0.119422
Max latency(s): 0.580055
Min latency(s): 0.0233491
root@ceph2:~#

Do you have a separate network for the Ceph cluster (OSD sync traffic)?


I have dual-port Mellanox ConnectX-3 56Gb/s cards, but I am unable to set up the private network; there is no ping between the servers on either ib0 or ib1.
 
Code:
rados bench -p test 60 write --no-cleanup
How does "ceph osd perf" look during the write bench?


Here it is:

root@ceph3:~# ceph osd perf
osd commit_latency(ms) apply_latency(ms)
8 0 0
6 0 0
7 0 0
5 0 0
4 0 0
0 0 0
1 0 0
2 0 0
3 0 0
Hi,
but this wasn't taken during the 60-second write test, was it?
Not sure why I got this:

root@ceph3:~# iperf -c 10.10.1.3
connect failed: Connection refused
You must first start "iperf -s" on 10.10.1.3; after that you can test from the other node with "iperf -c 10.10.1.3".
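For example (run the server on one node and the client on the other, then swap the roles to test the opposite direction):
Code:
# on ceph3 (10.10.1.3)
iperf -s

# on ceph2
iperf -c 10.10.1.3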
Have you enabled jumbo frames?
...
I don't know what that is, so I can't say yes. Where can I enable it? I don't see that in the Ceph guide.
Jumbo frames means you have an MTU value higher than 1500 defined on the network which is used for Ceph; jumbo frames are normally 9000.
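If you want to try it, a minimal sketch for /etc/network/interfaces (assumption: the Ceph network runs over an interface or bond called bond0 with address 10.10.1.2; the physical NICs and the switch ports must also allow jumbo frames, otherwise large packets get dropped):
Code:
auto bond0
iface bond0 inet static
    address  10.10.1.2
    netmask  255.255.255.0
    mtu 9000

# afterwards verify with:
# ping -M do -s 8700 10.10.1.3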
If you read the data back again, it only tests the network read speed, because the data is cached in RAM (unless you have far too little RAM).
Code:
rados bench -p test 60 seq
This test shows no HDD speed, only network!
If you want real data, you must flush the cache on all nodes!

So I tested the wrong way?
No, the write test is real writing, but during writes the content is buffered by Linux. If you then read the data back, Linux uses the buffer and doesn't read the data from the OSDs again.
Bandwidth (MB/sec): 526.519
526MB/s (if buffered) on a 10Gb network is not really fast... it's OK, but it could be better.
Do you have a separate network for the Ceph cluster (OSD sync traffic)?
I have dual-port Mellanox ConnectX-3 56Gb/s cards, but I am unable to set up the private network; there is no ping between the servers on either ib0 or ib1.
How do your ceph.conf and /etc/network/interfaces look?

Udo
 
Hi,
but this wasn't taken during the 60-second write test, was it?

You must first start "iperf -s" on 10.10.1.3; after that you can test from the other node with "iperf -c 10.10.1.3".

Here is the correct test:

root@ceph4:~# iperf -c 10.10.1.2
------------------------------------------------------------
Client connecting to 10.10.1.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.1.4 port 45112 connected with 10.10.1.2 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 10.9 GBytes 9.40 Gbits/sec
root@ceph4:~#

Jumbo frames means you have an MTU value higher than 1500 defined on the network which is used for Ceph; jumbo frames are normally 9000.

No, the write test is real writing, but during writes the content is buffered by Linux. If you then read the data back, Linux uses the buffer and doesn't read the data from the OSDs again.

No, I don't; just one 10Gbps Juniper switch with a normal setup and bonding in broadcast mode.

526MB/s (if buffered) on a 10Gb network is not really fast... it's OK, but it could be better.
How do your ceph.conf and /etc/network/interfaces look?


Here it is:

Code:
[global]
     auth client required = cephx
     auth cluster required = cephx
     auth service required = cephx
     cluster network = 10.10.1.0/24
     fsid = 169d9a8e-1084-4f3e-8e97-5f230a208ef4
     keyring = /etc/pve/priv/$cluster.$name.keyring
     mon allow pool delete = true
     osd journal size = 5120
     osd pool default min size = 2
     osd pool default size = 3
     public network = 10.10.1.0/24

[osd]
     keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.ceph2]
     host = ceph2
     mon addr = 10.10.1.2:6789

[mon.ceph4]
     host = ceph4
     mon addr = 10.10.1.4:6789

[mon.ceph3]
     host = ceph3
     mon addr = 10.10.1.3:6789


Here is the interfaces file:
Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    dns-nameservers 8.8.8.8
# dns-* options are implemented by the resolvconf package, if installed

auto eth1
iface eth1 inet static
    address  10.10.10.16
    netmask  255.255.255.0

iface eth2 inet manual

iface eth3 inet manual

auto eth4
iface eth4 inet static
    address  10.10.2.2
    netmask  255.255.255.0

auto eth5
iface eth5 inet manual

auto ib0
iface ib0 inet static
    address  10.10.3.2
    netmask  255.255.255.0
    pre-up modprobe ib_ipoib
    pre-up echo connected > /sys/class/net/ib0/mode
    mtu 65520

auto ib1
iface ib1 inet manual

iface eth6 inet manual

iface eth7 inet manual

auto bond0
iface bond0 inet static
    address  10.10.1.2
    netmask  255.255.255.0
    slaves eth4 eth5
    bond_miimon 100
    bond_mode broadcast

auto vmbr0
iface vmbr0 inet static
    address  xxxx
    netmask  255.255.255.0
    gateway  xxxxx
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

iface vmbr0 inet6 static
    address  xxxxx
    netmask  30
    gateway  xxxx

As I mentioned before, I can't get the Mellanox IB link up between the nodes.

Hi again,
btw. how fast are your nodes? What cpu/ram config do you use?

Udo

Dual E5-2660 and 2650 with 75GB RAM
 
Here is the correct test:

root@ceph4:~# iperf -c 10.10.1.2
------------------------------------------------------------
Client connecting to 10.10.1.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.1.4 port 45112 connected with 10.10.1.2 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 10.9 GBytes 9.40 Gbits/sec
ok, in both directions?
Here is the interfaces file:
Code:
auto lo
...
auto eth4
iface eth4 inet static
    address  10.10.2.2
    netmask  255.255.255.0

auto eth5
iface eth5 inet manual

auto bond0
iface bond0 inet static
    address  10.10.1.2
    netmask  255.255.255.0
    slaves eth4 eth5
    bond_miimon 100
    bond_mode broadcast
...
Strange, you use eth4 with a different subnet and then also as a bond port for the Ceph network??
In which VLAN are eth4 + eth5 untagged on the switch?
That doesn't sound right to me.

I would first use one NIC without bonding. And if everything works, then enable bonding.
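A minimal sketch of what that could look like in /etc/network/interfaces (assumption: eth4 carries the Ceph network 10.10.1.x directly and eth5 stays unconfigured until bonding is reintroduced):
Code:
auto eth4
iface eth4 inet static
    address  10.10.1.2
    netmask  255.255.255.0

iface eth5 inet manual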
Dual E5-2660 and 2650 with 75GB RAM
This should not be part of the problem.

Udo
 
ok, in both directions?

Yes in both

Strange, you use eth4 with a different subnet and then also as a bond port for the Ceph network??
In which VLAN are eth4 + eth5 untagged on the switch?
That doesn't sound right to me.
I would first use one NIC without bonding. And if everything works, then enable bonding.

This should not be part of the problem.

Udo

I was testing something; if it works, I will remove the subnet from eth4 and eth5 and leave them up without addresses.

I will test with one port and see how it goes.
 
ok, in both directions?

Strange, you use eth4 with a different subnet and then also as a bond port for the Ceph network??
In which VLAN are eth4 + eth5 untagged on the switch?
That doesn't sound right to me.

I would first use one NIC without bonding. And if everything works, then enable bonding.

This should not be part of the problem.

Udo

I had to use a different subnet on eth4 or eth5 (or even both) when using bonding, otherwise the network would not come fully up and the bond IPs would not ping between the nodes. Now I am using one NIC port without bonding, and here is the test:

root@ceph4:~# rados bench -p test 60 seq
hints = 1
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 0 0 0 0 0 - 0
Total time run: 0.268494
Total reads made: 53
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 789.589
Average IOPS: 197
Stddev IOPS: 0
Max IOPS: 0
Min IOPS: 2147483647
Average Latency(s): 0.0752336
Max latency(s): 0.266691
Min latency(s): 0.0114345
root@ceph4:~#
 
I had to use a different subnet on eth4 or eth5 (or even both) when using bonding, otherwise the network would not come fully up and the bond IPs would not ping between the nodes. Now I am using one NIC port without bonding, and here is the test:

root@ceph4:~# rados bench -p test 60 seq
hints = 1
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 0 0 0 0 0 - 0
Total time run: 0.268494
Total reads made: 53
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 789.589
Average IOPS: 197
Stddev IOPS: 0
Max IOPS: 0
Min IOPS: 2147483647
Average Latency(s): 0.0752336
Max latency(s): 0.266691
Min latency(s): 0.0114345
root@ceph4:~#
Ok,
and how does your write test look now?

Udo
 
Ok,
and how does your write test look now?

Udo

I was able to get the Mellanox dual-port 54Gb/s FDR card up, but only without bonding.

Code:
root@c18:~# rados -p test bench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_c18_6964
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        58        42   168.005       168    0.135038    0.303561
    2      16        82        66   131.989        96    0.103896    0.233506
    3      16        82        66   87.9914         0           -    0.233506
    4      16        82        66   65.9932         0           -    0.233506
    5      16        82        66   52.7943         0           -    0.233506
    6      16        82        66    43.995         0           -    0.233506
    7      16       119       103   58.8503      29.6    0.095329    0.977303
    8      16       134       118   58.9929        60     0.06799    0.943244
    9      16       145       129   57.3264        44   0.0470129    0.866839
   10      16       145       129   51.5936         0           -    0.866839
   11      16       145       129   46.9033         0           -    0.866839
   12      13       146       133   44.3278   5.33333     5.19894    0.996659
Total time run:         12.618078
Total writes made:      146
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     46.2828
Stddev Bandwidth:       52.5279
Max bandwidth (MB/sec): 168
Min bandwidth (MB/sec): 0
Average IOPS:           11
Stddev IOPS:            13
Max IOPS:               42
Min IOPS:               0
Average Latency(s):     1.36968
Stddev Latency(s):      2.2308
Max latency(s):         6.42949
Min latency(s):         0.0400822
root@c18:~# rados bench -p test 60 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
Total time run:       0.594207
Total reads made:     146
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   982.822
Average IOPS:         245
Stddev IOPS:          0
Max IOPS:             0
Min IOPS:             2147483647
Average Latency(s):   0.06347
Max latency(s):       0.247034
Min latency(s):       0.012478
root@c18:~#


I'm not quite sure why I only got this speed; the NIC is supposed to be 54Gb/s and it is plugged into an FDR switch.

The network setup:
Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
    dns-nameservers 8.8.8.8
# dns-* options are implemented by the resolvconf package, if installed

auto eth1
iface eth1 inet manual

iface eth2 inet manual

auto eth3
iface eth3 inet static
    address  10.10.10.17
    netmask  255.255.255.0

auto eth4
iface eth4 inet manual

iface eth5 inet manual

auto ib0
iface ib0 inet static
    address  10.1.1.17
    netmask  255.255.255.0
        pre-up echo connected > /sys/class/net/ib0/mode
        mtu 65520

auto ib1
iface ib1 inet static
    address  10.1.2.17
    netmask  255.255.255.0
        pre-up echo connected > /sys/class/net/ib1/mode
        mtu 65520

auto vmbr0
iface vmbr0 inet static



Speed test with iperf:
Code:
root@c18:~# iperf -c 10.1.1.17
------------------------------------------------------------
Client connecting to 10.1.1.17, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 10.1.1.18 port 52312 connected with 10.1.1.17 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  24.5 GBytes  21.0 Gbits/sec

Shouldn't it be at least 40Gb/s?

And here is the ceph.conf:

Code:
[global]
     auth client required = cephx
     auth cluster required = cephx
     auth service required = cephx
     cluster network = 10.1.1.0/24
     fsid = 4cb23fa8-fab0-41d9-b334-02fe8dede3a8
     keyring = /etc/pve/priv/$cluster.$name.keyring
     mon allow pool delete = true
     osd journal size = 5120
     osd pool default min size = 2
     osd pool default size = 3
     public network = 10.1.2.0/24

[osd]
     keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.c16]
     host = c16
     mon addr = 10.1.1.16:6789

[mon.c18]
     host = c18
     mon addr = 10.1.1.18:6789

[mon.c17]
     host = c17
     mon addr = 10.1.1.17:6789
 
Can anyone give me advice on this, and what is the best configuration I can go with?

Thanks
 
Hi,
Use a longer test time and take a look at the OSD nodes to see where the bottleneck is (watch with atop - it must be installed first).
Code:
rados bench -p test 60 write --no-cleanup
How does "ceph osd perf" look during the write bench?
Udo

Code:
osd commit_latency(ms) apply_latency(ms)
  8                 65                65
  7                 74                74
  6                 52                52
  3                  0                 0
  5                214               214
  0                 70                70
  1                 85                85
  2                 76                76
  4                196               196


The test result with rados bench -p test 60 write --no-cleanup:

Code:
Total time run:         60.319902
Total writes made:      3802
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     252.122
Stddev Bandwidth:       19.7516
Max bandwidth (MB/sec): 284
Min bandwidth (MB/sec): 212
Average IOPS:           63
Stddev IOPS:            4
Max IOPS:               71
Min IOPS:               53
Average Latency(s):     0.253834
Stddev Latency(s):      0.131711
Max latency(s):         1.10938
Min latency(s):         0.0352605

rados bench -p rbd -t 16 60 seq

Code:
Total time run:       14.515122
Total reads made:     3802
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   1047.73
Average IOPS:         261
Stddev IOPS:          10
Max IOPS:             277
Min IOPS:             239
Average Latency(s):   0.0603855
Max latency(s):       0.374936
Min latency(s):       0.0161585

rados bench -p rbd -t 16 60 rand

Code:
Total time run:       60.076015
Total reads made:     19447
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   1294.83
Average IOPS:         323
Stddev IOPS:          20
Max IOPS:             364
Min IOPS:             259
Average Latency(s):   0.0488371
Max latency(s):       0.468844
Min latency(s):       0.00179505


iperf -c

Code:
 iperf -c 10.1.1.17
------------------------------------------------------------
Client connecting to 10.1.1.17, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 10.1.1.16 port 54442 connected with 10.1.1.17 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  24.5 GBytes  21.1 Gbits/sec
 
