Hello,
I'm testing the performance between two nodes connected by two Mellanox cards:
MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)
On the latest Proxmox 6 I have installed all the packages:
apt-get install rdma-core libibverbs1 librdmacm1 libibmad5 libibumad3 librdmacm1 ibverbs-providers rdmacm-utils infiniband-diags libfabr
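As a quick sanity check after the install I verify that the card and port are visible to the tools (ibstat comes with infiniband-diags from the list above; ibv_devinfo would need ibverbs-utils, which I did not install):
ibstat
ibv_devinfo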
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Modules are loaded:
lsmod | grep '\(^ib\|^rdma\)'
ib_umad 28672 8
rdma_ucm 28672 0
ib_ipoib 110592 0
ib_iser 53248 0
rdma_cm 61440 3 rpcrdma,ib_iser,rdma_ucm
ib_cm 57344 2 rdma_cm,ib_ipoib
ib_uverbs 126976 2 mlx4_ib,rdma_ucm
ib_core 299008 10 rdma_cm,ib_ipoib,rpcrdma,mlx4_ib,iw_cm,ib_iser,ib_umad,rdma_ucm,ib_uverbs,ib_cm
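To be sure these come back after a reboot I would also list them in /etc/modules (a sketch; the exact set is my assumption based on the lsmod output above):
# /etc/modules
mlx4_ib
ib_ipoib
ib_umad
rdma_ucm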
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Then I configured the interfaces on both nodes:
auto ibp5s0
iface ibp5s0 inet static
address 10.0.0.2
netmask 255.255.255.0
ibp5s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 2044
inet 10.0.0.2 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::e61d:2d03:29:871 prefixlen 64 scopeid 0x20<link>
unspec 80-00-02-08-FE-80-00-00-00-00-00-00-00-00-00-00 txqueuelen 256 (UNSPEC)
RX packets 4802046 bytes 261215638 (249.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2743236 bytes 43086382286 (40.1 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
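The ifconfig output above shows MTU 2044, which means the IPoIB interface is running in datagram mode. Would switching to connected mode with a bigger MTU help? A sketch of what I mean in /etc/network/interfaces (assuming the interface name stays ibp5s0 and that this really is mlx4 IPoIB):
auto ibp5s0
iface ibp5s0 inet static
    address 10.0.0.2
    netmask 255.255.255.0
    # assumption: IPoIB exposes the datagram/connected switch in sysfs
    pre-up echo connected > /sys/class/net/ibp5s0/mode
    # 65520 is only valid once the interface is in connected mode
    mtu 65520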
Testing an SCP transfer using a RAM disk I get only 347.2 MB/s.
On an SFP+ 10G link I get 437 MB/s...
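Since scp itself can be limited by the ssh cipher on a single core, I also want to measure raw TCP throughput to separate the network from scp (a sketch; iperf3 is not in my package list above and would need to be installed first):
# on the node with 10.0.0.2
iperf3 -s
# on the other node, 4 parallel streams for 30 seconds
iperf3 -c 10.0.0.2 -P 4 -t 30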
Do I have to do some tuning?
Does anyone get better performance using Mellanox 40G?
Thanks!