The above solution did not work on 3 of my servers, all of them Dell hardware. What actually did work is below:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet...
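The bond0 stanza above is cut off. For reference, a typical completion looks like the sketch below; the bond mode, MII interval, and addressing are assumptions, not the original poster's actual values, so adjust them to your network:

```
# Sketch of a typical bond0 stanza (values are illustrative assumptions)
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-slaves eth0 eth1
    bond-mode active-backup   # or 802.3ad if your switch supports LACP
    bond-miimon 100           # link-check interval in ms
    bond-downdelay 200
    bond-updelay 200
```

Note that `bond-slaves` on bond0 complements the `bond-master` lines on the slave interfaces; on some ifenslave versions only one of the two styles is needed.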
I ran Iometer (this test case: http://vmblog.pl/OpenPerformanceTest32-4k-Random.icf ) on a Windows VM, but the results were mixed. Usually I get lower latency and lower load on the storage server. I use Mellanox Technologies MT25208 InfiniHost III Ex on the storage servers and Mellanox Technologies MT25418...
My results are mixed. Load on the storage server is lower and latency is usually better with RDMA, but besides that it's hard to tell whether you gain or lose with RDMA. Running Iometer on a Windows VM I get the following results (14 SATA disks with 2 Intel DC S3700 100GB drives in RAID1 as...
I have been successfully using NFS over RDMA with vers=4. The trick is to load xprtrdma on the client and put sunrpc.rdma_memreg_strategy=6 in /etc/sysctl.conf. I did this on a standard installation; no external OFED repositories were added.
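Putting the steps above together, the client-side setup might look like the following sketch. The export path, server name, and use of /etc/modules and /etc/fstab are assumptions for illustration; port 20049 is the standard NFS/RDMA port:

```
# /etc/modules — load the NFS RDMA transport at boot
# (assumption: your distro reads this file; otherwise run `modprobe xprtrdma`)
xprtrdma

# /etc/sysctl.conf — memory registration strategy, as described above
sunrpc.rdma_memreg_strategy = 6

# /etc/fstab — example NFSv4-over-RDMA mount (server and path are placeholders)
nfsserver:/export  /mnt/nfs  nfs4  rdma,port=20049  0 0
```

After editing, apply with `sysctl -p` and `mount /mnt/nfs`; `mount | grep rdma` should then show `proto=rdma` on the mount.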