Infiniband with proxmox?

Hi,
I have two InfiniBand cards (Mellanox MT25418) for testing purposes (DRBD), but I have problems getting them running.
Perhaps it's due to my ignorance of InfiniBand, so I would be happy if someone with InfiniBand knowledge could give me a hint.

I can load the kernel module (modprobe mlx4_ib) and get this message output:
Code:
mlx4_ib: Mellanox ConnectX InfiniBand driver v1.0 (April 4, 2008)
But no device appears (ib0?). I think I need further modules, but even with ib_core + ib_sa nothing happens - or I don't see it.
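I suspect the full chain should look roughly like this (I'm only guessing that ib_ipoib is the module that actually creates the ib0 network interface):
Code:
modprobe mlx4_core   # low-level ConnectX driver
modprobe mlx4_ib     # InfiniBand part of the ConnectX driver
modprobe ib_ipoib    # IP-over-InfiniBand - this should create ib0
ip link show ib0     # or: ifconfig -a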

I'm also unable to install ibutils - is there a repository for Lenny? On aptosid (formerly sidux) I can install ibutils...
Do I need further software?

Any suggestions are highly welcome!

Udo
 
Dear Udo,

did you get the InfiniBand setup running in the meantime? I'm trying the same thing: I have an ib0 IPoIB device set up, but right now I'm struggling to set up a second bridge (the first one is for Gigabit Ethernet), vmbr1 -> ib0.
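This is roughly what I have in /etc/network/interfaces for the second bridge (the address is just an example), and it refuses to come up:
Code:
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        bridge_ports ib0
        bridge_stp off
        bridge_fd 0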

Did anyone manage to set up a bridge over an InfiniBand network?
 
Hi,
I only made an unsuccessful attempt - I had the cards on loan for just a short period.
But please post the right way to do it if you have more success, because I want to try it again (once I have a little more time at work).

Udo
 
With 2.0 it is a piece of cake to get it working.
2.0 uses the hosts file to determine the IP when setting up the Proxmox cluster.
So you get IPoIB set up, edit /etc/hosts so your hostname resolves to the IPoIB IP instead of the vmbr0 IP, and then set up the cluster.
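Something like this in /etc/hosts (address and hostname are only examples, use the IPoIB address you gave ib0):
Code:
# point the node's hostname at the IPoIB address instead of the vmbr0 address
10.10.10.1   proxmox1.example.local proxmox1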

In 1.x the cluster is tied to vmbr0 and you cannot bridge IPoIB, so it does not work.
Now, I have not tried the following idea, but it might work:
Create a file in /etc/modprobe.d that aliases your IPoIB interface as vmbr0 (something like the sketch below)
Edit /etc/network/interfaces and change vmbr0 to vmbr1
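Untested sketch of the modprobe.d part (the file name is arbitrary and the old-style alias syntax may need tweaking):
Code:
# /etc/modprobe.d/ib-vmbr0.conf - untested: load the IPoIB driver when the name vmbr0 is requested
alias vmbr0 ib_ipoib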

I suspect this will allow the Proxmox cluster to work over InfiniBand, but you cannot use vmbr0 in any VM this way, since vmbr0 would no longer be a bridge.

The advantage of running the Proxmox cluster over InfiniBand is that live migrations go faster:
more bandwidth for copying memory and disk data.
Once you have it working, though, you will be disappointed.
Proxmox uses ssh for copying the data, and the encryption will be the bottleneck.
Changing the cipher in /usr/share/perl5/PVE/QemuMigrate.pm from blowfish to arcfour will get you better performance.
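A quick way to see how much the cipher matters is to push zeros over ssh between the nodes (the hostname is just a placeholder, numbers depend on your CPUs):
Code:
# compare raw ssh throughput with different ciphers over the IPoIB link
dd if=/dev/zero bs=1M count=1000 | ssh -c blowfish-cbc othernode 'cat > /dev/null'
dd if=/dev/zero bs=1M count=1000 | ssh -c arcfour othernode 'cat > /dev/null'
dd if=/dev/zero bs=1M count=1000 | ssh -c aes128-cbc othernode 'cat > /dev/null'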

If you want "OMG this is fast" performance you need a CPU with AES-NI (most newer Intel CPUs and AMD Bulldozer), set the cipher to aes128-cbc and then figure out how to get an OpenSSL lib that supports AES-NI.
With 2.0 I used the OpenSSL lib from Ubuntu 11.10 to get AES-NI working - not the sort of thing I would want to do on a production system, but it was simple and it worked.
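To check whether the CPU and the installed OpenSSL actually use AES-NI (just a sanity check, nothing Proxmox-specific):
Code:
grep -m1 -o aes /proc/cpuinfo     # prints "aes" if the CPU has AES-NI
openssl speed -evp aes-128-cbc    # should be several times faster with an AES-NI enabled libcrypto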
 
Thanks for your answer. e100, your hack to get Proxmox + IB working looks really curious, but it could indeed work.
To make the VMs communicate with the outside world you would have to let them speak via vmbr1, I think. Any idea how to let the VMs communicate via IB?
One solution could be to set up Single Root I/O Virtualization (SR-IOV), which makes the IB card behave like multiple (virtual) IB devices, and then pass one virtual IB device to each VM. But then we have the problem of migrating the VMs, because each VM is connected to a unique virtual IB device.
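As far as I understand, SR-IOV for the ConnectX cards is switched on via mlx4_core module options, roughly like this (untested on my side, the number of virtual functions is just an example):
Code:
# /etc/modprobe.d/mlx4-sriov.conf - ask mlx4_core to expose 8 virtual functions
options mlx4_core num_vfs=8 probe_vf=0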
So, any further ideas?