I have a working IPoIB setup here. By partition, do you mean P_Keys?
Have you read this? https://www.kernel.org/doc/Documentation/infiniband/ipoib.txt
And this howto: http://www.rdmamojo.com/2015/04/21/working-with-ipoib/#Partitioning_in_IPoIB_VLAN_equivalent
Partition configuration is stored in:
/etc/opensm/partitions.conf
If the file does not exist, create it and add the following lines:
Default=0xffff, ipoib : ALL=full;
vlan1=0x8001, ipoib : ALL=full; # ALL can also be limited
vlan2=0x8002, ipoib : ALL=full; # ALL can also be limited
Restart opensm: # service opensm restart
Check the partition keys of a node (4 here is the node's LID): # smpquery pkeys 4
To get the LID of the node where OpenSM is running: # sminfo
Add a child interface for IPoIB (the value written is the P_Key): # echo 0x8001 > /sys/class/net/ib0/create_child
Check that the interface was created: # ifconfig ib0.8001
Configure the interface in /etc/network/interfaces (see the example stanza below): # nano /etc/network/interfaces
Start the IB interface: # ifup ib0.8001
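A minimal /etc/network/interfaces stanza for the child interface could look like the sketch below; the address and netmask are assumptions, and connected mode with a large MTU is optional but common for IPoIB:

auto ib0.8001
iface ib0.8001 inet static
    address 10.10.8.1
    netmask 255.255.255.0
    pre-up echo connected > /sys/class/net/ib0.8001/mode
    mtu 65520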
Does anybody have an idea whether creating a bridge with IB ports is out of the question or not?
What would be the best way to connect the IB to a VM?
To address this, what we did was set up a couple of Proxmox nodes in the same cluster with ZFS and Gluster on top, and attach the Gluster volume to Proxmox storage.
Could you please give details on how to set up Proxmox nodes in the same cluster with ZFS and Gluster on top and then attach the cluster to Proxmox storage?
There are no tricks in this really. We simply followed the Proxmox wiki to set up a Proxmox node with ZFS and Gluster. Below are simplified steps:
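A rough sketch of what those steps can look like, with assumed names you would replace with your own (pool tank, volume gv0, nodes 10.10.10.1 and 10.10.10.2 on the backup network):

# On each node: create a ZFS pool and a dataset for the Gluster brick (disks are examples)
zpool create tank mirror /dev/sdb /dev/sdc
zfs create tank/brick

# Install Gluster, peer the nodes and create a replicated volume
apt-get install glusterfs-server
gluster peer probe 10.10.10.2
gluster volume create gv0 replica 2 10.10.10.1:/tank/brick/gv0 10.10.10.2:/tank/brick/gv0
gluster volume start gv0

# Attach the volume to Proxmox as backup storage, e.g. in /etc/pve/storage.cfg
glusterfs: gluster-backup
        server 10.10.10.1
        server2 10.10.10.2
        volume gv0
        content backup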
1) I am assuming these two nodes are on a separate InfiniBand switch. Correct?
You can set it up either way: with a separate switch on separate IB interface cards, or coexisting with the existing IB network for Ceph if you have one. If you are using 10+ Gbps IB, the backup will not consume the entire bandwidth, since HDD write speed is the limiting factor. We have both setups and the performance difference is not that noticeable. If you are not sharing the existing IB switch, then you have to drop extra IB cards into all the nodes in the cluster, connected through a separate switch.
2) Are these two nodes on their own Proxmox cluster? Do you install Ceph on them?
The nodes do not need to be on their own cluster at all. They also do not need Ceph installed, since their only duty is to serve Gluster on ZFS for backup storage. The nodes can be part of the existing Proxmox cluster so you can monitor them through the same GUI. In that case, the nodes will have one interface (Gigabit) for Proxmox cluster communication while the IB interface is used for backup.
3) Since these are on a separate InfiniBand switch, how do you get this cluster to communicate with the other Ceph cluster or Proxmox cluster to back up the VMs?
By installing separate IB cards in each node. Let's assume that in your existing environment each node has one Gigabit NIC used for Proxmox cluster communication and one IB NIC used for the Ceph network. For your backup network you are going to drop another IB NIC into each node and configure them with a new subnet (see the sketch below). You are basically creating a new network for backups.
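As an example, the extra IB NIC on each node could be configured like the sketch below in /etc/network/interfaces; the interface name ib1 and the 10.20.20.0/24 backup subnet are assumptions:

auto ib1
iface ib1 inet static
    address 10.20.20.11
    netmask 255.255.255.0
    # connected mode and a large MTU are the usual IPoIB settings, adjust if your fabric differs
    pre-up echo connected > /sys/class/net/ib1/mode
    mtu 65520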