Storage array: OmniOS with zpools
InfiniBand: Voltaire dual-port InfiniBand HCA PCIe 500Ex-D (DDR)
Switch: none, the nodes are direct connected
Subnet manager: opensm running on the Proxmox nodes (gives fail-over, since the second opensm instance automatically stays in standby mode; see the sketch below)
Proxmox: 2 nodes running 3.3 with a Qdisk on separate shared storage
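For reference, a minimal sketch of the opensm fail-over pair (the priority values and flags here are my assumptions, adjust to taste; opensm elects the instance with the highest priority as master and the other drops to standby):

```
# On both Proxmox nodes (Debian wheezy / Proxmox 3.x):
apt-get install opensm

# Node 1: prefer this instance as master (priority range 0-15, higher wins)
opensm --priority 15 --daemon

# Node 2: lower priority, stays in standby until node 1's opensm dies
opensm --priority 10 --daemon
```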
What is the best way to implement storage for Proxmox over InfiniBand?
IPoIB: This should be straightforward and will involve the ZFS plugin (see the storage.cfg sketch below). Live migration, storage migration, cloning, and snapshots all work.
SRP: LVM groups with network backing. Live migration and storage migration work, but no cloning or snapshots.
iSER: LVM groups with network backing. Live migration and storage migration work, but no cloning or snapshots.
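For the IPoIB route, the storage definition would look roughly like this; a sketch assuming the ZFS over iSCSI plugin with COMSTAR on the OmniOS box (pool name, IPoIB portal address, and target IQN are placeholders):

```
# /etc/pve/storage.cfg
zfs: omnios-ib
    pool tank
    portal 10.10.10.1
    target iqn.2010-08.org.illumos:target0
    iscsiprovider comstar
    blocksize 8k
    content images
```

The iSCSI traffic then simply runs over the IPoIB interface, while Proxmox manages the zvols on the array over ssh, which is what enables snapshots and clones.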
IPoIB
pro:
- Complete storage support
- Flexible
con:
- Lower performance
- Less mature in Proxmox, but it has been running stable here for 1.5 years
SRP and iSER
pro:
- Higher performance
- Mature in Proxmox
con:
- No cloning and snapshot features
- Less flexible
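For comparison, the SRP/iSER route would look roughly like this; a sketch with placeholder target and volume-group names (srp_daemon and the iSER transport binding are standard srptools/open-iscsi usage):

```
# SRP: load the initiator module and add discovered targets once
modprobe ib_srp
srp_daemon -e -o -n

# iSER: bind an open-iscsi interface to the iSER transport and log in
iscsiadm -m iface -I iser0 -o new
iscsiadm -m iface -I iser0 -o update -n iface.transport_name -v iser
iscsiadm -m node -T iqn.2010-08.org.illumos:target0 -p 10.10.10.1 -I iser0 --login
```

Either way you end up with a plain block device, on top of which you create the shared volume group and define it in /etc/pve/storage.cfg:

```
lvm: ib-lvm
    vgname vg_ib
    shared 1
    content images
```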
Benchmarks found via Google show that IPoIB in connected mode with a 64k MTU can easily saturate InfiniBand DDR, so at DDR speeds the performance gap is less of a deal breaker.
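In case it is useful to anyone, switching to connected mode with the big MTU is just (assuming the IPoIB interface is ib0):

```
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
```

65520 is the maximum MTU for IPoIB connected mode; in datagram mode you are limited by the much smaller IB link MTU (2044 or 4092 bytes).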
Summing it all up: when the storage array is ZFS, the best option should be IPoIB, since that gives the full-featured Proxmox storage model.
What do you think?