Like a lot of people here, we are running a proof of concept with Proxmox VE 9.1 to replace VMware ESXi + vCenter.
We can't seem to find a definitive answer to our iSCSI networking questions. For the POC we will be using our Nimble Storage array, which has redundant controllers connected to two separate switches. The iSCSI traffic lives in a non-routed VLAN. Each of our two hosts has two physical NICs we can dedicate to iSCSI, with separate NICs for management, VM traffic, etc. We want to mimic our VMware setup as much as possible while recognizing that VMware and Proxmox are different, and we want the connections to be highly available and fault tolerant. How should we configure the NICs?
Should we:
1. Set an IP address on each individual NIC in the iSCSI VLAN and use multipath? (A rough sketch of what we mean is below the list.)
2. Add each NIC to a bond and use multipath?
3. Add each NIC to a bond and then a bridge and use multipath?
4. Any combination of the above with multipath?
5. Any combination of the above without multipath?
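For reference, here is roughly what we imagine option 1 looking like on one host. The NIC names, IP addresses, and MTU below are placeholders rather than our real values, and we would pull the Nimble-specific multipath device settings from HPE's Linux documentation rather than the bare-bones defaults shown here.

```
# /etc/network/interfaces (excerpt) -- option 1: an IP per dedicated iSCSI NIC,
# no bond and no bridge (interface names, subnet, and MTU are placeholders)
auto ens2f0
iface ens2f0 inet static
        address 10.10.50.11/24
        mtu 9000

auto ens2f1
iface ens2f1 inet static
        address 10.10.50.12/24
        mtu 9000
```

On top of that, we assume we would bind an open-iscsi interface to each NIC so both paths get their own session, and then let multipathd aggregate the resulting block devices (the discovery portal IP is again a placeholder):

```
# bind one iSCSI interface per physical NIC, discover through both, log in to all targets
iscsiadm -m iface -I iscsi0 --op=new
iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v ens2f0
iscsiadm -m iface -I iscsi1 --op=new
iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v ens2f1
iscsiadm -m discovery -t sendtargets -p 10.10.50.100 -I iscsi0 -I iscsi1
iscsiadm -m node -L all

# /etc/multipath.conf (excerpt) -- deliberately minimal placeholder; the real
# Nimble device stanza would come from the vendor documentation
defaults {
        user_friendly_names yes
        find_multipaths     yes
}
```

Is that the right general direction, or is one of the bonded options the better fit for Proxmox?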