So, complete Proxmox noob here. I've been reading about Proxmox, iSCSI, multipathing, etc., and I'm not clear on the benefits for my case.
I'm setting up two nodes for now (I'll probably grow these into mirrored/redundant pairs later): an R620 and an R720, both with dual 10G NICs. I'm also configuring an R720XD as a FreeNAS box for shared storage, which I've planned to use as iSCSI on top of ZFS, since that storage type can do everything Proxmox needs (live migration, snapshots, dir, etc.). I have VLAN100 for the VMs and VLAN101 for the SAN. The FreeNAS box will have 7,200 RPM SATA spinning media.
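On the Proxmox side, I'm assuming the storage entry in /etc/pve/storage.cfg would end up looking something like this, at least for a plain iSCSI target (the portal IP and target IQN here are made-up placeholders, not my actual values):

```
iscsi: freenas-san
        portal 10.10.101.10
        target iqn.2005-10.org.freenas.ctl:proxmox
        content images
```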
For now, I've started configuring the Proxmox nodes with the 10G NICs bonded and attached to the bridge, set up as LACP/802.3ad on VLAN100 on my UBNT switches, plus 4 1G NIC ports bonded on the SAN VLAN (I had originally planned a Drobo with iSCSI and only 1G ports). After reading that I shouldn't put iSCSI on LACP, I'm not sure how to configure it. I did read the section on multipathing, but I didn't understand it and it seemed fairly complicated.
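For reference, here's roughly what I have so far in /etc/network/interfaces on each node (interface names and IPs are just examples; mine may differ):

```
# 2x10G LACP bond for VM traffic (VLAN100 handled on the UBNT switch side)
auto bond0
iface bond0 inet manual
        bond-slaves enp65s0f0 enp65s0f1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4

# bridge for the VMs on top of the bond
auto vmbr0
iface vmbr0 inet static
        address 10.10.100.11     # placeholder management IP
        netmask 255.255.255.0
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

# 4x1G LACP bond on the SAN VLAN (the part I'm now unsure about for iSCSI)
auto bond1
iface bond1 inet static
        address 10.10.101.11     # placeholder SAN IP
        netmask 255.255.255.0
        bond-slaves eno1 eno2 eno3 eno4
        bond-mode 802.3ad
        bond-miimon 100
```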
Question: with 10G connections, would you expect me to see improved performance with multipathing? I kind of think not, so I'm tempted to give the nodes and the SAN one 10G interface on VLAN100 for the VMs and one 10G NIC on VLAN101 for storage and call it a day. Sound strategy?
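If that's sound, each node would simplify to something like this (again, example names/IPs, no bonds at all):

```
# first 10G NIC: VM traffic on VLAN100, via the bridge
auto vmbr0
iface vmbr0 inet static
        address 10.10.100.11
        netmask 255.255.255.0
        bridge-ports enp65s0f0
        bridge-stp off
        bridge-fd 0

# second 10G NIC: storage traffic on VLAN101, straight to the SAN
auto enp65s0f1
iface enp65s0f1 inet static
        address 10.10.101.11
        netmask 255.255.255.0
```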