iSCSI multipath vs bonding 802.3ad

benoitc

Dec 21, 2019
Is there any advantage in using multipath vs 802.3ad bonding? If I understand correctly, multipath is better for cases where we are using different switches and probably different network paths, but in a local network is there any benefit to it?
 
Hi,

if you ask me, it is always better if the application is able to do the load balancing itself.
But network bonding is simpler to set up.
 
LACP will only use a single link for one TCP connection, no matter how many links you bond together. With lots of connections the load will be spread. But if you have a single iSCSI LUN (with LVM on top), then there's a single connection from PVE to your SAN, so it will only be able to use the capacity of one NIC.
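Just to illustrate the point, here is a minimal sketch of an 802.3ad bond on a Debian/Proxmox-style host (not from this thread; the NIC names, address and hash policy are assumptions). Even with a layer3+4 transmit hash, one iSCSI session is a single TCP flow, so it always lands on one member link:

Code:
    # /etc/network/interfaces (ifupdown-style, names are examples)
    auto eno1
    iface eno1 inet manual

    auto eno2
    iface eno2 inet manual

    auto bond0
    iface bond0 inet static
            address 10.10.1.10/24
            bond-slaves eno1 eno2
            bond-miimon 100
            bond-mode 802.3ad
            bond-xmit-hash-policy layer3+4
            # layer3+4 spreads *different* TCP flows across members;
            # a single iSCSI connection still uses only one NIC.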
 
I used to have the same question for quite a long time and always assumed it came down to personal preference and the system environment (OS, hardware, etc.).

After doing some research, as well as confirming with the official support of one of the most popular virtualization platforms on the market, bonding/teaming is now basically out of the picture for me whenever iSCSI is involved. I understand one may argue it really depends on the OS, the hardware (e.g. the SAN), etc., but as far as I know, multipath/MPIO is the iSCSI protocol's native way of handling redundancy (and it may boost performance as well). Bonding/teaming, on the other hand, is done at the network level, outside the iSCSI protocol's control, and may therefore get in the way of how iSCSI handles failover.
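To make that concrete, a minimal MPIO sketch on a Linux initiator could look like the following (the portal IPs are made up and the multipath.conf is deliberately bare; a real setup should follow the SAN vendor's recommended settings). The idea is to log in through two separate storage NICs/subnets and let dm-multipath aggregate the resulting paths into one device:

Code:
    # Discover and log in to the target through two separate portals (example IPs)
    iscsiadm -m discovery -t sendtargets -p 10.10.1.100
    iscsiadm -m discovery -t sendtargets -p 10.10.2.100
    iscsiadm -m node --login

    # Bare-bones /etc/multipath.conf
    defaults {
        user_friendly_names yes
        find_multipaths     yes
    }

    # Pick up the config and rebuild the maps
    systemctl restart multipathd
    multipath -r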

The hardware SANs I've come across (Dell EqualLogic, etc.) don't support NIC teaming, and interestingly there doesn't seem to be any way to configure teaming on that type of hardware SAN even if it were the "preferred" method. People who ask about NIC teaming in relation to iSCSI are usually talking about the client/initiator side (unless it's about setting up an iSCSI target on Linux/Windows, etc.).

I can fully understand that configuring MPIO can be a pain. But again, depending on the environment, I personally find configuring iSCSI more tedious on Linux than on VMware (vCenter/ESXi) or Windows. I don't know if this is a chicken-and-egg question, but I feel like this is why there seems to be a "norm" of Linux users preferring NIC teaming over MPIO. It's just my personal hunch.
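That said, checking whether MPIO is actually doing its job on Linux is not too bad (again just a sketch, no values specific to this thread):

Code:
    # List iSCSI sessions with the interface/portal each one is using
    iscsiadm -m session -P 3

    # Show the multipath topology and per-path state (active/failed, etc.)
    multipath -ll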

There's one interesting "exception" I can think of.

When it comes to hardware purchasing decisions, I'd highly recommend having separate/dedicated NICs for storage, especially for iSCSI.

On a side note, NFS might be a bit different: as I recall, older versions of NFS didn't support multipathing, so that's where NIC teaming comes into play.
 
