So in any configuration where the storage subsystem delivers more than 300 MB/s, a 10GbE link is _mandatory_ for DRBD if you don't want the network to be the bottleneck!?
Now I'm thinking of distributing my DRBD links across several [2|3]x1Gb bonds? Even if no single LAG can match the array's throughput, at least the aggregate network throughput would be maxed out... hmm, but the benefit really depends on my use case... sorry, thinking out loud!
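Something along these lines is what I have in mind - two independent DRBD resources, each replicating over its own bond (hostnames, device names and addresses below are just placeholders):

    # /etc/drbd.d/r0.res - replicates over the first bond
    resource r0 {
      on node-a {
        device    /dev/drbd0;
        disk      /dev/vg0/lv_r0;
        address   10.0.0.1:7788;   # IP configured on bond0
        meta-disk internal;
      }
      on node-b {
        device    /dev/drbd0;
        disk      /dev/vg0/lv_r0;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

    # /etc/drbd.d/r1.res - replicates over the second bond
    resource r1 {
      on node-a {
        device    /dev/drbd1;
        disk      /dev/vg0/lv_r1;
        address   10.0.1.1:7789;   # IP configured on bond1
        meta-disk internal;
      }
      on node-b {
        device    /dev/drbd1;
        disk      /dev/vg0/lv_r1;
        address   10.0.1.2:7789;
        meta-disk internal;
      }
    }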
Bests
Yes, I run a two-node DRBD cluster in active/active mode, with 2x 10GBit in bonding mode 1 (active/backup) for replication and the 1GBit interfaces for management and bridges (also bonding mode 1).
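In case it helps, the replication bond can look roughly like this in a Debian-style /etc/network/interfaces (interface names and the address are placeholders, and the ifenslave package is assumed):

    # 2x 10GBit bond for DRBD replication, mode 1 = active-backup
    auto bond0
    iface bond0 inet static
        address      10.0.0.1
        netmask      255.255.255.0
        bond-slaves  eth2 eth3
        bond-mode    active-backup
        bond-miimon  100          # check link state every 100 ms
        bond-primary eth2         # prefer eth2 while its link is up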
If you wonder why not bonding mode 0 (balance-rr): because I want the highest availability at minimum risk.
In bonding mode 0 you need to (well, you should) connect the links to separate switches, and if the uplink of one of those switches stops working, you have a server outage.
Explanation:
Bonding mode 0 distributes the packets round-robin over the available wires. If the uplink of one of your switches fails, the link of the NIC connected to that switch stays up, so the bonding driver keeps putting packets on that wire.
The result is 50% packet loss with two NICs, 33% with three NICs, and so on, because there is no control mechanism.
And that amount of packet loss makes all your hosted services unavailable.
And for more DRBD throughput you can offload the bitmap reads/writes to a separate device - a small SSD would be a good choice for that.
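A minimal sketch of what that looks like in the resource config, using external metadata on a small SSD partition (device names, hosts and addresses are just examples):

    resource r0 {
      on node-a {
        device    /dev/drbd0;
        disk      /dev/sdb1;       # backing data device on the big array
        address   10.0.0.1:7789;
        meta-disk /dev/sdc1[0];    # bitmap + activity log on the SSD
      }
      on node-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk /dev/sdc1[0];
      }
    }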