Anyone using multipath iSCSI over gigabit ports to increase throughput? Or balance-rr?

Miktash

Active Member
Mar 6, 2015
I have 10 Proxmox VM hosts and a shared storage server.
Each Proxmox host has 2x gigabit ports.
The shared storage has 6x gigabit ports.


I currently have each node set up to do LACP with the switch using 2 ports. The same goes for the storage, with 6 ports.
Everything works fine but, of course, my VMs' disk performance is limited to 1 gigabit, since LACP bonding won't stripe a single flow across multiple ports.
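
For reference, each node currently looks more or less like this in /etc/network/interfaces (interface names and addresses are just examples, not my real ones):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0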

Now I want higher throughput inside my VMs (disk performance) and I'm looking at the options I have.

The best/easiest option is upgrading to 10G. But that's not possible right now (too expensive).
The other options I can think of are:

1. Using the balance-rr mode of the bonding driver. It'll stripe traffic across multiple interfaces. I could set up VLANs and have traffic striped across them; it should work according to this document: http://louwrentius.com/achieving-450-mbs-network-file-transfers-using-linux-bonding.html (rough sketch of what I mean after this list).

or

2. Switch from NFS to iSCSI, more specifically iSCSI multipath.
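
For option 1, my understanding of that article is that each NIC pair (node port + storage port) goes into its own VLAN on the switch, so the switch doesn't get confused by the same MAC showing up on several ports, and the bond itself just round-robins over the physical interfaces. A rough, untested sketch of what the bond might look like on a node (names and IPs are made up):

Code:
auto bond0
iface bond0 inet static
        address 10.10.10.11
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode balance-rr
        bond-miimon 100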

I'm not a fan of iSCSI because my performance tests a while ago, when deciding between NFS and iSCSI, were disappointing. At the time I got considerably better performance over NFS with random I/O inside the VM, and throughput was better too. But maybe I did something wrong? ...
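
If I did go the multipath route, my understanding is it would look roughly like this: each node gets two storage IPs in separate subnets, the storage presents the same target on a portal in each subnet, the node logs in to both portals, and dm-multipath round-robins I/O over the two sessions. A minimal sketch (portal IPs and the multipath.conf values are just assumptions, not something I've tested):

Code:
# log in to the same target via both storage subnets (example portal IPs)
iscsiadm -m discovery -t sendtargets -p 10.10.10.1
iscsiadm -m discovery -t sendtargets -p 10.10.20.1
iscsiadm -m node --login

# /etc/multipath.conf - spread I/O across both paths
defaults {
        user_friendly_names yes
        path_grouping_policy multibus
        path_selector "round-robin 0"
}

# reload the maps and check that both paths show up as active
multipath -r
multipath -ll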


How do you guys increase throughput when using shared storage?
Or are you guys all rich and buy 10g stuff? :)