I have an HP Microserver at home with Proxmox 4.
For the past few months it has been running from a single NIC. A few days ago I remembered that it has a secondary NIC, which would be nice to use for some extra network performance on this server.
My VMs are all bound to vmbr0. eth0 was added as a bridge port and everything was working fine.
I then created bond0 with eth0 + eth1 and no IP configuration. Bonding is done using balance-alb, which seemed to be the best approach from what I've gathered.
vmbr0 was modified to bridge with bond0 instead of eth0 directly, and it keeps the same IP it had before the change. The "VLAN Aware" box has been ticked, although I'm not sure what it is supposed to do and whether I need it; at this time I'm not actively using VLANs within my own network.
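For reference, the relevant part of my /etc/network/interfaces now looks roughly like this (the address is a placeholder for my setup, and the option names are the ifenslave/bridge-utils ones as written on Proxmox 4 / Debian Jessie):

```
auto bond0
iface bond0 inet manual
    bond_slaves eth0 eth1
    bond_mode balance-alb
    bond_miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes
```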
The server is connected to a ZyXEL GS1900-24E running the latest available firmware.
After the change, everything seems to work just fine. It even feels a bit snappier (although that might also be because the server was rebooted after ~280 days and received a fresh kernel, etc. ;-) ). However, one of the LXC containers has intermittent issues reaching my Philips Hue bridge API. If I test it with curl from this particular container, the transfer hangs for a bit while receiving the response. Also, it sometimes takes ~15 seconds before it actually connects.
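To separate "slow to connect" from "slow to transfer", I've been timing a bare TCP connect from inside the container with a small Python helper (the Hue bridge address below is a placeholder):

```python
import socket
import time

def connect_time(host, port, timeout=20.0):
    """Time a bare TCP connect to (host, port).
    Returns the elapsed seconds, or None if the connect fails.
    A high value here points at the connect phase, not the HTTP transfer."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

# 192.168.1.50 is a placeholder for my Hue bridge's address:
# print(connect_time("192.168.1.50", 80))
```

When the problem occurs, the connect itself takes many seconds; a healthy run returns in a few milliseconds.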
The Hue bridge is connected to the same switch and all ports belong to the same (default) VLAN.
I then switched to balance-tlb, but this doesn't appear to have solved the problem. I am now testing with active-backup, but that only provides failover, not load balancing. This mode doesn't seem to have the problem, presumably because all traffic originates from one NIC.
I'm not sure whether I can use the other bonding modes (e.g. balance-rr, balance-xor). According to the Link Aggregation page on Wikipedia, only balance-tlb and balance-alb don't require "special network switch support".
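As I understand it, the reason balance-xor (and LACP) keep a given connection on one NIC is the transmit hash: the slave is chosen from the packet's addresses/ports, so the same flow always hashes to the same interface. A simplified sketch of the idea for a layer3+4-style policy (an illustration, not the kernel's exact formula):

```python
import ipaddress

def slave_for_flow(src_ip, dst_ip, src_port, dst_port, n_slaves):
    """Pick a bond slave for a flow, layer3+4 style:
    XOR the IP addresses and ports together, then reduce modulo
    the slave count. Because the inputs are constant for one
    connection, the chosen slave is constant too.
    (Simplified illustration, not the kernel's exact hash.)"""
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    h ^= src_port ^ dst_port
    return h % n_slaves
```

The upshot: per-flow pinning avoids the out-of-order and ARP-trickery issues of the balancing modes, at the cost of only balancing across flows, not within one.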
My switch also supports LACP, but I'm not sure whether I need LACP or "Static mode", and whether either would have any positive effect on my problem. The description of balance-xor sounds pretty nice: with the layer2+3 or layer3+4 hash policies, traffic for a given peer would keep originating from the same NIC.
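If I do end up trying LACP, my understanding is the host side of the bond would look roughly like this (802.3ad also requires the two switch ports to be configured as an LACP trunk on the GS1900; option names as used by ifenslave on Proxmox 4 / Debian Jessie):

```
auto bond0
iface bond0 inet manual
    bond_slaves eth0 eth1
    bond_mode 802.3ad
    bond_miimon 100
    bond_xmit_hash_policy layer3+4
```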
Does this described behavior make sense to anyone?