network switch type for network bonding

RobFantini
May 24, 2012
For a PVE cluster I want to set up a network bond using four or more Gigabit Intel server NICs at each node.

My question is: what is the best type of switch, managed or unmanaged?
 
If you want bonding you will need a switch that supports the feature, which usually means a managed switch. I don't know of any switches that support it that are not managed.

#stayparanoid
 
Hi Rob,

Possible solutions depend on what you want to achieve. Some ideas on this:

1) Hostname load balancing: if you want to make your (web) service highly available or increase its network throughput, then you can give each of your NICs its own IP address (not even necessarily in the same subnet) and have your network resolve your hostname or URL to any of the IPs, e.g. in a round-robin fashion. DNS load balancing is not sophisticated, though, as it cannot check the availability of your NICs. To improve this kind of solution you would need a load-balancer appliance in your network. Hostname- or URL-based load balancing I would refer to as L3 (OSI layer 3) load balancing.
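To illustrate, a minimal sketch of round-robin DNS: two A records for the same name in a BIND-style zone file, so resolvers hand out the addresses in rotating order. The hostname and addresses below are placeholders, not anything from Rob's setup:

    ; hypothetical zone-file excerpt: one name, two addresses, rotated per query
    www.example.com.    300    IN    A    192.0.2.10
    www.example.com.    300    IN    A    192.0.2.11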
 
The above only works for incoming sessions. If the server/host needs to use multiple NICs to initiate sessions from its own side, then NAT overload might be needed (a technique that allows multiple NICs, not necessarily on the same server, to share a single IP address for outgoing connections).
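As a rough sketch of NAT overload on a Linux box acting as the gateway (the outgoing interface name eth0 is a placeholder), a single iptables masquerade rule lets many internal addresses share the one public IP of that interface:

    # source-NAT everything leaving via eth0 behind that interface's address
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE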

The bottom line here is that L3 load balancing does not involve the use of bonding but relies on the availability of L3 network devices or appliances that can do the things described above (such as load-balancing appliances or routers). If this is not what you're looking for, please see my next reply.
 
2) Bonding I would consider part of an L2 (OSI layer 2) solution. Such a solution allows a single (virtual) MAC address to be shared between multiple NICs.
If bonding were to be used purely for availability, then I agree with Dietmar that using 4 NICs is overkill. But to get more bandwidth, 4 NICs could be viable, provided there is no other bandwidth bottleneck in the network between the clients and servers.

Note that in the case of bonding for availability reasons, I see no need for intelligent (managed) switches. For example, Windows SLB (server load balancing) AFAIK uses teaming of NICs without the need for any configuration on the switch side. It will simply present the vMAC (address) on one of the switch ports as long as the link to it is up, not using the second link. But you should verify this, because I'm recalling it off the top of my head. Note also that both NICs need not necessarily be connected to the same switch (for availability reasons it's even better if they're not).
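On the Linux/Proxmox side, the switch-independent counterpart of such teaming is an active-backup bond. A minimal sketch for /etc/network/interfaces (ifupdown/ifenslave style; the interface names and address are placeholders, not taken from Rob's setup):

    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-miimon 100

Only one slave carries traffic at a time; the switch needs no special configuration, so an unmanaged switch is enough for this mode.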
 
Bonding continued ...

When using bonding for bandwidth reasons, I guess Pirateghost is right: you might want (or need) to set up channel interfaces (multiple NIC/link interfaces controlled by special link protocols like LACP). For the latter you need intelligent (managed) switches. The server will then be able to present its vMAC on all channel members simultaneously with the help of the switch, without the risk of creating a network loop.
Unfortunately I can't give you any dos and don'ts right away, for I have not yet tried this at home. I surely will some time, for I have managed switches and servers with 3 NICs (one on the mobo for management and two in PCI slots for data and VMs).
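For reference, a sketch of what such a channel (LACP / 802.3ad) bond could look like in /etc/network/interfaces on the Proxmox side, with the bond bridged into vmbr0 for the guests. The interface names and address are placeholders, and the switch ports must be configured as a matching LACP group:

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer2+3

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

Keep in mind that a single TCP stream still uses only one member link; the extra bandwidth shows up when several clients or streams get balanced across the members by the hash policy.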

Note that with channeling, your server NICs might still be hooked up to two different switches for even better performance and availability. But this makes your switch setup more complicated, because your switches must support channeling across multiple switches (often called MLAG or stacking). So if you want this, also look for these additional features when selecting your switch(es).

That'll be all for now.

Good luck,

Steijn
 