NIC bond for Ceph

driftux

I want to bond NIC-A and NIC-B in order to get both performance and fail-over. This bonded interface will be used for the Ceph public and private networks.
So now the question: can I create a CT or VM from the Proxmox GUI and add NIC-A as the network card for the CTs' and VMs' networking?
 
If I do VLANs, won't the performance of the network be affected?

I forgot to mention that I will have 2 switches. One cable from NIC-A goes to Switch-A and the one from NIC-B goes to Switch-B. This is for load balancing and fail-over.
Do I understand correctly that I don't need to configure anything on either switch? VLAN-A will go to Switch-A and VLAN-B goes to Switch-B. Am I right?
 
It's more the opposite. With a bond, both the frontend and the backend network can use double the bandwidth if necessary. And further networks (like the VM/CT network) can also be put on that bond, but they can of course be impacted if, for example, a rebalance is running.
Which bond mode do you plan to use? I'm no real expert in that field, I only read through several articles about bonding when I set up my Ceph cluster, but maybe I can give some input.
 
So you mean that I can create the bond in Proxmox and then use this bond for VMs and CTs as well.
That is great. I thought it was impossible.

Because my 2 switches don't support stacking and my 1 Gbps LAN cards don't support 802.3ad, I think my best bet is to use balance-alb (mode 6).
That should give me load balancing and fail-over in case one network card on the server or one of the switches fails.
But if you have another opinion, please let me know. I have never done LAN bonding before.

Do you think I should use a VLAN configuration in this scenario anyway, or can I go without it?
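
For what it's worth, I imagine the /etc/network/interfaces part would look roughly like this (just a sketch; eno1/eno2 and the addresses are placeholders for my real NICs and subnet):

    iface eno1 inet manual
    iface eno2 inet manual

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode balance-alb

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

If I understand it correctly, the VMs and CTs would then simply be attached to vmbr0 in the GUI.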
 
The VLAN-over-bond way is only recommended if you don't have enough high-speed NICs for separate frontend and backend networks. If that is the case, then you can definitely put the Ceph network(s) and the VM/CT networks in VLANs on a bond; I'm using the same setup here.
I don't think that putting Ceph traffic on the same network as the VMs/CTs is a good idea, since everyone could eavesdrop on the intra-cluster traffic. I would at least separate them via VLANs (if your hardware supports it, of course).
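
As a sketch of what I mean (the VLAN IDs and addresses are made up, adjust them to your environment), the Ceph networks get their own tagged interfaces on the bond:

    auto bond0.10
    iface bond0.10 inet static
        address 10.10.10.11/24
        # Ceph public network on VLAN 10

    auto bond0.20
    iface bond0.20 inet static
        address 10.10.20.11/24
        # Ceph cluster network on VLAN 20

The management IP and the VM/CT bridge (vmbr0 on bond0) can stay as they are, and on the switches you have to allow those VLANs as tagged on the relevant ports.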
 
Let's say I bond two NICs and add one VLAN on that bond for the public network and another VLAN for the private network. As I understand it, in case of a disk failure the recovery process will start and all my private network bandwidth (1 Gbps + 1 Gbps bond) will be used. The public network will suffer. So how is the VLAN scenario different from the situation without VLANs? As I understand it, without VLANs the same recovery would use all the bandwidth in the same way, just over a single network. So in both cases I will have to wait until the copying finishes and the whole cluster has rebalanced.
What is the point of using VLANs then? Sorry if my questions are silly.
 
Indeed, if your network is saturated, then there is no benefit. That's why I asked which bonding mode you were going to use. If you bond them with 802.3ad, you can end up with higher aggregate bandwidth. Not sure if it makes sense with alb, though. As I said, I'm not an expert in this field. :)
With VLANs you could at least employ some QoS rules to give higher priority to the backend, for example.
But are you sure about Ceph over 1 GbE? Do you have experience with such a setup?
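
Just to illustrate the mechanism (a sketch, not something I have tested; VLAN 20 as the Ceph backend and the priority value are assumptions, and your switches would have to be configured to honor 802.1p CoS for it to have any effect), the kernel can put a higher priority code point into the VLAN tag when the VLAN device is created:

    # map the default internal priority (0) to 802.1p priority 5 in the VLAN tag
    ip link add link bond0 name bond0.20 type vlan id 20 egress-qos-map 0:5

Normally the VLAN device comes from /etc/network/interfaces of course, so this is only to show the knob; the actual prioritization then has to happen on the switches.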
 
If I can use rules for the private network, it makes sense. Thanks, I will look for information on QoS! My network cards don't support 802.3ad.
From what I read, alb mode cannot load-balance a single stream, but Ceph generates many requests with a lot of streams, so I think it is still useful.
All my servers will have a single 2 TB OSD on an HDD. The servers will be used for an email service.
So as I understood it, 1 GbE should be enough, but in case of an OSD failure it would take about 6 hours to recover. As I understand it, though, such failures shouldn't happen very often.
Everything I write here is only based on information I read on the internet. I have never set up a cluster myself :(
So any comments are very valuable.
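
My rough calculation for the 6 hours, assuming roughly 110 MB/s of usable throughput on a single 1 GbE link:

    2 TB ≈ 2,000,000 MB
    2,000,000 MB / 110 MB/s ≈ 18,000 s ≈ 5 hours

With recovery overhead and the HDD itself slowing things down, something around 6 hours seemed realistic to me.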
 
I also set up my first Ceph cluster a few weeks ago, so I'm also just referring to stuff that I read.
Just give it a try, I'm sure it will work.
 
So you have much more experience than I do :)
Do I understand correctly that if I add 2 LAN adapters to a bond, with or without VLANs, the system will keep working if one NIC or one of the switches fails?
 
That is the rationale behind bonding, yes.

Make sure that the Debian package ifenslave is installed. It is there by default but gets removed if you install ifupdown2, for example.
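
If you want to double-check on your node, something like this should tell you (standard Debian tools, nothing Proxmox-specific):

    cat /proc/net/bonding/bond0   # shows the bonding mode, MII status and the state of each slave
    dpkg -s ifenslave             # tells you whether the package is still installed
    apt install ifenslave         # puts it back if it was removed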
 
I have ifupdown2 installed.

So I take it ifenslave is gone.

If I now reinstall ifenslave, will that affect ifupdown2, or can they coexist?

Thanks!
 
