GigE cameras and bonding

Arkadiusz Raj

Hello

I have 3x Full HD GigE cameras and an app that must run in a separate VM for each camera.
Thus 3x cameras and 3x VMs, all with separate IP addresses in the same VLAN.

Every camera produces a feed of 400 Mb/s, but I would like to use 16-bit-wide pixel data, so every camera will then use 800 Mb/s.
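(As a sanity check on those numbers — the frame rate is not stated in the post, but assuming roughly 24 fps: 1920 x 1080 px x 8 bit x 24 fps ≈ 398 Mb/s, which matches; at 16 bit that becomes ≈ 796 Mb/s per camera, i.e. about 2.4 Gb/s for all three.)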

I have the following network configuration:
  • 3x PoE GigE Vision cameras connected to a managed PoE switch (switch A).
  • Proxmox servers are connected to switch B (HPE 1820).
  • Between switches A and B there is a 4 Gb trunk (4x gigabit copper links).
  • One Proxmox server hosts the 3x VMs that shall receive the feeds from the cameras.
  • This Proxmox VE server has 3x Ethernet ports, two of them bonded; it also exposes a bridge for the VMs (see the sketch below).
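For context, the host side of a layout like that would typically look something like this in /etc/network/interfaces on the Proxmox node (the interface names eno1/eno2 and the address are placeholders, not taken from the post):

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2     # two of the three ports, bonded
        bond-miimon 100
        bond-mode 802.3ad         # or a static mode; see the discussion below

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.10.5/24   # placeholder address
        bridge-ports bond0        # the bridge the 3x VMs attach to
        bridge-stp off
        bridge-fd 0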
I would like to use this trunk between the switch and the server to carry all communication between the camera/VM pairs. But the thing is that the traffic gets mixed across the links, and I am only able to start 2 cameras (400 Mb/s each), not all three.

Is there a conflict between Linux bonding and the HPE 1820 that requires me to set a particular load-balancing algorithm? No matter which one I use, and no matter whether the trunk is static (Linux then set to balance-rr) or dynamic (LACP), the situation does not change.
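A quick way to see what the Linux side actually ended up with (bonding mode, LACP partner state, active slaves) is the bonding proc file; assuming the bond is named bond0 and its members are eno1/eno2:

    cat /proc/net/bonding/bond0    # mode, LACP aggregator/partner info, slave status
    ip -s link show eno1           # per-slave byte counters, to see how the
    ip -s link show eno2           # flows actually spread across the links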

Is bonding a bad idea here?

To utilise 3x 800 Mb/s I need an additional adapter, so that the trunk has 3 or 4 Gb/s of capacity.
But maybe, when adding another adapter to the Proxmox box, I should set up separate Ethernet interfaces for each VM instead?
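The "separate interfaces" variant would mean skipping the bond and giving each camera/VM pair its own physical port and bridge; a minimal sketch with placeholder names:

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports eno1         # dedicated port for camera/VM pair 1
        bridge-stp off
        bridge-fd 0

    auto vmbr2
    iface vmbr2 inet manual
        bridge-ports eno2         # dedicated port for camera/VM pair 2
        bridge-stp off
        bridge-fd 0

Each pair then gets a guaranteed 1 Gb/s path, which is enough for one 800 Mb/s feed, but the link redundancy of a bond is lost.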
 
1] Check that all trunks (A-B, B-Proxmox) are connected in LACP mode (no balance-rr, no master/slave, etc.).
2] Use IP-IP, or better IP+port-IP+port, as the balancing algorithm.
3] For redundancy, LACP is better than separate Ethernet interfaces to the VMs.

2x 400 Mb/s looks as if some interconnect is probably working only in 1 Gb/s mode with the remaining links as spares (aka master/slave): a single 1 Gb/s member fits 2x 400 = 800 Mb/s, but not 3x 400 = 1200 Mb/s. A sketch of points 1] and 2] follows.
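On the Proxmox side, points 1] and 2] would look roughly like this in /etc/network/interfaces (interface names assumed):

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad                  # LACP
        bond-xmit-hash-policy layer3+4     # hash on src/dst IP + TCP/UDP ports
        bond-miimon 100

The switch ends of both trunks would need matching LACP LAGs with an IP+port hash. Note that hashing balances per flow, not per packet, so with only three flows two of them can still land on the same member link.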
 
In my previous setup:
1] yes
2] yes
...and I still faced the problem.

Current config:
1] B-Proxmox is a static bond, no LACP; Linux is set to balance-rr, and the HP is set to load-balance by "src/dst IP and TCP/UDP port fields".
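(A side note on that combination: balance-rr sends successive packets round-robin across the members regardless of flow, while the HP hashes per flow, so the two directions are balanced differently; round-robin can also reorder packets within the UDP-based GigE Vision stream. The per-flow counterpart for a static trunk on the Linux side would be balance-xor, e.g.:

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode balance-xor              # static trunk, per-flow hashing
        bond-xmit-hash-policy layer3+4     # match the HP's IP + port hash

Interface names are placeholders again.)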

At the very beginning, when the whole system starts, I notice problems in communication, but after some time (a few minutes) communication stabilizes and works fine until the next restart of Proxmox or the switch.

The 2x 400 Mb/s limit, to me, was a problem in the collaboration between the Linux bond and the HP settings.
The other switch I have is a TP-Link 2600 series. It has no separate hashing settings per trunk, only a single global one (src MAC, dst MAC, src+dst MAC, src IP, dst IP, src+dst IP), which seems poorer in comparison to the HP one.
 
