Hello
I have 3x FullHD GigE cameras, and an app that must run on a separate VM for each camera.
Thus 3 cameras and 3 VMs. All of them have separate IP addresses in the same VLAN.
Every camera produces a feed of about 400 Mb/s, but I would like to use 16-bit-wide pixel data, so every camera will then use about 800 Mb/s.
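As a rough sanity check (taking ~24 fps as an example frame rate): 1920 × 1080 px × 16 bit × 24 fps ≈ 796 Mb/s per camera, which lines up with the ~400 Mb/s figure for 8-bit data; three cameras together therefore come to roughly 2.4 Gb/s of raw video.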
I have the following network configuration:
- 3x PoE GigE Vision cameras connected to a managed PoE switch (switch A).
- The Proxmox servers are connected to switch B (HPE 1820).
- Between switches A and B there is a 4 Gb trunk (4x gigabit copper links).
- One Proxmox server hosts the 3 VMs that shall receive the feeds from the cameras.
- This Proxmox VE server has 3 Ethernet ports; two of them are bonded, and the bond carries the bridge exposed to the VMs (sketched below).
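For reference, this is roughly what the relevant part of my /etc/network/interfaces looks like (the interface names, the address and the hash policy below are placeholders, not my exact values):

```
# physical ports used by the bond
iface eno1 inet manual
iface eno2 inet manual

# two ports bonded towards the HPE 1820
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad             # or balance-rr when the switch trunk is static
    bond-xmit-hash-policy layer2+3

# bridge the VMs attach to
auto vmbr0
iface vmbr0 inet static
    address 192.168.10.10/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```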
Is there a conflict between Linux bonding and the HPE 1820, where I have to specify the load-balancing algorithm? No matter which algorithm I choose, and no matter whether the trunk is static (Linux then set to balance-rr) or dynamic (LACP), the situation does not change.
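For what it is worth, I check what the Linux side has actually negotiated with the standard bonding status file (this only reads the driver state, it changes nothing):

```
# shows the bond mode, per-slave link state and, for 802.3ad, the LACP partner info
cat /proc/net/bonding/bond0
```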
Is bonding a bad idea here?
To utilise 3x 800 Mb/s I need an additional adapter, so that the trunk to the server reaches 3 or 4 Gb/s.
But maybe, while adding another adapter to the Proxmox box, I should set up a separate Ethernet interface for each VM instead?
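If I go that route, my understanding is that it would look something like the sketch below: one bridge per physical port, each VM attached to its own bridge, so every camera-to-VM flow gets a dedicated gigabit link (the port and bridge names, including the extra eno4 adapter, are assumptions):

```
# one bridge per dedicated port, no bond; management stays on vmbr0
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge-ports eno3
    bridge-stp off
    bridge-fd 0

auto vmbr3
iface vmbr3 inet manual
    bridge-ports eno4       # eno4 = the additional adapter I would add
    bridge-stp off
    bridge-fd 0
```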