Promiscuous bridge for LXC Container?

mattlach

Renowned Member
Mar 23, 2016
Boston, MA
Hey all,

I am trying to run ntopng in an Ubuntu 14.04 LTS container on my Proxmox host.

I set up my switch (an HP ProCurve 1810G-24) to mirror both RX and TX of the port connected to my router to a separate port on the switch.

Then I connected a designated NIC (eth3) on my Proxmox box to that port.

Judging by the very unscientific method of watching the activity LEDs, it appears that traffic is indeed being mirrored to the desired port.

Next, I created a new bridge (vmbr3) and added eth3 to this bridge.

After this, I created a new LXC container, where eth0 is connected to the normal network (vmbr0), and eth1 - configured in promiscuous mode - is connected to vmbr3, the dedicated bridge that has only the one physical interface.

I installed ntopng, which appears to be running properly, yet it is not receiving any of the mirrored packets.

I'm guessing there is something I need to do in order to allow eth3 and vmbr3 on the proxmox host to promiscuously forward everything received on eth3 to the LXC container, but I am not quite sure what that might be.
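To narrow down where the traffic stops, something like the following (plain tcpdump, interface names as above) should show whether the mirrored frames at least reach the host side. This is just a diagnostic sketch; tcpdump puts the capture interface into promiscuous mode itself.

```shell
# Check that mirrored frames arrive on the physical NIC
tcpdump -ni eth3 -c 20

# Then check whether they make it onto the bridge
tcpdump -ni vmbr3 -c 20
```

If eth3 shows the mirrored traffic but vmbr3 (or the container's eth1) does not, the frames are being filtered somewhere between the NIC and the bridge port.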

Can anyone lend me a hand?

Thanks,
Matt

My /etc/network/interfaces on the Proxmox host:

Code:
~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

iface eth4 inet manual

iface eth5 inet manual

auto bond0
iface bond0 inet manual
    slaves eth0 eth1 eth2
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer2
    bond-lacp-rate 1

auto vmbr0
iface vmbr0 inet static
    address  10.0.1.10
    netmask  255.255.255.0
    gateway  10.0.1.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address  10.0.2.10
    netmask  255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge_ports eth4 eth5
    bridge_stp off
    bridge_fd 0

auto vmbr3
iface vmbr3 inet manual
    bridge_ports eth3
    bridge_stp off
    bridge_fd 0
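One thing worth noting about this setup: a Linux bridge learns MAC addresses and forwards unicast frames only to the port where it learned the destination MAC, so mirrored traffic addressed to MACs that live elsewhere can get filtered rather than flooded to the container. Setting the bridge ageing time to 0 makes the bridge behave like a hub and flood everything. A possible vmbr3 stanza with that change (untested sketch; `bridge_ageing` is the bridge-utils ifupdown option, and the extra `up` line forces the slave NIC into promiscuous mode):

```
auto vmbr3
iface vmbr3 inet manual
    bridge_ports eth3
    bridge_stp off
    bridge_fd 0
    bridge_ageing 0
    up ip link set eth3 promisc on
```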

My /etc/network/interfaces on my LXC container:

Code:
$ cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.0.1.2
    netmask 255.255.255.0
    gateway 10.0.1.1

auto eth1
iface eth1 inet manual
        up ifconfig eth1 promisc up
        down ifconfig eth1 promisc down
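As an aside, `ifconfig` is deprecated on newer releases; the iproute2 equivalents of those two lines, plus a quick way to confirm the flag actually stuck, would be roughly:

```shell
# iproute2 equivalents of the ifconfig up/down lines
ip link set eth1 promisc on up
ip link set eth1 promisc off

# Verify: "promiscuity 1" should appear in the detailed output
ip -d link show eth1
```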
 
I am running into exactly the same issue. I do see ARP and broadcast traffic coming through, but it looks like the promiscuously captured unicast traffic is being dropped before it is forwarded to the container.
Anyone?
 

Hi

I realise this was over a year ago, but I have the exact same issue as you!

I'm seeing the ARP and broadcast traffic, but nothing else. Did you manage to find a solution?
 
Hi all,

I'm attempting to do something similar with a virtual switch appliance. I've created one PVE bridge per vSwitch port and set the bridge ageing to 0 as suggested here: https://forum.proxmox.com/threads/send-mirrored-traffic-into-guest-vm.48002/, but nothing goes through apart from ARP and broadcasts, which is weird. In my case, VM1 can get a DHCP address from the switch-controlling host, but nothing else gets through...
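For reference, the bridge ageing time can also be changed at runtime instead of in the interfaces file. A sketch, using the vmbrxx7 name from my example setup:

```shell
# Hub-like mode: forget learned MACs immediately, so every frame
# is flooded to all bridge ports (bridge-utils)
brctl setageing vmbrxx7 0

# Or the iproute2 equivalent
ip link set vmbrxx7 type bridge ageing_time 0
```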

ASCII landscape:

Code:
Switch Controlling VM -- vNIC4 -- vmbrxx7
                         vNIC5 -- vmbrxx8

Switch VM ------------- vNIC7 -- vmbrxx7
                        vNIC8 -- vmbrxx8
                        ... (8 ports / 8 bridges) ...
                        vNIC4 -- vmbrxx4

VM1 ------------------- vNIC1 -- vmbrxx4
Thanks for any potential leads =) perhaps that's not possible with bridges?

Cheers,
m.

Quick update: it's all working fine now. The switch VM didn't like the VirtIO Ethernet drivers; with e1000 everything went fine.
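For anyone else hitting this, the NIC model of an existing VM can be switched with `qm`. The VMID and bridge name below are just examples from my setup:

```shell
# Change the VM's first NIC from virtio to e1000 (101 is an example VMID)
qm set 101 -net0 e1000,bridge=vmbrxx7
```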
 
