Linux Bridge - VLAN & Multicast

_Dejan_

Hi,
After migrating from QNAP to Proxmox, one of my VMs, a TVHeadend server for IPTV, no longer receives IPTV streams...
I have a standard "Linux Bridge" which is "VLAN aware", and all VMs with different VLANs or VLAN trunks work normally except this one, which uses two network cards.
The first network card, ens3, is for the private network and the second one, ens4, is for IPTV.
If I run the ip address command on the old VM (on QNAP) and on the new VM (on Proxmox), both return the same output, so there is no issue with the VM configuration...
The switch configuration is OK and exactly the same as for the QNAP LAN port. The network card (Mellanox CX354) used on Proxmox was previously used on the QNAP and worked normally there...

Does Proxmox filter multicast on the bridge? Any idea what to check and why IPTV multicast doesn't work?

Proxmox VM:

[screenshot]

Code:
:~$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:aa:07:b5 brd ff:ff:ff:ff:ff:ff
    inet 10.60.5.40/24 brd 10.60.5.255 scope global dynamic ens3
       valid_lft 1678sec preferred_lft 1678sec
    inet6 fe80::5054:ff:feaa:7b5/64 scope link
       valid_lft forever preferred_lft forever
3: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:bd:09:98 brd ff:ff:ff:ff:ff:ff
    inet 10.64.118.122/16 brd 10.64.255.255 scope global ens4
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:febd:998/64 scope link
       valid_lft forever preferred_lft forever

QNAP VM:

[screenshot]

Code:
:~$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:aa:07:b5 brd ff:ff:ff:ff:ff:ff
    inet 10.60.5.40/24 brd 10.60.5.255 scope global dynamic ens3
       valid_lft 3573sec preferred_lft 3573sec
    inet6 fe80::5054:ff:feaa:7b5/64 scope link
       valid_lft forever preferred_lft forever
3: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:bd:09:98 brd ff:ff:ff:ff:ff:ff
    inet 10.64.118.122/16 brd 10.64.255.255 scope global ens4
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:febd:998/64 scope link
       valid_lft forever preferred_lft forever
 
I also tried running these commands:
Code:
echo 0 > /sys/class/net/vmbrX/bridge/multicast_snooping
and
Code:
echo 0 > /sys/class/net/vmbrX/bridge/multicast_router

But it didn't help. Do I need to restart something for these commands to take effect?
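If I understand correctly, these sysfs writes should take effect immediately; at least I can read the values back to confirm they were applied (a quick sketch, assuming the bridge is vmbr1):
Code:
# read the current bridge multicast settings back; 0 means snooping is off
cat /sys/class/net/vmbr1/bridge/multicast_snooping
cat /sys/class/net/vmbr1/bridge/multicast_router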

I tried capturing traffic on the host while starting an IPTV channel in the VM (it gets stuck buffering):
Code:
:~# tcpdump -i vmbr1 -v host 10.64.118.122
tcpdump: listening on vmbr1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
20:38:55.592177 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_ex, 0 source(s)]
20:38:56.350032 IP (tos 0x0, ttl 64, id 19748, offset 0, flags [DF], proto UDP (17), length 48)
    10.64.118.122.46793 > 10.64.255.255.65001: UDP, length 20
20:38:56.420104 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_ex, 0 source(s)]
20:38:56.560129 IP (tos 0x0, ttl 64, id 19792, offset 0, flags [DF], proto UDP (17), length 48)
    10.64.118.122.46793 > 10.64.255.255.65001: UDP, length 20
20:39:11.596337 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_in, 0 source(s)]
20:39:11.760803 IP (tos 0x0, ttl 64, id 22512, offset 0, flags [DF], proto UDP (17), length 48)
    10.64.118.122.56372 > 10.64.255.255.65001: UDP, length 20
20:39:11.970978 IP (tos 0x0, ttl 64, id 22546, offset 0, flags [DF], proto UDP (17), length 48)
    10.64.118.122.56372 > 10.64.255.255.65001: UDP, length 20
20:39:12.132245 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_in, 0 source(s)]
20:39:13.804153 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_ex, 0 source(s)]
20:39:14.436163 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_ex, 0 source(s)]
20:39:27.084143 IP (tos 0x0, ttl 64, id 23142, offset 0, flags [DF], proto UDP (17), length 48)
    10.64.118.122.39696 > 10.64.255.255.65001: UDP, length 20
20:39:27.294183 IP (tos 0x0, ttl 64, id 23160, offset 0, flags [DF], proto UDP (17), length 48)
    10.64.118.122.39696 > 10.64.255.255.65001: UDP, length 20
20:39:29.804150 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_in, 0 source(s)]
20:39:30.116129 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_in, 0 source(s)]
20:39:31.936179 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_ex, 0 source(s)]
20:39:32.096217 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_ex, 0 source(s)]
20:39:38.936296 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_in, 0 source(s)]
20:39:39.036194 IP (tos 0xc0, ttl 1, id 0, offset 0, flags [DF], proto IGMP (2), length 40, options (RA))
    10.64.118.122 > igmp.mcast.net: igmp v3 report, 1 group record(s) [gaddr 232.4.1.1 to_in, 0 source(s)]
20:39:42.410476 IP (tos 0x0, ttl 64, id 23445, offset 0, flags [DF], proto UDP (17), length 48)
    10.64.118.122.45475 > 10.64.255.255.65001: UDP, length 20
20:39:42.620791 IP (tos 0x0, ttl 64, id 23455, offset 0, flags [DF], proto UDP (17), length 48)
    10.64.118.122.45475 > 10.64.255.255.65001: UDP, length 20
^C
20 packets captured
20 packets received by filter
0 packets dropped by kernel
 
I have had recurrent issues with this as well. Eventually I started using Avahi on some of my VMs.
 
@vesalius can you explain more? What exactly do you mean by using Avahi? Using Avahi as some kind of multicast proxy?

I hope that someone can point me in some direction as to what I can try/check...
 
Yes, as a multicast proxy across the dysfunctional Proxmox Linux bridge into my network. I use it for my HomeBridge and Scrypted VMs, as multicast from neither of them works with HomeKit without Avahi. Unfortunately, I can't give you any generic setup advice, as both of those projects have specifically built-in functionality to take advantage of a standard avahi/dbus install.
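For the HomeKit/mDNS case, the piece doing the work is Avahi's reflector, which re-announces mDNS between interfaces. A minimal sketch of /etc/avahi/avahi-daemon.conf (my own generic example, not something pulled from either project):
Code:
[reflector]
# re-broadcast mDNS (224.0.0.251) announcements between the interfaces avahi listens on
enable-reflector=yes

Note that this only covers mDNS, not arbitrary multicast groups like your IPTV streams, so it may not translate directly to your case.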
 
Turning off snooping on the bridge seems to be the recommended approach, as you did. I was reading up on the multicast_router setting, as this one was new to me; the default seems to be 1, but maybe 2 might work better for you?

echo 2 > /sys/class/net/vmbrX/bridge/multicast_router
multicast_router
This allows the user to forcibly enable/disable ports as having multicast routers attached. A port with a multicast router will receive all multicast traffic.

The value 0 disables it completely. The default is 1 which lets the system automatically detect the presence of routers (currently this is limited to picking up queries), and 2 means that the ports will always receive all multicast traffic.
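From what I can tell, the same knob also exists per bridge port under brif; a sketch, assuming the bridge is vmbr1 and the physical uplink port is called enp35s0 (adjust the names to your setup):
Code:
# bridge-level setting: 2 = treat as always having a multicast router attached
echo 2 > /sys/class/net/vmbr1/bridge/multicast_router
# per-port variant, e.g. for the uplink the IPTV stream arrives on
echo 2 > /sys/class/net/vmbr1/brif/enp35s0/multicast_router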
 
Thanks for your reply and hints.
I tried setting:
Code:
:~# echo 0 > /sys/class/net/vmbr1/bridge/multicast_snooping
:~# echo 2 > /sys/class/net/vmbr1/bridge/multicast_router

But it still does not work. Do I need to do something else after making these changes (like a network restart or host reboot)? Because immediately after the change, nothing different happens...

I checked the MikroTik switch settings; IGMP Snooping is enabled and uses these settings:

[screenshot]

And on the switch the detected IGMP querier is sfp-sfpplus1-FTTH, which is the WAN port (optical cable from the ISP):

[screenshot]

When I start the VM on QNAP and start an IPTV stream from a client, which requests udp://@232.4.1.1:5002 from the server, I see that the MDB on the switch gets a record:

[screenshot]

But when I turn on the VM on Proxmox, immediately after boot I get this:

[screenshot]

When I try to open the stream on a client, it does not change and does not add a record...
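One thing I can compare on the Proxmox host itself is the bridge's own multicast database; a sketch, assuming the IPTV traffic goes through vmbr1:
Code:
# list the multicast groups the Linux bridge has learned, and on which ports
bridge mdb show dev vmbr1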
 
The IGMP protocol is used to filter multicast traffic instead of flooding it on all ports.

The IGMP querier broadcasts IGMP query packets, and devices that want multicast respond to the query; when the switch sees the response, it adds that physical port to the IGMP multicast group.

So, if IGMP filtering is enabled on your switch, you need all devices to respond to IGMP queries, and you need to have the querier on the network where the multicast traffic is sent.

If you can't disable IGMP filtering on your MikroTik, you really need that VM to respond to IGMP queries.

You can keep multicast snooping off on the Proxmox host bridge.

Have you checked inside the VM with tcpdump that you receive IGMP queries and that the VM is sending IGMP responses?
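Something along these lines, run inside the VM on the IPTV interface, should show both (assuming the interface is still named ens4):
Code:
# capture only IGMP on the IPTV NIC: queries should come from the querier,
# membership reports should come from the VM itself
tcpdump -i ens4 -vv igmp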
 
@spirit Thanks for reply.

I will try to describe everything in more detail...

IPTV channels are streamed from the ISP to the STB (the box connected to the TV), and they can also be streamed to the TVHeadend server, which transcodes them and streams them over a TCP connection... so all those addresses like 232.4.1.1 are on the ISP side...

The switch configuration has not changed for a long time. It was the same when I used an Odroid SBC as the TVHeadend server to get IPTV channels from the ISP, and later when I moved the server to QNAP (QEMU/KVM) as a VM; now I have migrated that VM to Proxmox. So the VM on QNAP is the same VM as the one on Proxmox, nothing has been changed... The only change is the host (QNAP vs Proxmox); on both hosts I use a virtio network card and the same physical Mellanox card...

Disabling IGMP Snooping on the switch did not help.

I captured traffic on the QNAP VM and on the Proxmox VM.

VM tcpdump on the IPTV interface ens4 when running on Proxmox:

[screenshot]

VM tcpdump on the IPTV interface ens4 when running on QNAP:

[screenshot]

In the middle there are a lot of packets for the MPEG stream and a few packets like this:

[screenshot]

and then when I stop the stream:

[screenshot]


The only thing I found that is different is IGMPv3 (Proxmox) vs IGMPv2 (QNAP)...
 
After some additional testing I set force_igmp_version to 2 in the VM:
Code:
echo 2 > /proc/sys/net/ipv4/conf/ens4/force_igmp_version
sudo sysctl -p

and got this:

[screenshot]

Now I get IGMPv2, but the stream still doesn't work.
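In case it helps: to make this survive a reboot, a sysctl drop-in should work (just a sketch, assuming the interface keeps the name ens4):
Code:
# /etc/sysctl.d/99-force-igmpv2.conf
net.ipv4.conf.ens4.force_igmp_version = 2

After creating the file, sysctl --system (or a reboot) applies it.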
 
Today I tried an OVS Bridge and it works normally, so there must be a bug/issue with the Linux Bridge and multicast in PVE...

Linux bridge:
[screenshot]

OVS bridge:
[screenshot]

In the virtual machine I only changed the bridge interface used for the video VLAN from vmbr1 (Linux Bridge) to vmbr0 (OVS Bridge), and IPTV streaming started working...
[screenshot]

On the switch, the eno1 and enp35s0 interfaces have the same settings...
 
Agreed, something is broken with multicast over Proxmox Linux bridges.

I will test the couple of VMs/LXCs currently needing multicast with OVS and report back. What does your OVS section in /etc/network/interfaces look like?
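For reference, what I would expect a minimal OVS bridge stanza on PVE to look like is roughly this (interface names are just placeholders, and it assumes the openvswitch-switch package is installed):
Code:
auto eno1
iface eno1 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eno1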
 
Hi.

Sorry for my English language. I use a translator.

Have you solved it yet? If not, you are doing everything right except for one thing: VLAN aware.

You need to turn off multicast snooping on the tagged network, but you are doing it on an untagged bridge...

Create a new bridge for IPTV that goes directly to the tagged network, then turn off multicast snooping on this bridge and use it for IPTV in the VM.
[screenshot]

then

echo 1 > /sys/devices/virtual/net/vmbr3999/bridge/multicast_querier
echo 0 > /sys/class/net/vmbr3999/bridge/multicast_snooping

and add post-up lines to /etc/network/interfaces so that the settings are re-applied after the server is restarted:

post-up ( echo 1 > /sys/devices/virtual/net/vmbr3999/bridge/multicast_querier )
post-up ( echo 0 > /sys/class/net/vmbr3999/bridge/multicast_snooping )
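Put together, the bridge definition in /etc/network/interfaces could look roughly like this (the physical NIC name is just an example, the VLAN tag 3999 is from your network):
Code:
auto vmbr3999
iface vmbr3999 inet manual
        bridge-ports enp35s0.3999
        bridge-stp off
        bridge-fd 0
        post-up ( echo 1 > /sys/devices/virtual/net/vmbr3999/bridge/multicast_querier )
        post-up ( echo 0 > /sys/class/net/vmbr3999/bridge/multicast_snooping )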

Ivan
 
I solved it by using an OVS Bridge.

Your way is not OK for me, because then my VM would have two interfaces, one for IPTV traffic and another for LAN traffic (client connections)... This means I would need to reconfigure and edit all the channel configurations one by one... Disabling VLAN aware is also not an option, because other VMs need VLANs on the same interface (some VMs need access to multiple VLANs)...
 
Hi.

When you have it solved, then it's OK; there's no point in changing it.

You wouldn't have had to change almost anything. After creating the new interface (bridge) in Proxmox, you would only have had to change the interface used for the IPTV network in the VM configuration and turn off VLAN aware on it.

You could continue to use VLAN aware for the other VMs.

The reason you failed to get multicast working on network 3999 is that you do not have a system interface (bridge) for that network. QNAP had this interface. :-)

But that's just for information; you've solved it, so it's fine.

Ivan
 
