SDN Multicast for VXLAN

Mar 15, 2022
Hi all,

Currently I have a lab with a VXLAN setup using unicast, but I would like to know how I can move it to multicast, as I would like to use multicast in production; the Linux docs say multicast is preferred over unicast.

Has anybody got any experience either way with this in production?
 
Well, with multicast VXLAN, broadcast traffic (like ARP) is only flooded once on the network.

With unicast VXLAN, the broadcast traffic is sent once for each VXLAN tunnel, i.e. once per remote VTEP.

But for normal unicast traffic it is the same: the traffic is only sent once, between the source and destination VTEP.

(Note that multicast VXLAN will not work across the internet.)

Currently, the SDN plugin doesn't have an option to enable multicast.
 
Hello.

As far as I can find out, some examples related to VXLAN and multicast can be found here (I didn't test them):
https://git.proxmox.com/?p=pve-docs.git;a=blob_plain;f=vxlan-and-evpn.adoc;hb=HEAD

As I understand the config, the difference is (inside the VXLAN interface section):

In unicast:

iface vxlan2 inet manual
        . . .
        vxlan_remoteip 192.168.0.2
        vxlan_remoteip 192.168.0.3
        . . .

While in the multicast config:

iface vxlan2 inet manual
        . . .
        vxlan-svcnodeip 225.20.1.1
        . . .
 
You also need to specify the physical interface with:
"vxlan-physdev ..."

I currently haven't implemented multicast, because I didn't find a simple way to define the physical interface on the zone, as it can be different on each node.

Do you really need a multicast implementation?

Note that if broadcast and BUM traffic is really a problem for you, you can also use bgp-evpn. (It uses unicast, but MAC addresses are exchanged through BGP, so there is no broadcast traffic.)
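For what it's worth, a bgp-evpn setup boils down to FRR exchanging MAC/VNI routes over BGP instead of flooding to learn them; a rough, hypothetical sketch of the FRR side only (the AS number 65000 and the peer 192.168.0.2 are made-up placeholders):

router bgp 65000
 neighbor 192.168.0.2 remote-as 65000
 !
 address-family l2vpn evpn
  neighbor 192.168.0.2 activate
  ! advertise local VNIs and MAC/IP routes over BGP, so no data-plane flooding for learning
  advertise-all-vni
 exit-address-family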
 
Multicast within VXLAN is a very good option; nicely described here:
https://www.slideshare.net/enakai/how-vxlan-works-on-linux

And here (on the VMware side, but it may well be the same in Proxmox VE) is a good introduction to the issues with/without VXLAN multicast:
https://blogs.vmware.com/vsphere/2013/05/vxlan-series-multicast-usage-in-vxlan-part-3.html
 
Yes, I know the differences and benefits.

But do you really need it? Do you already run VXLAN with unicast and have problems?
Do you need to use VXLAN between a lot of nodes? (Because you won't see the difference with 10-20 nodes.)

Because multicast can sometimes be difficult to debug, needs IGMP queriers on your network, ...
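For example, if nothing on the physical network provides a querier, ifupdown2 can enable one on the bridge carrying the underlay; an untested sketch, with vmbr0, eno1 and 192.168.0.1/24 as placeholders:

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        bridge-ports eno1
        # keep IGMP snooping on and let the bridge send IGMP queries itself
        bridge-mcsnoop 1
        bridge-mcquerier 1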

And for bigger numbers of nodes, I'd recommend using bgp-evpn.
 
For now, I have no problems, but reading that it is in use in VMware and OpenStack, I was thinking that there must be a bigger reason for it (for example, it is "lighter" on MAC learning, etc.)
 
If you really want to go lighter on MAC learning, you can use bgp-evpn: you'll have zero learning flood. (Because ARP/ND are filtered, and MAC addresses are pushed to the different hosts' bridges through the BGP protocol.)

About multicast: it's older than the unicast implementation, and it was the only mode supported by Linux kernels < 3.9.
But it doesn't scale well, because you need one multicast group for each VXLAN ID
(and a lot of physical switches have a limited number of multicast groups).
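To make that concrete, each VNI ends up with its own group and its own stanza, roughly like this (made-up addresses and NIC name):

auto vxlan2
iface vxlan2 inet manual
        vxlan-id 2
        vxlan-svcnodeip 225.20.1.2
        vxlan-physdev eno1

auto vxlan3
iface vxlan3 inet manual
        vxlan-id 3
        # a different multicast group for every additional VNI
        vxlan-svcnodeip 225.20.1.3
        vxlan-physdev eno1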

About OpenStack:

https://wiki.openstack.org/wiki/L2population_blueprint#Handling_VXLAN_with_multicast
"
As explained earlier, multicast-based VXLAN is not really a feature as it wont scale, but it would anyway be necessary to support it for two puposes:

Some networks may have to be interconnected with 3rd party appliances which uses multicast-based VXLAN. For that puporse, having the ability to specify a multicast group as an extended provider attribute could be a good solution. To support broadcast emulation in pre-3.9 linux kernel VXLAN implementation, we’ll have to rely on multicast. For that purpose, providing the multicast group as an agent configuration parameter as proposed in vxlan-linuxbridge blueprint could provide a good migration path, as this option could be removed once all the agents support edge replication for broadcast emulation.

As OVS VXLAN implementation doesn’t support multicast for now, one solution could be to use iptables rules to map a virtual tunnel IP to a multicast address:"
 
So what you're saying then is keep away from Open vSwitch and we should be fine?
 
I just said that multicast was used in the past because it was the only implementation of VXLAN.

Multicast VXLAN was never implemented in Open vSwitch (only in the Linux bridge).

Multicast is complex because you need one multicast address for each VNI. (It can be hard to debug, and doesn't scale well with physical switches.)

So unicast is better/simpler, and if you need to scale with a lot of hosts and VMs (with a lot of broadcast traffic), you can use bgp-evpn to avoid broadcast and BUM traffic.
 
