Multicast

adamb

Am I correct in thinking that Proxmox uses multicast by default?

I am having issues with corosync retransmissions, so I decided to test my multicast traffic. On my DRBD/cluster network I have no multicast. How could that happen on a dedicated network with no switch in between? The interface on my LAN/management network has no issues with multicast, and that traffic crosses a number of my LAN switches. So I have to wonder: how in the world are my clusters even working if multicast doesn't work on the DRBD/cluster network?
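As a first sanity check, it may be worth confirming that the kernel even treats the cluster NIC as multicast-capable and that corosync's group shows up as joined. A minimal sketch, where eth1 is a placeholder for the DRBD/cluster interface (substitute your own device name):

```shell
# Sanity check (sketch): is the NIC multicast-capable, and which groups
# has the kernel joined on it? "eth1" is a placeholder for the
# DRBD/cluster interface.
check_mcast_if() {
    ip link show "$1" | head -n 1   # look for the MULTICAST flag
    ip maddr show dev "$1"          # joined groups; corosync uses 239.192.x.x
}
check_mcast_if eth1 2>/dev/null || echo "eth1 not present on this machine"
```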

root@proxmox2:~# asmping 224.0.2.1 10.211.47.1
asmping joined (S,G) = (*,224.0.2.234)
pinging 10.211.47.1 from 10.211.47.2
unicast from 10.211.47.1, seq=1 dist=0 time=0.235 ms
unicast from 10.211.47.1, seq=2 dist=0 time=0.204 ms
unicast from 10.211.47.1, seq=3 dist=0 time=0.219 ms
unicast from 10.211.47.1, seq=4 dist=0 time=0.104 ms
unicast from 10.211.47.1, seq=5 dist=0 time=0.227 ms

root@proxmox2:~# asmping 224.0.2.1 10.80.12.125
asmping joined (S,G) = (*,224.0.2.234)
pinging 10.80.12.125 from 10.80.12.130
unicast from 10.80.12.125, seq=1 dist=0 time=1.218 ms
multicast from 10.80.12.125, seq=1 dist=0 time=1.236 ms
unicast from 10.80.12.125, seq=2 dist=0 time=0.287 ms
multicast from 10.80.12.125, seq=2 dist=0 time=0.272 ms
unicast from 10.80.12.125, seq=3 dist=0 time=0.268 ms
multicast from 10.80.12.125, seq=3 dist=0 time=0.253 ms
root@proxmox2:~# cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster config_version="10" name="proxmox">
<cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<quorumd allow_kill="0" interval="3" label="proxmox_qdisk" master_wins="1" tko="10"/>
<totem token="54000"/>
<fencedevices>
<fencedevice agent="fence_ipmilan" ipaddr="10.80.12.126" lanplus="1" login="USERID" name="ipmi1" passwd="PASSW0RD" power_wait="5"/>
<fencedevice agent="fence_ipmilan" ipaddr="10.80.12.131" lanplus="1" login="USERID" name="ipmi2" passwd="PASSW0RD" power_wait="5"/>
</fencedevices>
<clusternodes>
<clusternode name="proxmox1" nodeid="1" votes="1">
<fence>
<method name="1">
<device name="ipmi1"/>
</method>
</fence>
</clusternode>
<clusternode name="proxmox2" nodeid="2" votes="1">
<fence>
<method name="1">
<device name="ipmi2"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm>
<pvevm autostart="1" vmid="105"/>
<pvevm autostart="1" vmid="103"/>
<pvevm autostart="1" vmid="101"/>
<pvevm autostart="1" vmid="100"/>
<pvevm autostart="1" vmid="104"/>
</rm>
</cluster>

root@proxmox1:~# pvecm s
Version: 6.2.0
Config Version: 10
Cluster Name: proxmox
Cluster Id: 14330
Cluster Member: Yes
Cluster Generation: 652
Membership state: Cluster-Member
Nodes: 2
Expected votes: 3
Quorum device votes: 0
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 8
Flags:
Ports Bound: 0 177 178
Node name: proxmox1
Node ID: 1
Multicast addresses: 239.192.55.50
Node addresses: 10.211.47.1

root@proxmox2:~# pvecm s
Version: 6.2.0
Config Version: 10
Cluster Name: proxmox
Cluster Id: 14330
Cluster Member: Yes
Cluster Generation: 652
Membership state: Cluster-Member
Nodes: 2
Expected votes: 3
Quorum device votes: 1
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 8
Flags:
Ports Bound: 0 177 178
Node name: proxmox2
Node ID: 2
Multicast addresses: 239.192.55.50
Node addresses: 10.211.47.2
 
Yes, you need working multicast by default.

I guess asmping isn't the best tool. It looks like the multicast traffic is there.

root@proxmox1:~# tcpdump -i eth0 port 5405
tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
15:36:52.822626 IP proxmox1.com.5404 > 239.192.55.50.5405: UDP, length 75
15:36:52.822683 IP proxmox1.com.5404 > 10.211.47.2.5405: UDP, length 107
15:36:52.822780 IP 10.211.47.2.5404 > proxmox1.com.5405: UDP, length 107
15:36:52.822841 IP proxmox1.com.5404 > 239.192.55.50.5405: UDP, length 1473
15:36:52.822871 IP proxmox1.com.5404 > 239.192.55.50.5405: UDP, length 1473
15:36:52.822907 IP proxmox1.com.5404 > 239.192.55.50.5405: UDP, length 1473
15:36:52.822935 IP proxmox1.com.5404 > 239.192.55.50.5405: UDP, length 1473
15:36:52.822951 IP proxmox1.com.5404 > 239.192.55.50.5405: UDP, length 566
15:36:52.822965 IP proxmox1.com.5404 > 10.211.47.2.5405: UDP, length 107
15:36:52.823167 IP 10.211.47.2.5404 > proxmox1.com.5405: UDP, length 107
15:36:52.823208 IP proxmox1.com.5404 > 10.211.47.2.5405: UDP, length 107
15:36:52.823325 IP 10.211.47.2.5404 > proxmox1.com.5405: UDP, length 107
15:36:52.823369 IP proxmox1.com.5404 > 10.211.47.2.5405: UDP, length 107
15:36:52.823501 IP 10.211.47.2.5404 > proxmox1.com.5405: UDP, length 107
15:36:52.823533 IP proxmox1.com.5404 > 10.211.47.2.5405: UDP, length 107
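For reference, when chasing this kind of thing at layer 2, it can help to know which Ethernet MAC the group address maps to: IPv4 multicast uses 01:00:5e plus the low 23 bits of the IP address. A small sketch, using the group from the pvecm output above:

```shell
# Sketch: an IPv4 multicast group maps onto Ethernet MAC 01:00:5e:xx:xx:xx,
# where the xx bytes are the low 23 bits of the IP address -- handy when
# checking a NIC's or switch's MAC/IGMP tables for the corosync group.
mcast_mac() {
    old_ifs=$IFS; IFS=.
    set -- $1                      # split the dotted quad into $1..$4
    IFS=$old_ifs
    printf '01:00:5e:%02x:%02x:%02x\n' $(($2 & 0x7f)) "$3" "$4"
}
mcast_mac 239.192.55.50            # -> 01:00:5e:40:37:32
```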
 
You can also use omping (https://fedorahosted.org/omping/);
you can install it from our pvetest repository.

Appreciate the input. The results don't look much better. It still looks like multicast is broken, and yet the cluster is obviously working.

root@proxmox1:~# omping 10.211.47.2 10.211.47.1
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
^C
10.211.47.2 : response message never received
root@proxmox1:~# omping proxmox1 proxmox2
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
proxmox2 : waiting for response msg
^C
proxmox2 : response message never received
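One detail that is easy to miss with omping: every host in the list has to run its own omping instance at the same time, and a lone instance just prints "waiting for response msg" forever. A sketch of the intended invocation, using this thread's cluster addresses (the -c 10 packet count is an arbitrary choice):

```shell
# Sketch: omping, unlike asmping, needs a peer -- every listed host must run
# its own omping instance concurrently, or the others just print
# "waiting for response msg". -c 10 limits the run to 10 packets per host.
cmd="omping -c 10 10.211.47.1 10.211.47.2"
echo "run on proxmox1 AND proxmox2 at the same time: $cmd"
```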
 
 
Looks like multicast is broken with omping as well. It's not adding up.

root@proxmox1:~# omping 10.211.47.2 10.211.47.1
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg
10.211.47.2 : waiting for response msg

It also seems that when I quote someone in a post, my post then has to be "approved by an administrator". If I don't quote, it doesn't have to be approved. Any ideas on that?
 

There are multiple reasons why a post may go to the moderation queue (e.g. spam protection). In any case, please never double-post; that just leads to a bigger moderation queue and therefore a longer wait before your posts show up.
 
Not really, but we are faced with a lot of networks that do not support IP multicast. Many of these issues are caused by switches that are misconfigured with regard to IP multicast.
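For networks that do have switches in the path, a common culprit is IGMP snooping enabled without an IGMP querier. On a Linux bridge the relevant knobs are exposed in sysfs; the sketch below assumes a hypothetical bridge named vmbr0 (physical switches expose the equivalent toggles in their own CLI or UI):

```shell
# Sketch: inspect IGMP-snooping settings on a Linux bridge via sysfs.
# "vmbr0" is a hypothetical bridge name -- substitute your own.
show_snooping() {
    for f in multicast_snooping multicast_querier; do
        p="/sys/class/net/$1/bridge/$f"
        if [ -r "$p" ]; then
            printf '%s=%s\n' "$f" "$(cat "$p")"
        else
            printf '%s: %s not readable (not a bridge?)\n' "$f" "$p"
        fi
    done
}
show_snooping vmbr0
```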
 
Sometimes it is a challenge to find the cause of network issues, or the bug.

Just to mention: if you can't figure it out, you can consider getting help from our commercial support team. They almost always find the cause of such issues.
 
That's what I seem to be reading. In my situation, though, there is no switch.
Are you sure that no iptables rules are active?
Your tcpdump does not prove that you have working multicast. In fact, it shows that only unicast packets are flowing between your nodes, since multicast packets are addressed to a multicast group and not to a specific host.
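One way to make the capture unambiguous is to filter for multicast destinations only, so unicast totem packets cannot hide a multicast black hole. A sketch, assuming eth0 as a placeholder interface (`ip multicast` is a standard pcap-filter primitive):

```shell
# Sketch: capture only multicast-destined corosync traffic, so unicast totem
# packets cannot mask a multicast black hole. "eth0" is a placeholder.
mcast_capture() {
    tcpdump -i "$1" -n 'udp port 5405 and ip multicast'
}
# usage: mcast_capture eth0   (run while the cluster is up)
```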
 
Exactly what I was hoping to hear! From the looks of iptables on both nodes, all UDP is allowed, and the output of iptables --list is identical on both. At this point I wouldn't mind disabling iptables altogether.

root@proxmox1:~# iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere ADDRTYPE match dst-type MULTICAST
ACCEPT udp -- anywhere anywhere state NEW multiport dports 5404,5405


Chain FORWARD (policy ACCEPT)
target prot opt source destination


Chain OUTPUT (policy ACCEPT)
target prot opt source destination
 
IMHO you are missing a rule for established traffic:
ACCEPT udp -- anywhere anywhere state RELATED,ESTABLISHED

Appreciate the input!

I added the following rule:

iptables -A INPUT -p udp -m state --state RELATED,ESTABLISHED -j ACCEPT

root@proxmox2:~# iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- anywhere anywhere state RELATED,ESTABLISHED


Chain FORWARD (policy ACCEPT)
target prot opt source destination


Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Multicast still seems broken on my 10GbE interfaces. What kills me is that it's fine on my LAN, which goes through a number of switches.

root@proxmox2:~# asmping 224.0.2.1 10.211.47.1
asmping joined (S,G) = (*,224.0.2.234)
pinging 10.211.47.1 from 10.211.47.2
unicast from 10.211.47.1, seq=1 dist=0 time=0.250 ms
unicast from 10.211.47.1, seq=2 dist=0 time=0.190 ms
unicast from 10.211.47.1, seq=3 dist=0 time=0.221 ms
unicast from 10.211.47.1, seq=4 dist=0 time=0.228 ms
unicast from 10.211.47.1, seq=5 dist=0 time=0.229 ms
unicast from 10.211.47.1, seq=6 dist=0 time=0.199 ms

root@proxmox2:~# asmping 224.0.2.1 10.80.12.125
asmping joined (S,G) = (*,224.0.2.234)
pinging 10.80.12.125 from 10.80.12.130
unicast from 10.80.12.125, seq=1 dist=0 time=1.289 ms
multicast from 10.80.12.125, seq=1 dist=0 time=1.305 ms
unicast from 10.80.12.125, seq=2 dist=0 time=0.249 ms
multicast from 10.80.12.125, seq=2 dist=0 time=0.234 ms
unicast from 10.80.12.125, seq=3 dist=0 time=0.206 ms
multicast from 10.80.12.125, seq=3 dist=0 time=0.220 ms
 
If you flush all INPUT rules, does it work then? (iptables -F INPUT)

What output do you get from this (before you flush, of course)?
iptables -S
 
Here it is. It didn't seem to make a difference.

root@proxmox1:~# iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -p udp -m state --state RELATED,ESTABLISHED -j ACCEPT
root@proxmox1:~# iptables -F INPUT
root@proxmox1:~# iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

root@proxmox2:~# asmping 224.0.2.1 10.211.47.1
asmping joined (S,G) = (*,224.0.2.234)
pinging 10.211.47.1 from 10.211.47.2
unicast from 10.211.47.1, seq=1 dist=0 time=0.188 ms
unicast from 10.211.47.1, seq=2 dist=0 time=0.210 ms
unicast from 10.211.47.1, seq=3 dist=0 time=0.165 ms
unicast from 10.211.47.1, seq=4 dist=0 time=0.185 ms
unicast from 10.211.47.1, seq=5 dist=0 time=0.209 ms
unicast from 10.211.47.1, seq=6 dist=0 time=0.209 ms
unicast from 10.211.47.1, seq=7 dist=0 time=0.205 ms
 
