I have a VLAN provided by Juniper switches hosted at Hetzner (ZA). They have IGMP snooping enabled, so I need to have a multicast querier running. However, I run into a problem when installing Proxmox 5.2. (Note: this may be similar under previous versions, but I have my own servers and switches in the other clusters, so IGMP snooping doesn't apply there.)
When I check if multicast is available with "corosync-cmapctl -g totem.interface.0.mcastaddr", I get an error, since corosync is not configured yet.
Code:
root@yster5:~# corosync-cmapctl -g totem.interface.0.mcastaddr
Failed to initialize the cmap API. Error CS_ERR_LIBRARY
Also
Code:
root@yster5:~# omping -c 600 -i 1 -q yster4 yster5
yster4 : waiting for response msg
yster4 : joined (S,G) = (*, 232.43.211.234), pinging
yster4 : waiting for response msg
yster4 : server told us to stop
yster4 : unicast, xmt/rcv/%loss = 31/31/0%, min/avg/max/std-dev = 0.086/0.169/0.218/0.034
yster4 : multicast, xmt/rcv/%loss = 31/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000
So multicast packets are being dropped, because no multicast querier is active on the segment yet.
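As a side note, this failure state is easy to detect mechanically; here is a minimal sketch that pulls the multicast loss percentage out of an omping summary line (the sample line is copied from the output above):

```shell
#!/bin/sh
# Sample omping summary line (copied from the run above).
line='yster4 : multicast, xmt/rcv/%loss = 31/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000'

# Extract the %loss field (the third number in xmt/rcv/%loss).
loss=$(printf '%s\n' "$line" | sed -n 's|.*%loss = [0-9]*/[0-9]*/\([0-9]*\)%.*|\1|p')

echo "$loss"
```

Anything other than 0 here means the group traffic is being dropped somewhere between the nodes.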
However, as soon as I configure corosync on the first node with "pvecm create <clustername>", I can run omping and corosync-cmapctl.
The problem is that I can't add the 2nd or 3rd node to the initial node without first somehow configuring corosync. If I add the node anyway with "pvecm add yster4" from yster5 (with yster4 being the first node), it can't reach quorum because yster4 somehow loses its ability to receive the multicast packets. Leaving yster5 running, if I reboot yster4, then as soon as yster4 is back, yster5 thinks it has achieved quorum, but it actually hasn't. Now multicast packets are not received by yster4, and omping shows all packets are lost.
From yster4:
Code:
root@yster4:~# omping -c 600 -i 1 -q yster4 yster5
yster5 : waiting for response msg
yster5 : waiting for response msg
yster5 : waiting for response msg
yster5 : joined (S,G) = (*, 232.43.211.234), pinging
^C
yster5 : unicast, xmt/rcv/%loss = 311/311/0%, min/avg/max/std-dev = 0.075/0.167/0.235/0.039
yster5 : multicast, xmt/rcv/%loss = 311/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000
and yster5:
Code:
root@yster5:~# omping -c 600 -i 1 -q yster4 yster5
yster4 : waiting for response msg
yster4 : joined (S,G) = (*, 232.43.211.234), pinging
yster4 : waiting for response msg
yster4 : server told us to stop
yster4 : unicast, xmt/rcv/%loss = 312/312/0%, min/avg/max/std-dev = 0.072/0.167/0.218/0.034
yster4 : multicast, xmt/rcv/%loss = 312/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000
Is there a way to get past this and "activate" (for lack of a better term) the multicast querier properly?
For now, I have changed corosync.conf to instruct totem to use udpu (unicast UDP) as the transport.
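For reference, this is the sort of totem stanza I mean, a sketch of /etc/pve/corosync.conf where only the transport line matters; the cluster name, version, and network address are placeholders:

```
totem {
  # placeholders: use your own cluster name / config_version
  cluster_name: mycluster
  config_version: 2
  version: 2
  # unicast UDP instead of multicast
  transport: udpu
  interface {
    ringnumber: 0
    # placeholder: the cluster network
    bindnetaddr: 10.0.0.0
  }
}
```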
My network config shows multicast querier as active.
HOWEVER, if I disable snooping, then things change.
Code:
root@yster5:~# cat /sys/devices/virtual/net/vmbr0/bridge/multicast_querier
1
root@yster4:~# cat /sys/class/net/vmbr0/bridge/multicast_snooping
0
root@yster4:~# omping -c 100 -i 1 -q yster4 yster5
yster5 : waiting for response msg
yster5 : waiting for response msg
yster5 : joined (S,G) = (*, 232.43.211.234), pinging
yster5 : given amount of query messages was sent
yster5 : unicast, xmt/rcv/%loss = 100/100/0%, min/avg/max/std-dev = 0.069/0.171/0.217/0.036
yster5 : multicast, xmt/rcv/%loss = 100/100/0%, min/avg/max/std-dev = 0.077/0.191/0.266/0.044
Code:
root@yster5:~# cat /sys/class/net/vmbr0/bridge/multicast_snooping
0
root@yster5:~# omping -c 100 -i 1 -q yster4 yster5
yster4 : waiting for response msg
yster4 : joined (S,G) = (*, 232.43.211.234), pinging
yster4 : given amount of query messages was sent
yster4 : unicast, xmt/rcv/%loss = 100/100/0%, min/avg/max/std-dev = 0.087/0.170/0.212/0.032
yster4 : multicast, xmt/rcv/%loss = 100/99/1% (seq>=2 0%), min/avg/max/std-dev = 0.092/0.192/0.261/0.040
According to this thread, disabling snooping should have fixed the problem, then? https://unix.stackexchange.com/ques...cast-snooping-and-why-does-it-break-upnp-dlna
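For anyone trying the same thing: the sysfs toggles above do not survive a reboot, so one way to persist them is via post-up hooks. This is a sketch assuming the bridge is vmbr0 and an ifupdown-style /etc/network/interfaces; the address and bridge_ports values are placeholders:

```
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    # disable IGMP snooping and keep the querier on for this bridge
    post-up echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping
    post-up echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
```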