Hello!
I have a two node cluster.
NodeA in the cluster does not appear to have a multicast address. Running the following command gives this output:
# corosync-cmapctl -g totem.interface.0.mcastaddr
Can't get key totem.interface.0.mcastaddr. Error CS_ERR_NOT_EXIST
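One way I double-checked this (just a diagnostic; the grep pattern is my own):

```shell
# Dump all cmap keys and filter for the totem interface settings;
# on NodeA the mcastaddr key simply does not appear in the output.
corosync-cmapctl | grep totem.interface
```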
NodeB will show a multicast address when running this command.
NodeB was joined to the cluster created on NodeA.
Also, from the web GUI on NodeA you can manage the entire cluster without issues, but when logged into the web GUI on NodeB, you cannot manage NodeA: you just see various loading screens that eventually time out.
Multicast does not appear to be working; tests with omping and similar tools show errors. I believe this may be the cause.
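For reference, the tests I ran were along these lines (nodea/nodeb are placeholders for my actual hostnames; both commands must be started on both nodes at roughly the same time):

```shell
# Quick burst test: 10000 multicast packets at 1 ms intervals.
omping -c 10000 -i 0.001 -F -q nodea nodeb

# Longer ~10 minute test, useful for catching IGMP snooping timeouts
# that only show up after the snooping table entries expire.
omping -c 600 -i 1 -q nodea nodeb
```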
Any way to force NodeA to pull a multicast address? Or, am I understanding something incorrectly?
This error was mentioned in an earlier thread; the OP eventually upgraded his cluster to the next .X version and the error went away. No such luck in my case.
One item to mention: I have never rebooted NodeA since the cluster was created. There are production VMs running on this machine, so reboots must be avoided whenever possible.
Any ideas? Thank you in advance.
I have also looked into multicast-related items and have the following information:
- My switch supports Multicast and IGMP snooping, but does not support the querier service.
- I enabled the querier on the Linux bridge as outlined in the Multicast Notes article in the Proxmox wiki. Not sure if this is related? I thought NodeA should still have an address regardless.
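For completeness, this is roughly how I enabled the querier (assuming the bridge is named vmbr0; substitute your actual bridge name):

```shell
# Enable the IGMP querier on the Linux bridge via sysfs. Note this
# does not persist across reboots unless also added to
# /etc/network/interfaces.
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier

# Verify the setting took effect (should print 1).
cat /sys/class/net/vmbr0/bridge/multicast_querier
```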
If multicast is the problem, I could switch to unicast for the cluster, but I would like to find a way to do it that would not require a reboot if possible.
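If I understand correctly, the unicast variant would look something like this in corosync.conf (on Proxmox this lives at /etc/pve/corosync.conf). This is only a sketch based on the corosync documentation, not a tested config; the cluster name, network, and IPs are placeholders:

```
totem {
  version: 2
  secauth: on
  cluster_name: mycluster        # placeholder cluster name
  transport: udpu                # unicast UDP instead of multicast
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.1.0     # placeholder cluster network
  }
}

nodelist {
  node {
    ring0_addr: 192.168.1.10     # placeholder IP for NodeA
    nodeid: 1
  }
  node {
    ring0_addr: 192.168.1.11     # placeholder IP for NodeB
    nodeid: 2
  }
}
```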