In previous versions of Proxmox the advice was not to use unicast in a cluster with more than 4 nodes:
===
Due to increased network traffic (compared to multicast) the number of supported nodes is limited; do not use it with more than 4 cluster nodes.
source...
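If I read the docs right, with corosync 2.x unicast is still selected via the transport option in the totem section of /etc/pve/corosync.conf; a minimal sketch with made-up values (cluster name, config_version and bind network are placeholders, and config_version has to be bumped whenever the file is edited):
===
totem {
  version: 2
  cluster_name: mycluster
  config_version: 4
  transport: udpu
  interface {
    ringnumber: 0
    bindnetaddr: 10.10.10.0
  }
}
===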
So what is your advice? Separate the multicast traffic onto another (internal) network over eth1 instead of the public eth0? My network config, currently all on eth0:
auto lo
iface lo inet loopback
iface eth0 inet manual
iface eth1 inet manual
auto vmbr0
iface vmbr0 inet static
address 213.132.140.96...
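If the advice is indeed to separate the cluster traffic, I guess eth1 would get its own internal address, something like the sketch below, and corosync would then bind to that network (the 10.10.10.x addressing is just an example):
===
auto eth1
iface eth1 inet static
    address 10.10.10.11
    netmask 255.255.255.0
===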
Thanks spirit :-)
totem.interface.0.mcastaddr (str) = 239.192.150.125
If I do an "omping -m 239.192.150.125" on all nodes, it all looks OK; on each node I see things like:
pmnode12 : unicast, xmt/rcv/%loss = 9/9/0%, min/avg/max/std-dev = 0.640/1.462/3.053/0.759
pmnode12 : multicast...
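For a longer test I run omping on all nodes in parallel with something like this (node names are examples, adjust to your own; if I remember the options correctly, the first line is a short flood test and the second runs for roughly ten minutes, which should also catch IGMP snooping timeouts):
===
omping -c 10000 -i 0.001 -F -q pmnode11 pmnode12 pmnode13
omping -c 600 -i 1 -q pmnode11 pmnode12 pmnode13
===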
I am doing some further troubleshooting, with omping (on all nodes) I see the following:
virt011 : waiting for response msg
virt012 : waiting for response msg
virt013 : waiting for response msg
virt014 : waiting for response msg
virt016 : waiting for response msg
virt017 : waiting for response...
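The "waiting for response msg" lines on every node suggest the multicast packets never arrive, which as far as I know usually points at IGMP snooping on the physical switch or on the Linux bridge. A quick check and a temporary test on one node (assuming vmbr0 is the bridge):
===
# 1 = IGMP snooping enabled on the bridge, 0 = disabled
cat /sys/class/net/vmbr0/bridge/multicast_snooping
# disable it only for testing; this is not persistent across reboots
echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping
===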
I have a running cluster (5.4-13) with 11 nodes and everything is up and running until I reboot one of the nodes. Attached are the corosync.conf and the pvecm status output. Everything looks OK. However, when I reboot one of the nodes, I lose quorum on a random other node once the rebooted node is...
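When it happens, I check quorum and the corosync logs on the node that drops out with:
===
pvecm status
corosync-quorumtool -s
journalctl -u corosync -u pve-cluster --since "10 min ago"
===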
Dear reader,
We have 11 nodes in a cluster and each node has 1 vote. Currently we are in the process of migrating this cluster to another datacentre, so a shutdown of all nodes is necessary.
How can I make sure that the cluster becomes quorate once I start up the nodes in the new location? Is it...
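If my math is right, with 11 nodes of 1 vote each the cluster needs 6 votes for quorum, so once at least 6 nodes are back up it should become quorate on its own. If I have to work with fewer nodes for a while, I understand that the expected votes can be lowered temporarily on one node as a (careful) workaround:
===
# makes the current partition quorate even with a single node; use with care
pvecm expected 1
===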
Dear T. Lamprecht,
Thanks for your reply. It was a simple, healthy stand-alone cluster, so I removed corosync.conf and the related files as described. It seems to work: the cluster status is gone and everything is still OK.
One more question though: what happens when I reboot the server? Will it start the...
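What I plan to check after the reboot, just to make sure it stays a stand-alone node:
===
systemctl status pve-cluster corosync
pvecm status
===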
I was playing with some settings and created a cluster doing:
===
root@prox1:~# pvecm create test-cluster-01
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to...
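For completeness, I assume the next step on a second node would be something like this (prox2 and the address placeholder are mine):
===
root@prox2:~# pvecm add <ip-of-prox1>
root@prox2:~# pvecm status
===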
One of my existing nodes died, so I restored the VMs on a different node using different VM IDs. If I check the status of my cluster with "pvecm nodes" and "pvecm status", everything seems to be OK.
However, if I grep for the old node name (virt016), I still see the node listed in a file named...
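I assume the leftover entries can be cleaned up with delnode and by removing the old node's directory under /etc/pve, something like:
===
pvecm delnode virt016
# only once you are sure nothing references the old node anymore
rm -r /etc/pve/nodes/virt016
===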
Hello Richard,
Thanks for looking into this issue.
In my first post I did not use the real IPs; in my last post I did.
They are in logically different subnets but physically on the same network, correct!
The netmask /24 should be correct as far as I know.
OK, I have done a ping with a count of 250:
1) ping from 62.197.128.91 --> switch --> 62.197.128.109 (proxmox-server)
250 packets transmitted, 250 received, 0% packet loss, time 254983ms
rtt min/avg/max/mdev = 0.086/0.203/0.317/0.046 ms
==> Everything seems to be OK
2) ping from 62.197.128.91 -->...
We are having an issue where pinging a KVM VM results in DUP! replies. We are investigating the issue but have not found a solution yet. The situation is as follows:
1) ping from server1 to proxmox-node (no dup messages)
2) ping from server1 to VM on proxmox-node (DUP! messages)
To...
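I guess the next step is to capture the ICMP traffic on the bridge and compare the source MAC addresses of the replies; two different MACs answering the same request would point at a duplicate interface or a loop (vmbr0 and the VM IP are placeholders for our setup):
===
tcpdump -e -n -i vmbr0 icmp and host <vm-ip>
===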
We have been running Proxmox on Intel SSDs for over a year now. Although there is not much disk activity, the wearout indicator is still 0%, and I'm wondering if this is correct. Attached are two screenshots of what we see in the Proxmox GUI. Can...
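I guess one way to cross-check the GUI value is to look at the SMART attributes directly (the device name is just an example; on Intel SSDs the wear attribute is usually called Media_Wearout_Indicator):
===
smartctl -a /dev/sda | grep -i wear
===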