Recent content by gijsbert

  1. question multicast/unicast proxmox v6

    Sorry, I don't understand. What do you mean by "supported limits"? Maximum of 4 cluster nodes?
  2. question multicast/unicast proxmox v6

    So it is ok to do unicast in a cluster running 12 nodes using corosync3/proxmox6?
  3. question multicast/unicast proxmox v6

    On previous versions of Proxmox the advice was not to use unicast in a cluster with more than 4 nodes (see the unicast transport sketch after this list): === Due to increased network traffic (compared to multicast) the number of supported nodes is limited, do not use it with more than 4 cluster nodes. source...
  4. Losing quorum

    So what is your advice? Separate the multicast traffic onto another (internal) network over eth1 instead of the public eth0 (see the separate cluster network sketch after this list)? My network config, all on eth0:
    auto lo
    iface lo inet loopback
    iface eth0 inet manual
    iface eth1 inet manual
    auto vmbr0
    iface vmbr0 inet static
        address 213.132.140.96...
  5. Losing quorum

    Thanks spirit :-) totem.interface.0.mcastaddr (str) = 239.192.150.125
    If I do an "omping -m 239.192.150.125" on all nodes, it looks all OK, on each node I see things like:
    pmnode12 : unicast, xmt/rcv/%loss = 9/9/0%, min/avg/max/std-dev = 0.640/1.462/3.053/0.759
    pmnode12 : multicast...
  6. Losing quorum

    I am doing some further troubleshooting; with omping (on all nodes, see the omping sketch after this list) I see the following:
    virt011 : waiting for response msg
    virt012 : waiting for response msg
    virt013 : waiting for response msg
    virt014 : waiting for response msg
    virt016 : waiting for response msg
    virt017 : waiting for response...
  7. Losing quorum

    All nodes are installed in a single rack and the physical switch acts as an IGMP querier, with IGMP snooping enabled (see the IGMP querier sketch after this list).
  8. Losing quorum

    I have a running cluster (5.4-13) with 11 nodes and everything is up and running until I reboot one of the nodes. Attached are the corosync.conf and the pvecm status output. Everything looks OK. However, when I reboot one of the nodes, I lose quorum on a random other node once the rebooted node is...
  9. Question regarding cluster / quorate

    Dear reader, We have 11 nodes in a cluster and each node has 1 vote. Currently we are in the process of migrating this cluster to another datacentre, so a shutdown of all nodes is necessary. How can I make sure that I get quorum again once I start up the nodes in the new location (see the expected-votes sketch after this list)? Is it...
  10. undo create cluster

    Thanks for this great (and fast) response. proxmox rocks :)
  11. undo create cluster

    Dear T. Lamprecht, Thanks for your reply. It was a simple stand-alone and healthy cluster, so I removed the corosync.conf and related files as described (see the removal sketch after this list). It seems to work: the cluster status is gone and everything is still OK. One more question though. What happens when I reboot the server, will it start the...
  12. undo create cluster

    I was playing with some settings and created a cluster by doing:
    ===
    root@prox1:~# pvecm create test-cluster-01
    Corosync Cluster Engine Authentication key generator.
    Gathering 1024 bits for key from /dev/urandom.
    Writing corosync key to /etc/corosync/authkey.
    Writing corosync config to...
  13. incorrect .vmlist

    Fixed by rebooting one of the existing nodes
  14. incorrect .vmlist

    One of my existing nodes died, so I restored the VMs on a different node using different VM IDs. If I check the status of my cluster with "pvecm nodes" and "pvecm status", everything seems to be OK. However, if I grep for the old node name (virt016) I still see the node listed in a file named... (see the .vmlist note after this list)
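
Sketches referenced in the items above:

Unicast transport sketch (item 3): with corosync 2.x (Proxmox VE 5.x and earlier), unicast was selected through the totem transport option, which is where the old 4-node caveat applied; corosync 3 with kronosnet (Proxmox VE 6) uses unicast UDP by default, so no such option is needed there. A minimal corosync 2.x fragment, with a hypothetical bindnetaddr and the cluster name borrowed from item 12:

    totem {
      version: 2
      cluster_name: test-cluster-01
      transport: udpu              # force unicast UDP instead of multicast (corosync 2.x only)
      interface {
        ringnumber: 0
        bindnetaddr: 10.10.10.0    # hypothetical cluster subnet
      }
    }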
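
Separate cluster network sketch (item 4): one common approach, using a hypothetical 10.10.10.0/24 subnet, is to give eth1 on every node a private address and point corosync (bindnetaddr, or the per-node ring0_addr) at that network instead of the public vmbr0/eth0:

    # /etc/network/interfaces fragment -- per-node address, e.g. .11 on the first node
    auto eth1
    iface eth1 inet static
        address 10.10.10.11
        netmask 255.255.255.0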
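
omping sketch (items 5 and 6): the "waiting for response msg" lines only mean that the other nodes are not running omping yet; the tool has to be started on every node at roughly the same time, with the full node list as arguments. Node names are taken from the posts above; the count/interval/quiet flags are just one reasonable choice:

    # run in parallel on every node
    omping -c 600 -i 1 -q virt011 virt012 virt013 virt014 virt016 virt017
    # or test a specific multicast group, as in item 5
    omping -m 239.192.150.125 virt011 virt012 virt013 virt014 virt016 virt017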
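
IGMP querier sketch (item 7): when the switch-side querier turns out to be unreliable, a fallback that is sometimes tried is enabling the querier on the Proxmox bridge itself (bridge name vmbr0 taken from item 4; the setting is not persistent across reboots):

    echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier    # enable IGMP querier on the bridge
    cat /sys/class/net/vmbr0/bridge/multicast_snooping        # 1 = snooping enabled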
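
Expected-votes sketch (item 9): with all 11 nodes shut down for the move, the first nodes brought up at the new location will not be quorate on their own; the expected-votes count can be lowered temporarily so the partial cluster becomes usable, and corosync raises it again as the remaining nodes rejoin. A sketch:

    # on the first node that is up in the new datacentre
    pvecm status        # shows votes and quorum state
    pvecm expected 1    # temporarily lower expected votes so this node is quorate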
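
Removal sketch (items 11 and 12): the "undo create cluster" steps referred to in item 11 follow the usual pattern of stopping the cluster services, starting pmxcfs in local mode and deleting the corosync configuration. Roughly, and only sensible on a freshly created, empty cluster like this one:

    systemctl stop pve-cluster corosync
    pmxcfs -l                      # start the cluster filesystem in local mode
    rm /etc/pve/corosync.conf
    rm -rf /etc/corosync/*
    killall pmxcfs
    systemctl start pve-cluster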
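
.vmlist note (item 14): the file in question is the pmxcfs status file /etc/pve/.vmlist, which maps VM IDs to the node they live on. It is generated by the cluster filesystem, which is presumably why rebooting one of the existing nodes (item 13) made the stale virt016 entries go away; restarting the pve-cluster service on the affected node is the lighter-weight thing sometimes tried first:

    grep virt016 /etc/pve/.vmlist    # stale entries for the dead node, if any
    systemctl restart pve-cluster    # restarts pmxcfs, which rebuilds its status files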