Search results

  1. question multicast/unicast proxmox v6

    Sorry, I don't understand. What do you mean by "supported limits"? A maximum of 4 cluster nodes?
  2. question multicast/unicast proxmox v6

    So is it OK to do unicast in a cluster running 12 nodes using corosync3/proxmox6?
  3. question multicast/unicast proxmox v6

    On previous versions of proxmox the advice was not to use unicast in a cluster with more than 4 nodes: === Due to increased network traffic (compared to multicast) the number of supported nodes is limited, do not use it with more than 4 cluster nodes. source...
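
    For context: Corosync 3, as shipped with Proxmox 6, defaults to the kronosnet (knet) transport, which is unicast by design, so the old 4-node warning applied to the udpu transport of Corosync 2. A minimal sketch of what the totem section of /etc/pve/corosync.conf can look like; the cluster name and version counters are placeholders:
    ===
    totem {
      cluster_name: test-cluster-01
      config_version: 3
      version: 2
      ip_version: ipv4-6
      transport: knet
    }
    ===
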
  4. Losing quorum

    So what is your advice? Separate multicast traffic to another (internal) network over eth1 instead of public eth0? My network config, all on eth0: auto lo iface lo inet loopback iface eth0 inet manual iface eth1 inet manual auto vmbr0 iface vmbr0 inet static address 213.132.140.96...
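
    A common remedy is to give corosync a dedicated interface on a private subnet. A minimal sketch of /etc/network/interfaces under that assumption; the 192.168.10.0/24 addressing on eth1 is made up for illustration:
    ===
    auto lo
    iface lo inet loopback

    iface eth0 inet manual

    # dedicated, quiet network for cluster/multicast traffic
    auto eth1
    iface eth1 inet static
        address 192.168.10.11
        netmask 255.255.255.0

    # public bridge stays on eth0, as in the post
    auto vmbr0
    iface vmbr0 inet static
        address 213.132.140.96
        netmask 255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
    ===
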
  5. Losing quorum

    Thanks spirit :-) totem.interface.0.mcastaddr (str) = 239.192.150.125 If I do an "omping -m 239.192.150.125" on all nodes, it all looks OK; on each node I see things like: pmnode12 : unicast, xmt/rcv/%loss = 9/9/0%, min/avg/max/std-dev = 0.640/1.462/3.053/0.759 pmnode12 : multicast...
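
    The omping run quoted here exercises both unicast and multicast between all nodes and has to be started on every node at roughly the same time. A sketch, with node names as placeholders:
    ===
    # run the same command on every node in parallel
    omping -c 600 -i 1 -m 239.192.150.125 pmnode11 pmnode12 pmnode13
    ===
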
  6. Losing quorum

    I am doing some further troubleshooting; with omping (on all nodes) I see the following: virt011 : waiting for response msg virt012 : waiting for response msg virt013 : waiting for response msg virt014 : waiting for response msg virt016 : waiting for response msg virt017 : waiting for response...
  7. Losing quorum

    All nodes are installed in a single rack and the physical switch acts as an IGMP querier with IGMP snooping enabled.
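
    Whether snooping and the querier behave as expected can also be checked from the nodes themselves; a sketch using the kernel's bridge sysfs knobs, assuming the bridge is named vmbr0:
    ===
    # 1 = IGMP snooping active on the bridge, 0 = off
    cat /sys/class/net/vmbr0/bridge/multicast_snooping
    # let the bridge answer as IGMP querier itself if the switch stops doing so
    echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
    ===
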
  8. Losing quorum

    I have a running cluster (5.4-13) with 11 nodes and everything is up and running until I reboot one of the nodes. Attached are the corosync.conf and the pvecm status output. Everything looks OK. However, when I reboot one of the nodes, I lose quorum on a random other node once the rebooted node is...
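
    When quorum drops on a seemingly unrelated node like this, the membership log on that node is the first place to look; a sketch:
    ===
    pvecm status                          # Proxmox view of quorum and votes
    corosync-quorumtool -s                # corosync's own quorum state
    journalctl -u corosync --since "-1h"  # membership changes around the reboot
    ===
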
  9. Question regarding cluster / quorate

    Dear reader, We have 11 nodes in a cluster and each node has 1 vote. Currently we are in the process of migrating this cluster to another datacentre, so a shutdown of all nodes is necessary. How can I make sure that I will reach quorum once I start up the nodes in the new location? Is it...
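
    If not all nodes can come back at the same time in the new datacentre, the expected vote count can be lowered temporarily on a booted node so the nodes that are up become quorate again; a sketch, where the value should match the number of nodes actually running:
    ===
    pvecm expected 1   # accept quorum with a single vote until the others arrive
    ===
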
  10. undo create cluster

    Thanks for this great (and fast) response. proxmox rocks :)
  11. undo create cluster

    Dear T. Lamprecht, Thanks for your reply. It was a simple stand-alone and healthy cluster, so I removed the corosync.conf and files as described. It seems to work; the cluster status is gone and everything is still OK. One more question though: what happens when I reboot the server, will it start the...
  12. undo create cluster

    I was playing with some settings and created a cluster by doing: === root@prox1:~# pvecm create test-cluster-01 Corosync Cluster Engine Authentication key generator. Gathering 1024 bits for key from /dev/urandom. Writing corosync key to /etc/corosync/authkey. Writing corosync config to...
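
    The cleanup steps referred to in the replies above are roughly the following; a sketch, only safe on a stand-alone node whose clustered state you do not need to keep:
    ===
    systemctl stop pve-cluster corosync
    pmxcfs -l                      # restart the cluster filesystem in local mode
    rm /etc/pve/corosync.conf
    rm -rf /etc/corosync/*
    killall pmxcfs
    systemctl start pve-cluster
    ===
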
  13. incorrect .vmlist

    Fixed by rebooting one of the existing nodes.
  14. incorrect .vmlist

    One of my existing nodes died, so I restored the VMs on a different node using different VMIDs. If I check the status of my cluster with "pvecm nodes" and "pvecm status", everything seems to be OK. However, if I grep on the old node name (virt016) I still see the node listed in a file named...
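
    The file in question is part of the pmxcfs cluster filesystem and maps each VMID to the node that owns it; a sketch of how to inspect it, using the node name from the post:
    ===
    grep virt016 /etc/pve/.vmlist   # stale entries still pointing at the dead node
    ===
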
  15. ping result in DUP! messages

    Hello Richard, Thanks for looking into this issue. In my first post I did not put the real existing IPs. In my last post I did put the real existing IPs. They are in (logically) different subnets but physically the same network, correct! The netmask /24 should be correct as far as I know.
  16. ping result in DUP! messages

    OK, I have done a ping with a count of 250: 1) ping from 62.197.128.91 --> switch --> 62.197.128.109 (proxmox-server) 250 packets transmitted, 250 received, 0% packet loss, time 254983ms rtt min/avg/max/mdev = 0.086/0.203/0.317/0.046 ms ==> Everything seems to be OK 2) ping from 62.197.128.91 -->...
  17. ping result in DUP! messages

    We are having an issue where pinging a KVM VM results in DUP! replies. We are investigating this issue but have not found a solution yet. The situation is as follows: 1) ping from server1 to proxmox-node (no DUP! messages) 2) ping from server1 to VM on proxmox-node (DUP! messages) To...
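
    One way to narrow this down is to capture the echo replies on the bridge and compare their source MAC addresses; duplicated replies typically arrive from two different MACs. A sketch, with the interface name assumed and the IP taken from the follow-up post:
    ===
    tcpdump -eni vmbr0 'icmp and host 62.197.128.109'
    ===
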
  18. wearout indicator

    We have been running Proxmox on Intel SSDs for over a year now. Although there is not much disk activity, the wearout indicator is still 0%, and I'm wondering if this is correct. Attached are two screenshots of what we see in the GUI of proxmox. Can...
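
    The wearout percentage in the GUI is derived from SMART data, so it can be cross-checked on the command line; a sketch, with the device name as a placeholder:
    ===
    # Intel SSDs expose attribute 233 "Media_Wearout_Indicator"; its normalized
    # value starts at 100 and counts down, so 0% wearout after a year is plausible
    smartctl -A /dev/sda | grep -i wear
    ===
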
  19. ZFS installation

    But if I choose ZFS from the installer, will the main OS be put on ZFS or not? Or will the installer create a separate partition for ZFS?
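
    With ZFS selected, the installer puts the root filesystem itself on a ZFS pool, typically named rpool with the OS under a dataset such as rpool/ROOT/pve-1, rather than on a separate partition; a sketch of how to verify this after installation:
    ===
    zpool list rpool        # the pool created by the installer
    zfs list -r rpool/ROOT  # datasets holding the operating system
    findmnt /               # shows / mounted from the ZFS root dataset
    ===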