Search results

  1. VM restarting when no quorum

    To add: on the datacenter / HA page there is quorum and the 2 nodes; one is master, the other is lrm. I do not have any resources added in the panel, but since we have a cluster of some kind, isn't there always some quorum, even with no HA?
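
    For context, the quorum state can be checked even with HA disabled; a minimal sketch with standard tools (node names are whatever your cluster uses):

      # Membership and "Quorate: Yes/No" as Proxmox sees it
      pvecm status
      # Corosync's own view of votequorum
      corosync-quorumtool -s
      # For 2-node clusters the docs suggest a QDevice as an external third vote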
  2. VM restarting when no quorum

    Hello and thanks Leo. Is this the only supported way of having 2 nodes? Yes, I do have HA disabled. If the node is fencing itself, is it normal that all the VMs reboot? Veikko
  3. VM restarting when no quorum

    Hi all! I have a 2-node setup, and I discovered slowness in the VMs on the other node. During the initial tests I realized that there was a network speed issue which was mirrored to all the VMs on the current node. I then migrated all VMs to the other node and shut down the faulty node for...
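
    As an aside, evacuating a node from the CLI is roughly the following (the VMIDs and the target node name are placeholders):

      # Live-migrate a VM to the healthy node
      qm migrate 100 pve2 --online
      # Containers use pct instead
      pct migrate 101 pve2
      # Then shut the faulty node down cleanly
      shutdown -h now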
  4. Resetting CEPH warnings

    Thank you a lot. Clear as daylight.
  5. Resetting CEPH warnings

    Hi all! I recently updated my cluster to 6.1 and did a Ceph update at the same time. Everything went smoothly, but one monitor crashed during the setup. It was nothing special, and everything works perfectly. Anyhow, since then my cluster has been in "HEALTH_WARN" state because of an error "1...
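
    If the warning is the common "daemons have recently crashed" one, a sketch of how it can be inspected and cleared with the standard Ceph CLI:

      # Show what exactly is behind HEALTH_WARN
      ceph health detail
      # List the recorded crashes, then archive them so the warning goes away
      ceph crash ls
      ceph crash archive-all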
  6. I broke my quorum, mis-configured corosync.conf

    And now I answer my own question. Just remembered how it's done:
      systemctl stop pve-cluster
      /usr/bin/pmxcfs -l
      [main] notice: forcing local mode (although corosync.conf exists)
    Then fix the configuration in the cluster file, and the cluster heals. I have done it a few times when...
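
    Spelled out as a sequence (a sketch following the standard Proxmox VE recovery steps, not a verbatim transcript of the thread):

      # Stop the cluster filesystem, then restart it in local mode
      systemctl stop pve-cluster
      /usr/bin/pmxcfs -l
      # /etc/pve is now writable locally; fix the cluster copy of the config
      nano /etc/pve/corosync.conf
      # Leave local mode and return to normal operation
      killall pmxcfs
      systemctl start pve-cluster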
  7. I broke my quorum, mis-configured corosync.conf

    This is from the manual: "This is not enough if corosync cannot start anymore. Here it is best to edit the local copy of the corosync configuration in /etc/corosync/corosync.conf so that corosync can start again. Ensure that on all nodes this configuration has the same content to avoid split...
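
    Roughly, the manual's approach translates to something like this on each node (a sketch; the exact edit depends on what was broken):

      # Fix the local copy so corosync can start again
      nano /etc/corosync/corosync.conf
      # Keep the file identical on all nodes, then restart the stack
      systemctl restart corosync
      systemctl restart pve-cluster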
  8. I broke my quorum, mis-configured corosync.conf

    Hi! I am upgrading from corosync 2 to corosync 3. I used the script to check that my settings are OK. I have a 2-ring corosync setup, and 2 of my older nodes had the corosync ring addresses in the hosts file, while in corosync.conf the host name was in use. Everything had worked OK, but the...
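
    For illustration, a node entry that pins the ring addresses to IPs instead of relying on /etc/hosts resolution could look like this (names and addresses are placeholders):

      node {
        name: pve01
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 192.168.2.51
        ring1_addr: 10.10.10.51
      }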
  9. Cluster slowing down and cutting connection - pvesr cpu 100%

    I have now shut down the defective node, as pvesr stalled the whole machine, making Ceph remarkably slow and thus slowing down all the VMs. Now Ceph is in a degraded state (3/1 configuration, so no data-loss problems). I will fire up the defective node once more, and try to gather some...
  10. Cluster slowing down and cutting connection - pvesr cpu 100%

    Hi! Now I have a little more info about it: after a restart my nodes seem to work fine for about 5-10 min. Then there's a process, pvesr, which starts to build up and quickly takes all processing cores to 100%. I searched the forums, and there was a similar case in February, but no solution there...
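
    A quick way to look at what pvesr (the storage replication runner, triggered by a systemd timer) is doing, as a sketch:

      # Replication jobs and their last run status
      pvesr status
      # The runner fires once a minute via this timer
      systemctl list-timers pvesr.timer
      # Temporarily take it out of the picture while debugging (re-enable afterwards)
      systemctl stop pvesr.timer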
  11. Cluster slowing down and cutting connection - pvesr cpu 100%

    Hi! I just updated a 3-node cluster to the latest version. I'm using community support, so it's 5.4-5. Detailed info below. Quickly after updating, I found out that I cannot start a container on one of the nodes. I was migrating the VMs off the nodes prior to restarting, and found out when I...
  12. Long IOwait on CEPH, rbd device stuck on 100%

    OK, I see. There had to be something wrong, because there were 5 rbd devices mapped and only 2 containers running on the host. It seems that a CT maps rbd like that, while a VM does not. One of the CT machines was hung, so I think that was the reason for the 100% utilization of the one rbd mount...
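
    The mapping can be verified directly, for example:

      # Kernel-mapped rbd devices (what containers use) and the pool/image behind each
      rbd showmapped
      # VMs normally go through qemu/librbd instead, so they do not appear here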
  13. Long IOwait on CEPH, rbd device stuck on 100%

    Hi there! I think I am having a configuration issue on my cluster. It is a 3-node system with 12 OSDs and a separate DB SSD per node. The iowait on one of the hosts is way up, to 15%, while the others have around 2%. Using iostat -x, I see 5 additional devices on that node, which are "rbd0-4"...
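
    For reference, a minimal way to narrow this down (the container ID below is a placeholder):

      # Per-device utilisation, refreshed every 5 seconds
      iostat -x 5
      # A container's config shows which storage its rootfs and mountpoints live on
      pct config 101 | grep -E 'rootfs|^mp'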
  14. Unable to start VM:s

    Hello! I did all of that. Still no success:
      2017-12-13 16:37:58 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve01' root@10.10.10.51 /bin/true
      2017-12-13 16:37:58 Host key verification failed.
      2017-12-13 16:37:58 ERROR: migration aborted (duration 00:00:00): Can't connect to...
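
    "Host key verification failed" here usually points at stale entries in the cluster-wide known_hosts; a hedged sketch of the usual remedy (host names/IPs as in the post):

      # Re-sync the cluster SSH known_hosts and node certificates
      pvecm updatecerts
      # Then re-run the exact check the migration uses
      /usr/bin/ssh -o BatchMode=yes -o HostKeyAlias=pve01 root@10.10.10.51 /bin/true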
  15. Unable to start VM:s

    Source host first:
      root@pve2:~# cat /etc/hosts
      127.0.0.1 localhost.localdomain localhost
      # New nodes
      10.10.10.51 pve01.toastpost.com pve01
      10.10.10.52 pve02.toastpost.com pve02
      10.10.10.53 pve03.toastpost.com pve03
      10.10.10.54 pve04.toastpost.com pve04
      192.168.2.51 pve01-corosync-r0.com...
  16. Unable to start VM:s

    I'm afraid that's not the case. I am able to ssh from every node to every node, but the problem still persists.
  17. Unable to start VM:s

    Hi all! I recently had a problem starting up a VM on an updated cluster (4.4 to 5.1). I did everything according to the guides, first updating and rebalancing the ceph pool and then doing a dist-upgrade one node at a time. Everything is documented very well and the process is straightforward...
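
    The per-node loop the guides describe is roughly the following (a sketch; the release-specific steps in the official upgrade guide still apply):

      # Keep Ceph from rebalancing while this node's OSDs are down
      ceph osd set noout
      # Upgrade and reboot the node
      apt update && apt dist-upgrade
      reboot
      # Once the node and its OSDs are back and the cluster is healthy
      ceph osd unset noout
      ceph -s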
