Search results

  1.

    PVE 5 HA cluster with iSCSI multipath shared storage

    Hi, I am creating an environment; here are the details: HP blade servers, a 3-node HA cluster, SAN iSCSI multipath shared storage, and two 10 Gb NICs configured as a bond. My question: is a bond of two 10 Gb NICs enough for the cluster, or are there any recommendations to avoid bottleneck/latency issues? Thanks
  2.

    PVE 5 : Three Node HA Cluster (Down to one Node)

    After adding auto_tie_breaker, last_man_standing & last_man_standing_window to /etc/pve/corosync.conf, I have corrupted my cluster environment. node1 shows a "cannot initialize cmap service" error; I tried everything to resolve it but failed. node2 is active after pvecm expected 1 and holds all VMs OK...
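For reference, the quorum options named in this snippet belong in the quorum section of /etc/pve/corosync.conf. A minimal sketch, with illustrative values; whether auto_tie_breaker and last_man_standing may be combined depends on the corosync version (see votequorum(5)), and config_version in the totem section must be bumped on every edit:

```
quorum {
  provider: corosync_votequorum
  expected_votes: 3
  auto_tie_breaker: 1
  last_man_standing: 1
  # window in milliseconds before recalculating votes
  last_man_standing_window: 20000
}
```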
  3.

    PVE 5 Rejecting I/O to offline device

    Checked devices are LOGGED IN. One more thing I found: when this happened we received ping drops continuously. Maybe we are getting these errors due to the ping drops/timeouts, but I don't know why I am receiving timeouts on a single node :(
  4.

    PVE 5 : Three Node HA Cluster (Down to one Node)

    Hi all, I have set up a 3-node HA cluster using PVE 5 and am trying to achieve this: if 2 nodes go down, quorum does not break and all the VMs move to the last standing node. I believe it is possible, but my understanding is not clear, as I am new to PVE. I have found there are a few features available but not...
  5.

    how to roll back / remove updates from pve: 5.0-21 to pve: 5.0-18

    Yes, HA is configured, and as you said, turning off 2 nodes (node2 & node3) makes node1 reboot. Checking the logs before the reboot, node1 shows "client watchdog expired". But when I take down node1 & node2, node3 does not restart; the same happened with node2. Yet node1 gets rebooted automatically. Note: we have created...
  6.

    how to roll back / remove updates from pve: 5.0-21 to pve: 5.0-18

    Switched to the old kernel, but now every time I turn off node2 or node3 and run the cluster on a single node, that node gets rebooted automatically. I don't understand what is going on :[
  7.

    how to roll back / remove updates from pve: 5.0-21 to pve: 5.0-18

    Actually, I am facing a problem while powering off 2 nodes in a 3-node cluster: my last node behaves abnormally, showing a "rejecting I/O" error; see the post below. After forcing quorum with the command "pvecm expected 1" and rebooting the last node, it works normally...
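A sketch of the recovery step this snippet describes, assuming a 3-node PVE cluster reduced to one surviving node; note that lowering the expected vote count disables quorum protection and risks split-brain if the other nodes return, so it is a last resort:

```shell
# On the sole surviving node, cluster operations are blocked without quorum:
pvecm status           # reports that the cluster is not quorate

# Tell votequorum that a single vote is sufficient:
pvecm expected 1       # the node regains quorum; VMs become manageable again

# Once the failed nodes are repaired and rejoin, restore the real value:
pvecm expected 3
```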
  8.

    how to roll back / remove updates from pve: 5.0-21 to pve: 5.0-18

    Hi, I want to revert the PVE updates to the previous state, please guide. Current version: # pveversion -v proxmox-ve: 5.0-21 (running kernel: 4.10.17-3-pve) pve-manager: 5.0-31 (running version: 5.0-31/27769b1f) pve-kernel-4.10.17-2-pve: 4.10.17-20 pve-kernel-4.10.15-1-pve: 4.10.15-15...
  9.

    PVE 5 Rejecting I/O to offline device

    Hi PVE members, I am using PVE 5 with a 3-node cluster attached to iSCSI multipath LVM shared storage, and I am receiving a "Rejecting I/O to offline device" error. Please advise. multipath -ll