Proxmox VE 4.0 beta1 released!

Discussion in 'Proxmox VE: Installation and configuration' started by martin, Jun 23, 2015.

Thread Status:
Not open for further replies.
  1. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    4,598
    Likes Received:
    306
    All new mainboards have a hardware watchdog, so there is normally no need for an extra device. If not, softdog is part of the kernel and works as well.
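    To illustrate how such a watchdog device is typically driven on Linux (a minimal sketch, not the Proxmox implementation; the device path and interval here are assumptions):

        # Minimal sketch: kick /dev/watchdog periodically; if the kicks stop,
        # the hardware watchdog (or softdog) resets the node.
        import time

        WATCHDOG_DEV = "/dev/watchdog"   # hardware watchdog, or softdog if loaded
        PING_INTERVAL = 10               # seconds; must stay below the watchdog timeout

        wd = open(WATCHDOG_DEV, "wb", buffering=0)   # opening the device arms the timer
        while True:
            wd.write(b"\0")              # "kick" the watchdog so it does not expire
            time.sleep(PING_INTERVAL)
        # If this loop ever stalls (kernel hang, process killed, node unhealthy),
        # the timer expires and the node is reset.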
     
  2. amartin

    amartin New Member

    Joined:
    Jun 25, 2015
    Messages:
    5
    Likes Received:
    0
    Would the hardware watchdog work in conjunction with shared storage fencing, e.g. the way Pacemaker uses sbd?

    In any of these cases, how do the other nodes know for certain that the rogue node has been killed? Do they simply wait for the watchdog timer period and then after that assume that the node has been killed?
     
  3. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    4,598
    Likes Received:
    306
    It doesn't matter whether the node is unreachable from the cluster or powered off.
    If the rest of the cluster has quorum, the cluster knows that its own side is OK and that the one missing node is not.
    So after a time window (which depends on several factors) the cluster assumes the missing node is down and has fenced itself (self-fencing).
    If no partition has quorum, the real state doesn't matter: there is no node that can make a decision.
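    The majority rule behind this can be sketched in a few lines (a toy illustration, not Proxmox internals):

        # A partition may act only if it holds a strict majority of all votes.
        def has_quorum(partition_votes, total_votes):
            return partition_votes > total_votes // 2

        total = 3                      # example: 3-node cluster, 1 vote each
        print(has_quorum(2, total))    # True: the 2-node side keeps quorum and may recover VMs
        print(has_quorum(1, total))    # False: the isolated node loses quorum and must self-fence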
     
  4. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,432
    Likes Received:
    299
    We use a distributed locking mechanism, combined with the watchdog feature.
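    One way to picture how the lock and the watchdog work together (a conceptual sketch with invented names and timings, not the pve-ha-manager API):

        import time

        WATCHDOG_TIMEOUT = 60   # assumed: node resets this long after the last kick
        LOCK_TTL = 120          # assumed: cluster-wide lock expires after this long

        def run_ha_resources(cluster, node, watchdog):
            # The node may only run HA guests while it holds the cluster lock.
            while cluster.renew_lock(node, ttl=LOCK_TTL):
                watchdog.kick()        # keep the node alive only while the lock is valid
                time.sleep(10)
            # Lock renewal failed (e.g. node cut off from the cluster): stop kicking,
            # so the watchdog resets this node well before the lock TTL expires and
            # another node is allowed to take over the same guests.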
     
  5. amartin

    amartin New Member

    Joined:
    Jun 25, 2015
    Messages:
    5
    Likes Received:
    0

    My concern is that the rogue node could "recover" from the failed state and then start writing bad data to the shared storage (e.g. DRBD, Ceph, NFS) at the same time as the node that took over its VMs. If that were to happen, wouldn't two nodes be writing to the same VM image files at once and thus corrupting them?
     
  6. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,432
    Likes Received:
    299
    Our software solves exactly that problem. Or what do you think the software is for?
     
  7. amartin

    amartin New Member

    Joined:
    Jun 25, 2015
    Messages:
    5
    Likes Received:
    0
    Is the mechanism it uses to solve that problem generic, so that it works with any shared storage backend (DRBD, NFS, Ceph, Gluster, iSCSI, etc.)? Does it intercept I/O requests to the shared storage, or how does it work?
     
  8. pixel

    pixel Member
    Proxmox Subscriber

    Joined:
    Aug 6, 2014
    Messages:
    136
    Likes Received:
    2
    #48 pixel, Jun 26, 2015
    Last edited: Jun 27, 2015
  9. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,432
    Likes Received:
    299
    It uses a watchdog device to reboot nodes.
     
  10. haxxa

    haxxa New Member

    Joined:
    Jun 26, 2015
    Messages:
    29
    Likes Received:
    1
    Is it still recommended to use a 32-bit OS in an LXC container to save resources?
     
  11. ivix

    ivix New Member

    Joined:
    Feb 1, 2012
    Messages:
    13
    Likes Received:
    0
    Is it possible to place an LXC container on Ceph RBD storage?
     
  12. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,432
    Likes Received:
    299
    We plan to implement that soon.
     
  13. sengo

    sengo New Member

    Joined:
    Feb 19, 2009
    Messages:
    27
    Likes Received:
    0
    I see in the docs that the minimum number of nodes for an HA configuration is 3. Is a 2-node cluster no longer supported?
     
  14. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,432
    Likes Received:
    299
    No, that is not supported.
     
  15. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,447
    Likes Received:
    386
  16. sengo

    sengo New Member

    Joined:
    Feb 19, 2009
    Messages:
    27
    Likes Received:
    0
    What a pity. Two-node clusters are the most common configuration for us...
     
  17. sengo

    sengo New Member

    Joined:
    Feb 19, 2009
    Messages:
    27
    Likes Received:
    0
    "Not recommended" is one thing. Unsupported and impossible to enable is another...
     
  18. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    4,598
    Likes Received:
    306
    As I wrote before, Corosync no longer has a quorum disk. That's the point.
    And it is not impossible, just not reliable. Where is the tie-breaker when a 2-node cluster splits?
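    To make the split case concrete (a toy calculation using the same majority rule as above):

        # 2 nodes with 1 vote each; a split leaves 1 vote on either side.
        total_votes = 2
        for side_votes in (1, 1):
            print(side_votes > total_votes // 2)   # False, False: neither side is quorate,
                                                   # so neither side may fence or recover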
     
    #58 wolfgang, Jun 26, 2015
    Last edited: Jun 26, 2015
  19. sengo

    sengo New Member

    Joined:
    Feb 19, 2009
    Messages:
    27
    Likes Received:
    0
    Do you mean using:

        quorum {
          provider: corosync_votequorum
          two_node: 1
        }

    I'm no cluster expert, but how do other products (e.g. VMware) deal with 2-node clusters without fencing and quorum disks?
     
  20. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    4,598
    Likes Received:
    306
    I don't know what other software (e.g. VMware) does.
    But I know one thing for sure: it is not possible to build a reliable HA cluster with fewer than 3 decision makers.
    These can be implemented in different ways, e.g. via shared storage.
    See https://en.wikipedia.org/wiki/Byzantine_fault_tolerance
     