Configuring a 2-node PVE cluster

dekou

New Member
Jun 18, 2024
Hello,

I want to set up a PVE infrastructure with 2 nodes, with replication as well, all of it on ZFS storage.
Is it possible to set this up, given the problems that quorum could cause? If not, what would be the best option in my case?
 
Two-node clusters are problematic (as many threads on this forum will show) unless you add a third vote: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_quorum and https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support
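For reference, the third vote is usually provided by a small external QDevice host. A minimal sketch, assuming a reachable witness machine at 192.0.2.10 (a placeholder address; any small Debian box or VM outside the cluster will do):

Code:
# on both PVE nodes
apt install corosync-qdevice

# on the external witness host
apt install corosync-qnetd

# on one PVE node: register the witness as the third vote
pvecm qdevice setup 192.0.2.10

# check that "Expected votes" is now 3
pvecm status

Note that pvecm qdevice setup expects root SSH access from the PVE node to the witness host.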
Hello,

Indeed, the issue is real.
Still, my concern is: won't the replication also run towards the voting node?
What I want is for replication to happen only between my 2 main nodes (in the case where I add a 3rd node for the vote).
 
It is possible to build a 2-node cluster, but you have to be aware of the implications, notably:
- live migration is possible (manually)
- HA works if and only if a 3rd vote allows you to reach quorum
- replication should in principle be possible, but be careful if the VM(s) are (manually) migrated to the "secondary" node (for maintenance, for example)... I have never done this myself, but it seems to me that you then have to manually reverse the direction of replication to avoid ending up with corrupted VMs (the periodic replication would otherwise "overwrite" the VM now running on the secondary node); see the sketch below.
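For context, a sketch of how a ZFS replication job between the two main nodes can be created from the CLI; the VM ID 100 and the node name pve2 below are placeholders:

Code:
# replicate VM 100 to node pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule '*/15'

# list jobs and check their last run / state
pvesr list
pvesr status

The same jobs can also be managed per guest in the GUI under the Replication tab.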
 
You can also see a 2-node cluster as a single interface for managing the two nodes, with more options than 1 + 1 isolated nodes, such as live migration or restoring VMs on the other node if one node fails abruptly...

EDIT: in general the use of an even number of nodes in any HA or cluster configuration is discouraged to avoid split-brain effects. Thus 4 nodes, 6 nodes, ... should also be avoided.
 
EDIT: in general the use of an even number of nodes in any HA or cluster configuration is discouraged to avoid split-brain effects. Thus 4 nodes, 6 nodes, ... should also be avoided.
True in theory. In practice, the chances of the cluster splitting down the middle (so half the nodes only see themselves and not the other half) are so astronomically low they may as well be zero. If this is really a concern for you, you can always set your quorum minimum at n/2 + 1 (a strict majority, e.g. 3 votes in a 4-node cluster), so you'd get fenced before you get to that condition.
 
True in theory. In practice, the chances of the cluster splitting down the middle (so half the nodes only see themselves and not the other half) are so astronomically low they may as well be zero. If this is really a concern for you, you can always set your quorum minimum at n/2 + 1 (a strict majority, e.g. 3 votes in a 4-node cluster), so you'd get fenced before you get to that condition.
I would not say astronomically low... although in theory, the greater the number of nodes, the lower the chance. AFAIK a split-brain situation is more likely to appear in a large-scale environment where half the nodes are in server room A and the other half in server room B, or connected to switch group C and switch group D. Even with redundant connections, if the links between the two halves are broken, they can't communicate.

Even large-scale companies (yes, I mean OVH, remember the fire...) can take a big hit after a major incident. Hence the odd number of nodes, to further reduce the "chances" of this type of problem.

But for a small-scale setup of 2-3 nodes for labbing, POCs, or even a small business (<10 people), this can cover most needs where a full-fledged, best-practices cluster may be too much cost / time / complexity.
 
AFAIK a split-brain situation is more likely to appear in a large-scale environment where half the nodes are in server room A and the other half in server room B
This is not a sane approach. When you have multiple failure domains, the design should account for that, e.g. two separate DCs with the potential for disrupted connectivity between them should be made redundant (and have an outside witness node), not treated as members of the same failure domain.

And again, even if you insisted on doing so, it's easily handled with proper quorum rules.
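As a reference point, the quorum rules in question would presumably be the corosync votequorum options; a sketch of the quorum section of /etc/pve/corosync.conf using two standard options (check the votequorum man page before relying on them):

Code:
quorum {
  provider: corosync_votequorum
  # the cluster only becomes quorate once all nodes have been seen at least once
  wait_for_all: 1
  # on an even split, the partition holding the lowest node ID keeps quorum
  auto_tie_breaker: 1
}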
 
This is not a sane approach. When you have multiple failure domains, the design should account for that, e.g. two separate DCs with the potential for disrupted connectivity between them should be made redundant (and have an outside witness node), not treated as members of the same failure domain.

And again, even if you insisted on doing so, it's easily handled with proper quorum rules.
I mostly agree with you, except that an outside witness node is just another form of an odd number of nodes, and that IT is only rarely the sole deciding factor: other things must be taken into account, like budget, existing rooms, possible paths for the interconnection links... so in the end it is a combination of precautionary measures that stack up.
 
Hello,
Thank you for your feedback, it has been really helpful.
However, I would like to know whether, still with 2 nodes and without a QDevice, there is a configuration that would always let me keep quorum. I don't know, some hack or something of the sort.
 
Technically yes, but consider the consequences: if communication is disrupted between the nodes for ANY reason, EACH ONE would consider itself to be the survivor.
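(Presumably the "technically yes" refers to corosync's two_node mode, or to forcing the expected vote count by hand; a sketch for reference, not a recommendation:)

Code:
# one-off, e.g. to regain quorum while the other node is down:
pvecm expected 1

# or persistently, in the quorum section of /etc/pve/corosync.conf:
#   quorum {
#     provider: corosync_votequorum
#     two_node: 1
#     wait_for_all: 1
#   }

Either way, a node that loses sight of its peer can stay (or become) quorate on its own, which is exactly the "each one considers itself the survivor" situation described above.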
Oh okay, I see.
But how does that work concretely?
 
