"Best practise" for a two node proxmox/storage cluster

What is so risky about it that many others are suggesting not to use it yet? And when should it be safe to use? Could it take months?
My personal experience: three times a destroyed virtual disk (the root partition of a small, more or less static webserver with little load). The boot then ended in the GRUB rescue shell because the filesystem had suffered serious damage. I don't know what others do differently; I just followed the instructions from PVE/Linbit.
I talked to a guy at Linbit who was a little surprised that I planned to go into production with DRBD9 in its state at the time, and he wished me good luck - which unfortunately didn't materialize ;-)
I read somewhere that they plan to reach a final state by the end of June - but I can't find that info anymore.
 
Sorry about that.
So, let's say I give up on having HA and stick with manual migration in a cluster of two nodes... can I still have DRBD to sync the local storage? Does it make sense?
 
I have a stupid question.

If you don't need HA, what is the purpose of the shared storage in the first place? What stops you from just running two nodes with two VM storage points?

The clustering "limitation" of requiring 3 nodes is NOT A PROXMOX LIMITATION. It's a logical necessity; you cannot have a meaningful quorum with only two voters. If you're angry with Proxmox not allowing you to try, it's because they're trying to protect you from data loss. It is completely possible to have an observer node just for watching quorum and two nodes providing services, if you don't wish to add a full-fledged cluster node for expense reasons, but the purpose of the clustering is not to save you from needing hardware.
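For illustration, a minimal sketch of building such a 2+1 cluster with the standard pvecm commands (the cluster name and the 192.168.1.x address are made-up examples):

Code:
# on the first full node: create the cluster
pvecm create homelab

# on the second full node and on the small observer node:
# join the cluster by pointing at the first node's address
pvecm add 192.168.1.1

# afterwards, all three members should show up with one vote each
pvecm status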

Also, for shared storage, the simplest solution is an external NAS (iSCSI or NFS), but if you insist on doing it "in the box", Gluster would work.
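As a rough sketch, an NFS export would be added as shared storage in /etc/pve/storage.cfg (or via the GUI); the storage ID, server address, and export path here are hypothetical:

Code:
# /etc/pve/storage.cfg (example entry; names and paths are made up)
nfs: shared-nfs
        path /mnt/pve/shared-nfs
        server 192.168.1.50
        export /export/pve
        content images,rootdir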
 
If you're angry with Proxmox not allowing you to try, it's because they're trying to protect you from data loss.
Please don't get me wrong, I'm just trying to figure out how clustering (with Proxmox at least) works. I understand that I need at least three nodes to have a reliable cluster, but of course I'm interested in an inexpensive setup for home-lab purposes. I know that having a node just for quorum voting doesn't make much sense in production, but I believe it could be a doable solution for "home users".

If you don't need HA, what is the purpose of the shared storage in the first place? What stops you from just running two nodes with two VM storage points?
I was just interested in understanding the difference between two separate Proxmox servers and a cluster of two nodes, each with its own storage. Maybe central management?
 
but of course I'm interested in an inexpensive setup for home-lab purposes.

If you're conflating "home use" with "unreliable", I think it's safe to say that Proxmox is not catering to you. For the most part, the target audience is looking at Proxmox as a production-quality solution. Based on what you're describing, your inexpensive home setup will not benefit from clustering in any meaningful sense. I'll take it one step further and suggest you gain virtually nothing by having multiple hypervisors at all.

If you just want a lab to play with, why not set up 3 Proxmox virtual machines for a virtual cluster?
 
If you just want a lab to play with, why not set up 3 Proxmox virtual machines for a virtual cluster?
Because I already have nice hardware to comfortably run Proxmox on, and I actually USE it as a hypervisor. So in your opinion, a 3-node solution with one "low-resources" node, as the guys before me also suggested, is rubbish?
 
Running a 2+1 cluster is perfectly workable. My thought was that, based on what I understood you want to accomplish, there isn't anything to be gained by using 3 machines when 1 would do. You can run the services you want for your normal use plus have a virtual cluster to experiment with.
 
I now happen to have two spare Xeons with 16 GB ECC RAM, so I figured that I could use both of them, at least for some time, just to build up a good knowledge of Proxmox. Or maybe indefinitely, it's just 40 watts each...
The point is that I can't seem to find comprehensive documentation about this two-plus-one setup. It seems like you're doing some kind of hack... and this thread is one of the few on this very topic, and there seems to be a lot of confusion...
 
The documentation is not materially different from a normal 3-node setup; it's just that one node is not meant to host resources - it's only meant to provide cluster voting services.

In practice, any machine (even a virtual one) that runs Proxmox and has connectivity to the cluster traffic network will serve.
 
In practice, any machine (even a virtual one) that runs Proxmox and has connectivity to the cluster traffic network will serve.

Not really. If you have 2 physical nodes and 1 virtual quorum node running on one of the physical nodes, you lose quorum when the physical node that hosts the virtual node has a total system crash. So you still have a SPOF.
 
Not really. If you have 2 physical nodes and 1 virtual quorum node running on one of the physical nodes, you lose quorum when the physical node that hosts the virtual node has a total system crash. So you still have a SPOF.

I think what alexskysilk meant was that it would even be possible to run the third (quorum) node as a VM somewhere else, as long as it has reliable (low-latency) access to the cluster network. Running the third node as a VM on one of the two "real" nodes of the cluster does not gain you anything, that is of course correct ;)
 
Hey guys,

thanks a lot for your ideas, suggestions and hints!
In the meantime I set up a test cluster looking like this:

Hardware:
2x HP ProLiant DL380 G7 with (each)
- 2x Intel Xeon E5630
- about 50 GB RAM
- 4x onboard 1 GBit NIC
- 2x 10 GBit NIC (Intel X710-DA2)
- 1x 64 GB Consumer SSD
- 1x 480 GB Intel S3510 SSD

On top I installed PVE 4 and DRBD 9 according to the docs in the Proxmox wiki; PVE lives on the 64 GB SSDs, and the 480 GB SSDs are the DRBD storage. So far so easy...
The only special thing I did was to add "two_node: 1" to the quorum section of /etc/pve/corosync.conf.
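For reference, the quorum section then looks roughly like this (sketch; note that two_node: 1 also implies wait_for_all, so both nodes must be up together once before the cluster becomes quorate):

Code:
quorum {
  provider: corosync_votequorum
  two_node: 1
}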

And I'd like to mention that I
- haven't configured any HA resources
- haven't set "start on boot" -> yes on any VM

Today I did my first crash test ;-) I will describe my results tomorrow.
But please feel free to tell me: What could go wrong with this setup? Hit me! :)

Thanks a lot and many greets
Stephan
 
What could go wrong with this setup? Hit me! :)
You can get a split brain.
If your nodes have different opinions on something (for instance the content of a file), there is no way to say who is right and who is wrong.

Also, you can never activate HA in such a setup, for the same reason: if your nodes disagree, there is no way to break the tie.
 
Hi,

thank you for your fast reply!

You can get a split brain.
If your nodes have different opinions on something (for instance the content of a file), there is no way to say who is right and who is wrong.
Could you give me an example of how something like that could happen? Maybe I could resolve such a scenario by hand?
Also, you can never activate HA in such a setup, for the same reason: if your nodes disagree, there is no way to break the tie.
That's OK for me - as far as I can see, I don't need HA.

Other concerns?

Thanks and greets
Stephan
 
Could you give me an example of how something like that could happen?
For instance: bit rot, memory errors (unlikely with ECC RAM), controller errors, SSD errors, network errors, cosmic radiation, etc.
A little more likely is the following scenario:

the network fails for a short time -> both nodes act as if they were the only one -> both write something to disk ->
now the network works again -> split brain
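If that happens on the DRBD level, the resources refuse to reconnect and you have to pick a victim by hand; a rough sketch of the usual manual recovery (the resource name r0 is hypothetical, and the exact commands vary between DRBD versions):

Code:
# on the node whose changes you are willing to throw away
drbdadm disconnect r0
drbdadm secondary r0
drbdadm connect --discard-my-data r0

# on the surviving node (only needed if it is StandAlone)
drbdadm connect r0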
 
While what Dominik suggests SOUNDS far-fetched, the very fact that it's possible renders the solution undependable. The point of clustering is precisely FOR this purpose - to make your service dependable. Making an undependable cluster defeats the purpose in the first place; you may as well just have one server.

That's OK for me - as far as I can see, I don't need HA.
Why bother with DRBD at all, then? Just create local file systems and share via NFS. Done.
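A bare-bones sketch of that idea, assuming one node exports a local directory to the other (the path and subnet are made up):

Code:
# /etc/exports on the node that holds the disks
/srv/vmstore 192.168.1.0/24(rw,sync,no_root_squash)

# apply the export and check it
exportfs -ra
showmount -e localhost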
 
the network fails for a short time -> both nodes act as if they were the only one -> both write something to disk ->
now the network works again -> split brain

To be clear: what exactly do you mean by "write something to disk"? Do you mean VMs writing something to their vDisks? That wouldn't be a problem, because DRBD9 manages this.
Or do you mean something like Proxmox writing to a config file or such? Could you give me a concrete example of what could happen?

Thanks and greets
Stephan
 
While what Dominik suggests SOUNDS far-fetched, the very fact that it's possible renders the solution undependable. The point of clustering is precisely FOR this purpose - to make your service dependable. Making an undependable cluster defeats the purpose in the first place; you may as well just have one server.
I take these scenarios very seriously! My basic question is more like this: What exactly is so bad about split brain? In my eyes split brain means: "OK, here you have two different data states - please decide which one is correct and which one you'd like to throw into /dev/null".
And as I mentioned before: with a two-node DRBD9 solution you can't end up with a split brain of your vDisks, because they are always primary on the node where the VM runs.

Why bother with DRBD at all, then? Just create local file systems and share via NFS. Done.
As mentioned, I have two main targets:
1. If one node explodes, start its VMs on the second node (manually!), so business can go on while I'm repairing the first node.
2. Having no single point of failure such as an NFS server.

Greets
Stephan
 
What exactly is so bad about split brain?
Good question. A split brain is a condition in which the elements of a replicated cluster contain different values. When a split brain occurs, neither data set is trustworthy, because you don't know which chunk of data went to which member correctly following the disruption of communication. This goes back to the odd-number requirement for a coherent cluster - if you have two members in agreement and one not, you avoid creating this condition.

With a two-node DRBD9 solution you can't end up with a split brain of your vDisks, because they are always primary on the node where the VM runs.
Not so. DRBD9 is SYNCHRONOUS, which means a completed write involves all elements of the cluster. It is entirely possible that only the remote dataset contains the correct data after a local disk fault, controller fault, memory fault, etc. This page may explain it better. If you want an asynchronous method, you may want to use ZFS + zfs send.
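A minimal sketch of that asynchronous approach (the dataset name, snapshot names, and target host are hypothetical):

Code:
# snapshot the VM disk dataset and ship the increment to the other node
zfs snapshot rpool/data/vm-100-disk-1@rep2
zfs send -i rpool/data/vm-100-disk-1@rep1 rpool/data/vm-100-disk-1@rep2 \
  | ssh node2 zfs recv -F rpool/data/vm-100-disk-1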

As mentioned, I have two main targets:
1. If one node explodes, start its VMs on the second node (manually!), so business can go on while I'm repairing the first node.
2. Having no single point of failure such as an NFS server.

Laughs. Of course you do. The problem is that accomplishing those goals requires a far more complex mechanism than you are accounting for; depending on how much learning you wish to do and are capable of, the resources you can bring to bear, and your tolerance for downtime, each set of compromises brings its own risks and pitfalls. That's what this thread has been trying to bring to your attention. To wit: DRBD with 2 nodes is likely to cause you far more pain than a single iSCSI NAS and two hypervisors. Painful experience speaking.
 
Hi,

What's wrong with this setup? (screenshot: clusterpxm_anonymized.png)

In my case I just want a cluster for centralized administration. No HA needed, no VM balancing from node 1 to node 2. VMs are stored on NFS storage.
Here is the pvecm status output for additional info:
Code:
pvecm status
Quorum information
------------------
Date:             Wed Jul  6 10:44:46 2016
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          24
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           1
Flags:            2Node Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 X.X.X.Y (local)
0x00000002          1 X.X.X.Z

Apparently it works perfectly, so I don't understand why people say two nodes are bad.

Regards
 