2-node cluster won't stay clustered

fonefoo

New Member
Nov 5, 2016
This is a new install. I'm using 2 Cisco UCS M4 blades (maybe 3 if all goes well).

I performed a default install via the ISO (latest version, 4.3 I believe). Everything goes well on both nodes.

I build the cluster on the first node and join the 2nd node to it. Everything still holds together.
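
For reference, the steps I used were roughly the standard ones (cluster name and IP here are placeholders):

    # on the first node: create the cluster
    pvecm create mycluster

    # on the second node: join the cluster via the first node's IP
    pvecm add 192.168.1.10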

It appears that whenever I make any sort of change, one of the cluster nodes shows as offline, and pvecm status reports quorum activity as blocked. The changes I'm talking about are things like creating a VM or adding storage.

If I perform a 'service networking restart' on both nodes, the cluster will resume for a time, but after a few minutes it disconnects again.
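
When it drops, this is roughly what I look at (just my own checks, nothing official):

    # quorum and membership state
    pvecm status

    # corosync / cluster filesystem health and recent log lines
    systemctl status corosync pve-cluster
    journalctl -u corosync -n 100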

Note: if I run one of these blades as a solo node I don't have any trouble; I can add iSCSI storage, build VMs, etc. I only begin to have trouble after clustering the nodes.

Any tips on things I should look for here?
 
Is there a method Proxmox offers to share storage and allow live migration in case of host failure, using just two hosts?
 
Just clustering two nodes shouldn't be a problem AFAIK, though HA isn't good in that case. I also made a two-node cluster, just to have the management a little more centralised.
One of my nodes wouldn't stay online either. I read somewhere that setting the network interface to promiscuous mode should help, and since I did that it has worked. I used the omping methods from the wiki to find out which node had the problem.
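
The multicast tests on the wiki were something like this (node names are placeholders; run omping on all nodes at the same time):

    # quick multicast test, runs for a few seconds
    omping -c 10000 -i 0.001 -F -q node1 node2

    # longer test (~10 minutes) to catch IGMP snooping timeouts
    omping -c 600 -i 1 -q node1 node2

And for promiscuous mode, I just set it on the bridge:

    ip link set vmbr0 promisc on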

I have the storage question, too.
For just two nodes, the easiest might be to create NFS shares on both nodes so it is possible to migrate and do backups. Of course that is not distributed storage.
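
A minimal sketch of that setup, assuming placeholder paths, storage IDs, and a 192.168.1.0/24 cluster network:

    # on each node: export a directory over NFS (example /etc/exports line)
    /srv/nfs-share 192.168.1.0/24(rw,sync,no_root_squash)

    # then register each export as cluster storage
    pvesm add nfs nfs-node1 --server 192.168.1.10 --export /srv/nfs-share --content images,backup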

For a shared-nothing cluster with shared/distributed storage, it would be interesting to know the recommended method. Sheepdog is just a preview, and so is DRBD9. Ceph I think is a bit of overkill (not usable for two nodes, and even for three nodes a bit much).
So what is a good solution for a two-node non-HA or a three-node cluster?
 
OK, so it seems like the architecture is just a little different from what I'm used to or familiar with.

I primarily manage Windows 2012 R2 Hyper-V clusters. You cluster the hosts, and the storage is in use by one host but shared by all in the cluster.

The solution I'm looking for doesn't have to be a "cluster"; I'm just looking to have the VMs migrate to another node should their hosting node fail.

BTW, I tried adding the promiscuous option to my bridge interfaces, but it didn't seem to help; the cluster eventually dies after a few minutes. (Note: omping host-to-host works, so it doesn't appear to be a multicast issue.)
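
What I added was roughly this, in /etc/network/interfaces (bridge name, ports, and addresses are placeholders from my setup):

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        # put the bridge into promiscuous mode when it comes up
        post-up ip link set vmbr0 promisc on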

In short: if a two-node cluster is not ideal, what would be the ideal Proxmox solution to allow live migration to another known host?
Thanks.
 
HA is only available for 3 or more nodes, as you need a way to break a tie vote if a node fails. I have a 2-node cluster running, but it's more so that I can control 2 nodes under one web GUI. All the storage is on NFS shares, and though I can migrate the VMs between nodes, it is not automatic.
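
As a side note, if one node of a two-node cluster dies and the survivor loses quorum, I understand you can manually override the expected vote count to get it working again (use with care; this is a manual override, not HA):

    # tell the surviving node that one vote is enough to be quorate
    pvecm expected 1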
 
What Microsoft does with the Hyper-V stuff isn't a bad thing, and with the replica mechanisms and so on they're doing many things right. That reminds me that I wanted to play around with Storage Spaces Direct; the last time I did that was in TP3 or so, and not everything was running smoothly.

Possibly pve-zsync is another option for the two-node thing.
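
A minimal sketch of that (the VM ID, IP, pool name, and job name are placeholders; pve-zsync needs ZFS-backed disks on both nodes):

    # replicate VM 100's ZFS disks to the other node, keeping 7 snapshots
    pve-zsync create --source 100 --dest 192.168.1.11:rpool/backup --name vm100sync --maxsnap 7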

But as I said, I am also interested in the best method to distribute/mirror storage across two nodes in the easiest way that still allows a manual failover.
 
