Proxmox Cluster: no HA, config sync only

Feb 3, 2022
Hello everyone,

maybe this question has been asked somewhere already, but I couldn't find it:
Is it possible to remove the HA functionality and use the cluster only as a config cluster?
So that the information about storage, users, etc. is shared, but without HA, so I can turn hosts on/off as needed?

Kind regards,
 
Is fencing of nodes only active when using HA?

Yes.
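If you want to double-check that nothing can trigger fencing, the HA resource list on any node should be empty. A minimal sketch (the resource ID vm:100 below is only an example):

    # list configured HA resources; with an empty list the LRMs stay idle and no fencing happens
    ha-manager status
    # if a VM had been added to HA earlier, it can be removed again, e.g. for VM 100:
    ha-manager remove vm:100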

Turning off 2 out of 3 clustered nodes (but no HA) will leave the last one running its VMs as usual?

Guests keep running, but without quorum the whole cluster becomes read-only. You can do nothing; AFAIK, not even log in to the web UI!

The expectation of a PVE cluster is to be quorate all the time. If you cannot or do not want to guarantee this, do not set up a cluster.

Can you circumvent this with some scuffed workarounds? Yes.
Is it supported? No, by no means!
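For checking the quorum state yourself (standard PVE tooling, nothing extra assumed):

    # shows corosync membership and "Quorate: Yes/No"; /etc/pve is only writable while quorate
    pvecm status
    # /etc/pve itself is the pmxcfs FUSE mount that goes read-only without quorum
    mount | grep /etc/pve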
 
Let's say we are using 3 nodes with local storage and without HA. One NIC from each node is used for the cluster network traffic, and they are connected to a separate switch that reboots due to a software update. (Production VMs are connected to other switches, which are redundant.)
Will everything keep running, except being unable to log in to the web GUI until the switch is back online? We don't want VMs to shut down or VM disks to become read-only.
(We are trying to figure out if we really need redundant links for cluster traffic if we don't use HA or shared storage.)
 

It should, yes, but:
  1. Better get / wait for additional opinions/experiences from others, and
  2. Test it yourself in a (virtual) test environment first, and afterwards at a safe, controlled and prepared moment on your production cluster.
But to be honest: you are talking about a production system where, of course, no interruption is wanted. So why take the risk at all?
Corosync redundancy [1] is generally recommended!

[1] https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_redundancy
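As a rough sketch of what such a redundant setup can look like when building a cluster from the command line (cluster name and addresses are placeholders, not taken from this thread; see [1] for adding a link to an existing cluster):

    # create the cluster with two corosync links on separate networks
    pvecm create mycluster --link0 10.10.10.11 --link1 10.10.20.11
    # join a further node via an existing member, passing that node's own link addresses
    pvecm add 10.10.10.11 --link0 10.10.10.12 --link1 10.10.20.12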
 

The corosync cluster management of Proxmox only manages the /etc/pve directory (replication between nodes, with all node/VM/storage config).
So it does not impact running VMs or storage.

Only /etc/pve goes read-only if a node loses quorum. (And if you have HA enabled on at least one VM on that node, the node will be fenced/rebooted.)

You should be able to log in, but no start/modify/migrate can be done on the VMs while no quorum is available.
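To illustrate (VM ID 100 is just an example; the exact error text may differ between versions):

    # on a node without quorum, running guests keep working, but for example:
    qm start 100
    # fails with something like "cluster not ready - no quorum? (500)"
    # because /etc/pve is read-only; once quorum returns, everything works again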
 
TL;DR:
Given that you have another network for VM traffic, simply add a VLAN to that bridge and use it for Corosync Link1. Leave the dedicated network for Corosync Link0. Corosync uses very little bandwidth (i.e. it will not impact VM traffic at all), and you will have redundant networks that will keep your cluster quorate if one of them fails.
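A minimal sketch of that suggestion, assuming ifupdown2 and a VLAN-aware bridge (interface name, VLAN ID and address are examples only):

    # /etc/network/interfaces (excerpt) on one node: VLAN 50 on the VM bridge, used for Corosync Link1
    auto vmbr1.50
    iface vmbr1.50 inet static
        address 10.10.50.11/24

The node's entry in /etc/pve/corosync.conf then gets a ring1_addr pointing at that address (and config_version has to be bumped), as described in the cluster manager documentation.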

Long version:
To properly use a Proxmox cluster, you must use redundant links for Corosync, split across different NICs and switches in anything but test clusters, regardless of whether you use shared storage or HA.

Remember that even without quorum you can log in to the web UI if you are on >= 7.3, AFAIK (this will not work with previous versions). If you were logged in before losing quorum, you will still be able to use the web UI. You won't be able to do much besides stopping VMs and LXCs. While you could force quorum with pvecm expected, it's risky if you don't know what you are doing and its implications.

In the example you described, the VMs will keep running while the switch reboots and no downtime should happen.

But now imagine that your cluster switch breaks. You may have a replacement at hand or you may not, and even getting to the location where it is installed might be tricky. As you have lost quorum, no operations are allowed in the cluster; not even backups can run. So you use pvecm expected 1 on each of your nodes to regain operation and run some backups, maybe even make a config change. Then you connect a switch and the nodes start seeing each other again, but pmxcfs on each node will merge their changes and replication conflicts may arise... sounds like a nightmare to me :)
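For completeness, the forced-quorum workaround mentioned above is just the following, to be used only on an isolated node and only if you understand the consequences described here:

    # lowers the expected vote count so this single node considers itself quorate again
    pvecm expected 1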
 
Thanks for the clarification, everyone! These answers should help the OP as well. In this particular cluster we want to use the NICs for other traffic, otherwise we would have gone with redundant paths; we could do a VLAN for a second path as VictorSTS said, but we wanted to be sure that fencing would not be involved when there is no HA and that only /etc/pve goes read-only. We will start with PVE 8.0.3 in this cluster, so the web GUI should then also be available at least.

The servers are also close by in this case, and we can replace the switch if there is a failure.
 
