2 node cluster

applejuice

Member
Oct 13, 2019
I just got a new computer that is going to replace my old Proxmox box; both are running Proxmox 7.1. I thought the easiest way to migrate the VMs to the new machine was to create a cluster and then migrate them. Is there any reason I can't keep the old machine and the cluster intact when I am done migrating, leaving me with a 2-node cluster, without worrying about quorum? Isn't quorum just for HA? I already migrated one VM and all seems well, but I'm a bit nervous about this uncharted territory. Can I shut down the old box in the cluster, or just keep it running in "standby mode", without problems either way? I appreciate any help for a Unix novice.
 
Quorum is for cluster communication: you need an odd number of votes, which for a small cluster means at least 3 nodes, to function properly. You can set up a quorum device on an old PC, a Raspberry Pi, etc.
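
For reference, here is a minimal sketch of that QDevice setup, assuming a Debian-based host (an old PC or Raspberry Pi) reachable at 192.168.1.50 (an example address, adjust to yours):

# On the external quorum host (Debian / Raspberry Pi OS):
apt install corosync-qnetd

# On every Proxmox VE node in the cluster:
apt install corosync-qdevice

# On one cluster node, register the external vote (example IP):
pvecm qdevice setup 192.168.1.50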
 
Just to expand upon ness1602's answer a bit:

No, a cluster needs to be quorate at all times to work properly, not just for HA. High availability just means that the cluster will try to keep your HA-enabled VMs and containers always available, i.e. if a cluster node fails, the HA manager will launch the HA-managed guests on another cluster node.

While a 2-node cluster works as long as both nodes are active, if one of them goes down the cluster automatically becomes non-quorate and will no longer work as expected. To maintain quorum you need at least 3 votes, though you do not need 3 full Proxmox installations; as ness1602 said, you can set up something like a Raspberry Pi as a QDevice for external vote support.
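
If you want to see the vote count and quorum state for yourself, the standard pvecm tooling shows it on any node:

# Membership, expected votes and whether the cluster is quorate:
pvecm status

# List the nodes the cluster currently knows about:
pvecm nodes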

I really recommend checking out the wiki page about Cluster Management if you haven't already.

One thing I am still wondering about: did you join your second node to the cluster while it had active guests, or did you do something similar to what is described in Cluster Management - Adding a node, i.e. back the guests up and restore them later? Just asking, because the former could lead to conflicting configurations.
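
For completeness, the backup-and-restore route from that page boils down to something like this (the VM ID 100, storage names and dump filename are just placeholders):

# On the old standalone node, back up the guest before joining:
vzdump 100 --storage local --mode stop

# Copy the dump over, then restore it on the target node:
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2022_01_01-00_00_00.vma.zst 100 --storage local-lvm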
 
I ended up creating the cluster on the old node, joining the new machine to it, and migrating my VMs. Unfortunately, my primary VM, Home Assistant, gave errors I could not solve when migrating. I also tried restoring it from backup, which gave different errors. As a test, I restored Home Assistant back to the old machine with no problems. I finally got it onto the new node by backing it up with its own backup/restore feature into a fresh VM, and the other 3 VMs migrated beautifully. It was pretty cool starting a ping to a VM from my Mac, kicking off a migration, and watching the pings while it progressed and completed: one VM dropped no pings and one dropped 2. I then removed the original node from the cluster. I would definitely do it the same way in the future, now that I have seen the process and its limitations.
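
For anyone following along, the command-line version of those steps is roughly this (cluster name, IP, VM ID and node name are examples from my setup, not gospel):

# On the old node: create the cluster
pvecm create mycluster

# On the new node: join the cluster via the old node's IP
pvecm add 192.168.1.10

# Live-migrate a running VM to the new node
qm migrate 100 newnode --online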

For some reason, before setting up the cluster and testing the backup/restore option, I couldn't get the new Proxmox install to log in to my Synology CIFS share with the same settings and credentials that were working on the original install. I am pretty sure this is due to my crappy TP-Link switch and my use of VLANs. I may do some more experimenting with all of this clustering after my new D-Link switch arrives. (I run a pfSense router with a UniFi switch and access point, but since UniFi 8-port switches have been out of stock for quite a while, I have been using my old TP-Link TL-SG108E as a second switch at my desk, which is probably the source of some of my issues because of the crappy way it does VLANs.) I could have shortened all of that to "network problems", lol, but other people's stories on these forums are what got me through the issues I had, so I hope my verbose answer helps someone in the future.

BTW, creating the cluster and joining the new Proxmox node to it solved the issue I was having connecting to my Synology share, because the new node magically had access to it once joined to the cluster! I assume that's because storage definitions live in the cluster-wide configuration, so the new node picked up the share settings automatically. The more I learn Proxmox, the more I LOVE it!

BTW, I believe the Proxmox documentation instructions for removing a node should include a step for deleting the files under /etc/pve/nodes/ to clean up after the node is gone, and information about repairing quorum after deleting a node from a 2-node cluster. I found the answer to that one on the internet, not in the docs (to be fair, maybe I missed it there).
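
For anyone who runs into the same thing, the recipe I pieced together looks like this (the node name is an example, and be careful, these commands are destructive):

# On the surviving node of a 2-node cluster, regain quorum first
# so /etc/pve becomes writable again (only safe if the other node
# is permanently gone or powered off):
pvecm expected 1

# Remove the old node from the cluster configuration:
pvecm delnode oldnode

# Clean up the leftover node directory the docs don't mention:
rm -rf /etc/pve/nodes/oldnode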
 
