Problems with cluster (adding nodes and QDevices)

patefoniq

Hi,

As some of you have already noticed in other threads, I have huge problems with my cluster, more precisely with adding a node and a QDevice for quorum.

Without writing too much, I'll describe my issues.

I migrated from VMware to Proxmox, and that part was successful. I have two separate nodes which I wanted to join into one cluster, as I had done in VMware via vCenter. It's not my first time creating a Proxmox cluster (I've used it since about 2018), so I thought it would be quick and straightforward.

My setup:
I have two nodes. One is now working in production and hosts many of the VMs; I created the cluster on it without any issues. The second one was non-prod, but now I would like to add it to the cluster and make it a full member. I also have network storage which I would like to use as a QDevice for the quorum.

Components:
  • 2x PVE 8.1.3
  • 1x OMV as a SAN for VM storage, planned as the QDevice for quorum.

Network configuration:
  • Each of the planned cluster members has 2 separate network cards: one for the VM LAN and a second one for the SAN (AKA storage network), which was planned for corosync communication and VM storage migration (the equivalent of storage vMotion in VMware).

My problems started with adding a node. I couldn't add one due to some hostname issues. I was using the option "5.4.3. Adding Nodes with Separated Cluster Network" from the Proxmox documentation. The hosts files were correct on all nodes, I could ping the hosts by hostname, and I could connect between them via SSH (passwordless, via SSH keys). I always got the below message:

500 Can't connect to ip.ip.ip.ip:8006 (hostname verification failed)

which told me literally nothing, especially when I could ping my hosts via hostnames, and also connect between them via SSH using the hostnames without any issues.
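
For context, the hosts files looked roughly like this (names and addresses here are made up, not my real ones):

Code:
# /etc/hosts - LAN and storage-network entries on both nodes
192.168.1.11   pve1.example.local   pve1
192.168.1.12   pve2.example.local   pve2
10.10.10.11    pve1-corosync.example.local   pve1-corosync
10.10.10.12    pve2-corosync.example.local   pve2-corosync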

I tried to modify my hosts file in many different ways, without any result. After hours of trying, I finally figured out what was causing this behavior.

The solution was to use the FQDN (for both the cluster and the joining node) instead of the short hostname or IP in the command that adds the node to the cluster. Don't ask me why it worked like that, but it finally worked for me.
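
For anyone hitting the same thing, the join command that finally went through looked roughly like this (the FQDN and the corosync-network address below are placeholders, not my real ones):

Code:
# run on the joining node: FQDN of an existing cluster member,
# plus this node's address on the separated cluster network
pvecm add pve1.example.local --link0 10.10.10.12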

As I wrote above, I have been doing this for years and never had such issues.

Ok, let's move on. So, when the command finally worked, here we go again.

I got stuck on the "waiting for quorum..." message, and all files in the "/etc/pve" folder on the joining node were deleted, which completely ruined my PVE2 configuration, including the SSH keys and SSL certs. And here comes my question:

Why the hell don't you create a backup of the "/etc/pve" folder during the node-join process? I know you probably expect admins to do it themselves, but I think it should be part of the process, just in case of failure.
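
Something as simple as this, run right before the join, would have saved me hours (just a plain tar of the mounted config directory, nothing Proxmox-specific):

Code:
# keep a copy of the cluster filesystem contents before touching the cluster config
tar czf /root/etc-pve-backup-$(date +%F).tar.gz /etc/pve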

I also have one small problem. While trying to make everything work, I changed the expected vote count to one. Now, when I try to change it back to 2 votes, I get this message:

Code:
pvecm expected 2
Unable to set expected votes: CS_ERR_INVALID_PARAM

Code:
Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate
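
If it helps anyone, these stock commands show where those numbers come from (the configured per-node votes and the live quorum state):

Code:
# configured votes per node live in the nodelist section
grep -A 20 nodelist /etc/corosync/corosync.conf
# live quorum state, roughly the same information as pvecm status
corosync-quorumtool -s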

And the last one -- Qdevice setup.
I prepared everything according to the documentation and ran the commands, one on the PVE node which acts as the cluster "master" and the second on the OMV box which should provide the QDevice vote. Everything seemed to work (added successfully), but my QDevice has no vote:

0x00000000 0 Qdevice (votes 0)

What's wrong with that?
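
For reference, the procedure I followed is roughly the documented one (the address is just a placeholder for my OMV box):

Code:
# on the external OMV machine
apt install corosync-qnetd
# on the cluster node(s)
apt install corosync-qdevice
pvecm qdevice setup 10.10.10.20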

Finally, I would like to achieve one of the below scenarios:

1. Creating the working production cluster with a separate network for corosync and storage migration*.
2. Just returning to two single nodes by rolling back all of my changes.

* and here comes the question: is this possible at all? In VMware I had to pay for the "storage vMotion" functionality, but in the end it worked. On PVE 6 I tried to move my VMs between storages, but I'm not sure whether the data was copied over the storage/corosync network. But OK, it's not critical and I won't die without it :)

My last and biggest observation: I think your documentation lacks technical information. There are no steps for disaster and failure scenarios, no pointers to the logs where issues can be found, and no instructions on how to roll back changes or which folders should be backed up so that the configuration can be restored. This matters especially when you make a lot of radical changes in your engine.
Please take a cue here from the documentation of VMware, Oracle, Red Hat, etc.
 
a second one for the SAN (AKA storage network), which was planned for corosync communication and VM storage migration (the equivalent of storage vMotion in VMware).
You shouldn't do that. A migration can completely saturate the link, and if that happens your cluster could reboot. You can put Corosync on the same network as cluster management, but definitely not on the same one as migration.

Did you follow the instructions? https://pve.proxmox.com/wiki/Cluster_Manager
You can also easily create the cluster via the GUI, it's just copy & paste and you're done, the GUI does the rest for you.
 
You shouldn't do that. A migration can completely saturate the link, and if that happens your cluster could reboot. You can put Corosync on the same network as cluster management, but definitely not on the same one as migration.

Did you follow the instructions? https://pve.proxmox.com/wiki/Cluster_Manager
You can also easily create the cluster via the GUI, it's just copy & paste and you're done, the GUI does the rest for you.
Thank you for the advice, but I'm using a 10 Gbit network as a storage network, so I'm not worried about this part. No, I have been doing it via CLI due to a "separate network" scenario.
 
No, I have been doing it via CLI due to a "separate network" scenario.
Okay, what command did you enter and what error message did you get?

However, there are still instructions on the page I mentioned that you should pay attention to or consider in one way or another. So read first, then act - otherwise we end up where we are now.
 
OK.

I tried to do this via the web UI, and I'm in the same situation. The web UI on the master node hangs on:

[screenshot]

The joining node hangs on:

[screenshot]

[edit]
After a lot of tries, the joining node hangs on:

[screenshot]

The master node has the same message as previously.

After hours of trying, my opinion is that there is something wrong with the PVE services. Maybe nobody has noticed because most people created their clusters a few versions back. I have a testing environment which I created on PVE v5 and then turned into a cluster. I upgraded it to version 7 without any issues, but I'm a little afraid to take it to v8.
 
Hi,

As some of you have already noticed in other threads, I have huge problems with my cluster, more precisely with adding a node and a QDevice for quorum.

Welcome. ;)

I have two nodes. One is now working in production and hosts many of the VMs; I created the cluster on it without any issues. The second one was non-prod, but now I would like to add it to the cluster and make it a full member. I also have network storage which I would like to use as a QDevice for the quorum.

Before going any further, do you have backups of the VMs off these two nodes, in case we destroy everything?

Components:
  • 2x PVE 8.1.3
  • 1x OMV as a SAN for VM storage, planned as the QDevice for quorum.

Network configuration:
  • Each of the planned cluster members has 2 separate network cards: one for the VM LAN and a second one for the SAN (AKA storage network), which was planned for corosync communication and VM storage migration (the equivalent of storage vMotion in VMware).

As @sb-jw mentioned before me, I would not run corosync (for the two nodes; the QDevice is fine) on the same link that replicas get sent through. How can you be sure 10G won't get saturated? Is your storage slower? This might not be an issue now, but once you start running anything HA it will come back to bite you.
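
(If you do want some resilience there, corosync can also carry a second link as a fallback; a very rough sketch with made-up addresses, each nodelist entry in corosync.conf carrying both:)

Code:
node {
  name: pve1
  nodeid: 1
  quorum_votes: 1
  # management LAN, primary corosync link
  ring0_addr: 192.168.1.11
  # storage network, fallback link only
  ring1_addr: 10.10.10.11
}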

My problems started with adding a node. I couldn't add one due to some hostname issues. I was using the option "5.4.3. Adding Nodes with Separated Cluster Network" from the Proxmox documentation. The hosts files were correct on all nodes, I could ping the hosts by hostname, and I could connect between them via SSH (passwordless, via SSH keys). I always got the below message:

500 Can't connect to ip.ip.ip.ip:8006 (hostname verification failed)

which told me literally nothing, especially when I could ping my hosts via hostnames, and also connect between them via SSH using the hostnames without any issues.

I tried to modify my hosts file in many different ways, without any result. After hours of trying, I finally figured out what was causing this behavior.

The solution was to use the FQDN (for both the cluster and the joining node) instead of the short hostname or IP in the command that adds the node to the cluster. Don't ask me why it worked like that, but it finally worked for me.

Can you post the content of /etc/corosync/corosync.conf from each of the two nodes? I know you like to censor IPs, but it would be really nice to see it (supposedly private LAN ranges anyhow). If you absolutely want to censor it, please just find/replace the addresses with other ones consistently.

As I wrote above, I have been doing this for years and never had such issues.

Welcome to 8.1.3 - now this is a joke, ok? Are you sure you want to run the most recent release in production?

Ok, let's move on. So, when the command finally worked, here we go again.

I got stuck on the "waiting for quorum..." message, and all files in the "/etc/pve" folder on the joining node were deleted, which completely ruined my PVE2 configuration, including the SSH keys and SSL certs. And here comes my question:

The certs and keys can be regenerated (with some side-effect hassle), but are your VMs all running fine now?

I also have one small problem. While trying to make everything work, I changed the expected vote count to one. Now, when I try to change it back to 2 votes, I get this message:

Code:
pvecm expected 2
Unable to set expected votes: CS_ERR_INVALID_PARAM

Code:
Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

What does pvecm status say on each node?

And the last one -- Qdevice setup.
I prepared everything according to the documentation and ran the commands, one on the PVE node which acts as the cluster "master" and the second on the OMV box which should provide the QDevice vote. Everything seemed to work (added successfully), but my QDevice has no vote:

0x00000000 0 Qdevice (votes 0)

In which order were you doing this? Above you mentioned you did not get to join the second node to the "master" (the one first converted to a cluster, I presume). If that did not work, how did you go about adding a third vote member to a (non-working) cluster?

What's wrong with that?

Finally, I would like to achieve one of the below scenarios:

1. Creating the working production cluster with a separate network for corosync and storage migration*.
2. Just returning to two single nodes by rolling back all of my changes.

* and here comes the question: is this possible at all? In VMware I had to pay for the "storage vMotion" functionality, but in the end it worked. On PVE 6 I tried to move my VMs between storages, but I'm not sure whether the data was copied over the storage/corosync network. But OK, it's not critical and I won't die without it :)

You should have a separate "management" network, yes, that works. You can have a separate migration network, but normally people do not put it on the same link as management (corosync). The QDevice is an outlier; it can be anywhere, really. You can check under Datacenter -> Options which network is your migration network; that's the one it runs over.
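
(For the record, the migration network can also be pinned in /etc/pve/datacenter.cfg; something along these lines, with the CIDR being whatever your storage network uses:)

Code:
# /etc/pve/datacenter.cfg - route migration traffic over the storage network
migration: secure,network=10.10.10.0/24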

My last and biggest observation: I think your documentation lacks technical information. There are no steps for disaster and failure scenarios, no pointers to the logs where issues can be found, and no instructions on how to roll back changes or which folders should be backed up so that the configuration can be restored. This matters especially when you make a lot of radical changes in your engine.
Please take a cue here from the documentation of VMware, Oracle, Red Hat, etc.

I suspect this might be one of the reasons why there is a paid support option. Well, it is not available on a weekend; note also that it runs in a European timezone, and that the public holidays of a country foreign to yours (maybe) will differ from your own. They would also tell you there are partners you can look for (who match your needs) - but those have access to the same docs, so the only benefit is if they literally have more hands-on experience.

As for logs, you can check: journalctl -b -u pveproxy -u pvedaemon -u pve-cluster -u corosync

Run it on each node separately; posting the output might help us troubleshoot.
 
Before going any further, do you have backups of the VMs off these two nodes, in case we destroy everything?
I make the changes only on the second node, the one I am trying to join to the cluster. Yes, I have backups. The "master" node works fine.

Is your storage slower? This might not be an issue now, but once you start running anything HA it will come back to bite you.
I'll keep that in the back of my mind. For now, we can skip this part.

In which order were you doing this? Above you mentioned you did not get to join the second node to the "master" (the one first converted to a cluster, I presume). If that did not work, how did you go about adding a third vote member to a (non-working) cluster?
When I got the "waiting for quorum..." message, I tried to get quorum via the QDevice. But we can skip this for now too. I have a cunning plan to install PVE as a VM on the "master" node and use it as a vote. It will host no VMs and will have minimal RAM and disk.

What does pvecm status say on each node?
I have only one node, and I can't join the next one to an existing cluster.

Code:
Cluster information
-------------------
Name:             cluster_name
Config Version:   11
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Sat Nov 25 15:05:58 2023
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.5
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 ip.ip.ip.ip (local)

In which order were you doing this? Above you mentioned you did not get to join the second node to the "master" (the one first converted to a cluster, I presume). If that did not work, how did you go about adding a third vote member to a (non-working) cluster?
Skip this section for now.

As for logs, you can check: journalctl -b -u pveproxy -u pvedaemon -u pve-cluster -u corosync
I see one thing that puzzles me:

rx: Packet rejected from ip.of.joining.node:5405

Please read my second post here; there is more information. BTW, I have become a master at recovering a damaged PVE: I have done it almost 100 times today :)

[edit]
Whoa!!
I made it! I had to do it via the web UI and using the base network. Furthermore, I also added a QDevice vote, with success.
 
Whoa!!
I made it! I had to do it via the web UI and using the base network. Furthermore, I also added a QDevice vote, with success.
Glad to hear that, Piotr!

I think you could go and file a Bugzilla report with your observations on how badly documented "recovery" situations are.

Also, if you care to write a "tutorial" post here describing what you did, it might be helpful for someone in the future.

Note 1: It certainly should not have been necessary to do it via the UI (in fact, it is something I would be less likely to depend on, as the GUI issues REST API commands that essentially do the same as the CLI tools, but I find the CLI more verbose when something breaks, and one can also be sure it was executed locally).

Note 2: I had asked for /etc/corosync/corosync.conf and pvecm status at the same time because I noticed in your original post that you were trying to do something with editing the hosts file and all that, but you only get to see in corosync.conf what that network goes by (IPs or names, and they do not really "resolve" in the traditional sense you expect). It's irrelevant now; maybe it comes in handy in the future.

Note 3: The QDevice in a VM is completely unnecessary; what you want in such a case is just to set the votes to 2 for that one "master" node, which you can do by editing the corosync.conf file.
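
(A rough sketch of what I mean, with a placeholder node name and address; edit /etc/pve/corosync.conf rather than the copy under /etc/corosync, and bump config_version in the totem section so the change gets picked up:)

Code:
nodelist {
  node {
    name: pve1
    nodeid: 1
    # two votes so the single remaining node stays quorate on its own
    quorum_votes: 2
    ring0_addr: 192.168.1.11
  }
}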
 
Just throwing my 2 cents in here. In my rather limited experience with and use of cluster management, the process and tools that Proxmox uses are problematic at best. I love Proxmox so far, but the clustering portion of it is just... bad. It needs a lot of work to make it easier, more intuitive, and more fault tolerant, especially for the casual user.
 
Just throwing my 2 cents in here. In my rather limited experience with and use of cluster management, the process and tools that Proxmox uses are problematic at best. I love Proxmox so far, but the clustering portion of it is just... bad. It needs a lot of work to make it easier, more intuitive, and more fault tolerant, especially for the casual user.
Can you be more specific? What would you like to see changed, and what issues has it caused for you? It may deserve a feature request / bug report in Bugzilla; otherwise nothing will come out of your post whatsoever.
 
