[SOLVED] 'pvecm qdevice setup' fails

This is also happening to me. I've run the "solution" on all nodes, but it still fails.
Yes, I can ssh root@x.x.x.x to the different nodes. Same issues as SheridansNL.
I'm in the same boat as FleetwoodMac here.

Tried The Chain of solutions in this thread and others, I guess this project will be in my Dreams for now. I'm Never Going Back Again until I can find a new solution to try, I think I've tried what I found Everywhere. If anyone has any Second Hand News about getting their setup working please let me know, but please don't tell me any Little Lies.
 
I had to build my own version of qnetd for a platform that it hasn't been built for by others, and I had to disable TLS to get it to build. As such, I don't want or need certificates for this, and suspect a manual setup would therefore be best.

Do we have instructions available for such a manual setup?

EDIT:

I successfully set up the QDevice without certificates by editing /usr/share/perl5/PVE/CLI/pvecm.pm to skip the certificate management steps and by setting tls to off in the corosync config generation. The resulting change in my corosync.conf is in the quorum block:

Code:
quorum {
  device {
    model: net
    net {
      algorithm: ffsplit
      host: x.x.x.x
      tls: off
    }
    votes: 1
  }
  provider: corosync_votequorum
}

Everything appears to be working okay.
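
For anyone verifying a similar setup, the standard corosync tools should show the device from any cluster node; just a quick sketch:

Bash:
# local qdevice daemon status; shows whether it can reach the qnetd server
corosync-qdevice-tool -sv

# quorum summary, including the extra qdevice vote
corosync-quorumtool -s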
 

Can you share what you edited in the pvecm.pm file? We are currently having this issue as well.
 
That wasn't my own issue, and I'm a bit reluctant to share a patch file, since it's not an approach I'd recommend myself.

All the script really does is add that bit to the corosync config and restart the qdevice daemon on each node. If I need to do this again, I'll just do those two steps by hand.

Otherwise, it's fairly obvious which lines you'd want to skip.
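
If anyone wants to go that route, roughly these are the manual steps on a standard Debian/PVE setup (package names and paths may differ on unusual platforms like mine):

Bash:
# on the external witness host: the qnetd server (I built my own, but normally it's packaged)
apt install corosync-qnetd

# on every cluster node: the qdevice daemon
apt install corosync-qdevice

# on one node: edit /etc/pve/corosync.conf, bump config_version and add the
# quorum { device { ... } } block shown earlier in the thread

# on every cluster node: start and enable the qdevice daemon
systemctl enable --now corosync-qdevice

# once, on any node: trigger a cluster-wide corosync config reload
corosync-cfgtool -R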
 
You can apply the patch manually if it's urgent for you:

Create a file with the following content in /usr/share/perl5/PVE/CLI/
Code:
--- a/src/PVE/CLI/pvecm.pm
+++ b/src/PVE/CLI/pvecm.pm
@@ -18,6 +18,7 @@ use PVE::PTY;
 use PVE::API2::ClusterConfig;
 use PVE::Corosync;
 use PVE::Cluster::Setup;
+use PVE::SSHInfo;
 
 use base qw(PVE::CLIHandler);
 
@@ -173,9 +174,10 @@ __PACKAGE__->register_method ({
     run_command([@$scp_cmd, "root\@\[$qnetd_addr\]:$ca_export_file", "/etc/pve/$ca_export_base"]);
     $foreach_member->(sub {
         my ($node, $ip) = @_;
+        my $ssh_options = PVE::SSHInfo::ssh_info_to_ssh_opts ({ ip => $ip, name => $node });
         my $outsub = sub { print "\nnode '$node': " . shift };
         run_command(
-        [@$ssh_cmd, $ip, $qdevice_certutil, "-i", "-c", "/etc/pve/$ca_export_base"],
+        [@$ssh_cmd, @$ssh_options, $ip, $qdevice_certutil, "-i", "-c", "/etc/pve/$ca_export_base"],
         noerr => 1, outfunc => \&$outsub
         );
     });
@@ -206,9 +208,10 @@ __PACKAGE__->register_method ({
     run_command([@$scp_cmd, "$db_dir_node/$p12_file_base", "/etc/pve/"]);
     $foreach_member->(sub {
         my ($node, $ip) = @_;
+        my $ssh_options = PVE::SSHInfo::ssh_info_to_ssh_opts ({ ip => $ip, name => $node });
         my $outsub = sub { print "\nnode '$node': " . shift };
         run_command([
-            @$ssh_cmd, $ip, "$qdevice_certutil", "-m", "-c",
+            @$ssh_cmd, @$ssh_options, $ip, "$qdevice_certutil", "-m", "-c",
             "/etc/pve/$p12_file_base"], outfunc => \&$outsub
         );
     });
@@ -243,10 +246,11 @@ __PACKAGE__->register_method ({
 
     $foreach_member->(sub {
         my ($node, $ip) = @_;
+        my $ssh_options = PVE::SSHInfo::ssh_info_to_ssh_opts ({ ip => $ip, name => $node });
         my $outsub = sub { print "\nnode '$node': " . shift };
         print "\nINFO: start and enable corosync qdevice daemon on node '$node'...\n";
-        run_command([@$ssh_cmd, $ip, 'systemctl', 'start', 'corosync-qdevice'], outfunc => \&$outsub);
-        run_command([@$ssh_cmd, $ip, 'systemctl', 'enable', 'corosync-qdevice'], outfunc => \&$outsub);
+        run_command([@$ssh_cmd, @$ssh_options, $ip, 'systemctl', 'start', 'corosync-qdevice'], outfunc => \&$outsub);
+        run_command([@$ssh_cmd, @$ssh_options, $ip, 'systemctl', 'enable', 'corosync-qdevice'], outfunc => \&$outsub);
     });
 
     run_command(['corosync-cfgtool', '-R']); # do cluster wide config reload
@@ -291,8 +295,9 @@ __PACKAGE__->register_method ({
         # cleanup qdev state (cert storage)
         my $qdev_state_dir =  "/etc/corosync/qdevice";
         $foreach_member->(sub {
-        my (undef, $ip) = @_;
-        run_command([@$ssh_cmd, $ip, '--', 'rm', '-rf', $qdev_state_dir]);
+        my ($node, $ip) = @_;
+        my $ssh_options = PVE::SSHInfo::ssh_info_to_ssh_opts ({ ip => $ip, name => $node });
+        run_command([@$ssh_cmd, @$ssh_options, $ip, '--', 'rm', '-rf', $qdev_state_dir]);
         });
     };
 
@@ -300,9 +305,10 @@ __PACKAGE__->register_method ({
     die $@ if $@;
 
     $foreach_member->(sub {
-        my (undef, $ip) = @_;
-        run_command([@$ssh_cmd, $ip, 'systemctl', 'stop', 'corosync-qdevice']);
-        run_command([@$ssh_cmd, $ip, 'systemctl', 'disable', 'corosync-qdevice']);
+        my ($node, $ip) = @_;
+        my $ssh_options = PVE::SSHInfo::ssh_info_to_ssh_opts ({ ip => $ip, name => $node });
+        run_command([@$ssh_cmd, @$ssh_options, $ip, 'systemctl', 'stop', 'corosync-qdevice']);
+        run_command([@$ssh_cmd, @$ssh_options, $ip, 'systemctl', 'disable', 'corosync-qdevice']);
     });
 
     run_command(['corosync-cfgtool', '-R']);

Apply using this as an example: patch -b -p1 --dry-run < dfile.patch
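
For reference, a complete run might look roughly like this, assuming the diff was saved as dfile.patch in the same directory:

Bash:
cd /usr/share/perl5/PVE/CLI
patch -b -p1 --dry-run < dfile.patch   # patch will prompt "File to patch:"; enter pvecm.pm
patch -b -p1 < dfile.patch             # apply for real; -b keeps a pvecm.pm.orig backup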


 
That worked, thank you! I was able to patch the file and add the qdevice.

A few more details for others' peace of mind, since I'd never patched a file like this before:
  • I needed to install patch with apt update && apt install patch -y (doing this on one host will suffice).
  • Created a file containing the diff from https://lists.proxmox.com/pipermail/pve-devel/2024-May/063867.html, so it looked like what's in @bbgeek17's spoiler, sans the emoji :p.
  • Ran the patch command pointing to the absolute path of dfile.patch, like this: patch -b -p1 --dry-run < /usr/share/perl5/PVE/CLI/dfile.patch. However, that will complain about not being able to find the file to patch, which is a bit confusing:
    Bash:
    can't find file to patch at input line 3
    Perhaps you used the wrong -p or --strip option?
    The text leading up to this was:
    --------------------------
    |--- a/src/PVE/CLI/pvecm.pm
    |+++ b/src/PVE/CLI/pvecm.pm
    --------------------------
    File to patch:
    At that prompt, simply specify the path to pvecm.pm, which is located at /usr/share/perl5/PVE/CLI/pvecm.pm. Obviously, run it without the --dry-run flag when you're ready to patch for real.

Then I was able to successfully add my qdevice with pvecm qdevice setup <QDEVICE_IP> --force.
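
In case it helps, a quick way to confirm the qdevice is actually in play afterwards (commands are the standard corosync/PVE ones):

Bash:
# on any cluster node: membership information should now include a Qdevice entry
pvecm status

# on the qnetd host: list the clusters/nodes currently connected
corosync-qnetd-tool -l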
 

Worked for me. Thanks!
 
Following these steps always results in an error for me:

Code:
patch: **** malformed patch at line 4: use PVE::API2::ClusterConfig;

So far I haven't been able to figure out what's causing this...

Alternatively, is there an approximate date when the official patch will be released?
 
Have you tried setting it up manually? The script doesn't do that much and is pretty self-explanatory.
 

Did you copy the patch code from @bbgeek17's post with the embedded spoiler section (with the emoji in it), or did you create it from the diff linked in @ChrisSlashNull's post? If you copied the patch text with the emoji, that may be what's causing the error.

I created it from the link using @ChrisSlashNull's instructions; the patch applied OK and I could finally add my qdevice.

This is the diff code required, not using a 'spoiler' embed, which seems to create an emoji.

Code:
--- a/src/PVE/CLI/pvecm.pm
+++ b/src/PVE/CLI/pvecm.pm
@@ -18,6 +18,7 @@ use PVE::PTY;
 use PVE::API2::ClusterConfig;
 use PVE::Corosync;
 use PVE::Cluster::Setup;
+use PVE::SSHInfo;
 
 use base qw(PVE::CLIHandler);
 
@@ -173,9 +174,10 @@ __PACKAGE__->register_method ({
     run_command([@$scp_cmd, "root\@\[$qnetd_addr\]:$ca_export_file", "/etc/pve/$ca_export_base"]);
     $foreach_member->(sub {
         my ($node, $ip) = @_;
+        my $ssh_options = PVE::SSHInfo::ssh_info_to_ssh_opts ({ ip => $ip, name => $node });
         my $outsub = sub { print "\nnode '$node': " . shift };
         run_command(
-        [@$ssh_cmd, $ip, $qdevice_certutil, "-i", "-c", "/etc/pve/$ca_export_base"],
+        [@$ssh_cmd, @$ssh_options, $ip, $qdevice_certutil, "-i", "-c", "/etc/pve/$ca_export_base"],
         noerr => 1, outfunc => \&$outsub
         );
     });
@@ -206,9 +208,10 @@ __PACKAGE__->register_method ({
     run_command([@$scp_cmd, "$db_dir_node/$p12_file_base", "/etc/pve/"]);
     $foreach_member->(sub {
         my ($node, $ip) = @_;
+        my $ssh_options = PVE::SSHInfo::ssh_info_to_ssh_opts ({ ip => $ip, name => $node });
         my $outsub = sub { print "\nnode '$node': " . shift };
         run_command([
-            @$ssh_cmd, $ip, "$qdevice_certutil", "-m", "-c",
+            @$ssh_cmd, @$ssh_options, $ip, "$qdevice_certutil", "-m", "-c",
             "/etc/pve/$p12_file_base"], outfunc => \&$outsub
         );
     });
@@ -243,10 +246,11 @@ __PACKAGE__->register_method ({
 
     $foreach_member->(sub {
         my ($node, $ip) = @_;
+        my $ssh_options = PVE::SSHInfo::ssh_info_to_ssh_opts ({ ip => $ip, name => $node });
         my $outsub = sub { print "\nnode '$node': " . shift };
         print "\nINFO: start and enable corosync qdevice daemon on node '$node'...\n";
-        run_command([@$ssh_cmd, $ip, 'systemctl', 'start', 'corosync-qdevice'], outfunc => \&$outsub);
-        run_command([@$ssh_cmd, $ip, 'systemctl', 'enable', 'corosync-qdevice'], outfunc => \&$outsub);
+        run_command([@$ssh_cmd, @$ssh_options, $ip, 'systemctl', 'start', 'corosync-qdevice'], outfunc => \&$outsub);
+        run_command([@$ssh_cmd, @$ssh_options, $ip, 'systemctl', 'enable', 'corosync-qdevice'], outfunc => \&$outsub);
     });
 
     run_command(['corosync-cfgtool', '-R']); # do cluster wide config reload
@@ -291,8 +295,9 @@ __PACKAGE__->register_method ({
         # cleanup qdev state (cert storage)
         my $qdev_state_dir =  "/etc/corosync/qdevice";
         $foreach_member->(sub {
-        my (undef, $ip) = @_;
-        run_command([@$ssh_cmd, $ip, '--', 'rm', '-rf', $qdev_state_dir]);
+        my ($node, $ip) = @_;
+        my $ssh_options = PVE::SSHInfo::ssh_info_to_ssh_opts ({ ip => $ip, name => $node });
+        run_command([@$ssh_cmd, @$ssh_options, $ip, '--', 'rm', '-rf', $qdev_state_dir]);
         });
     };
 
@@ -300,9 +305,10 @@ __PACKAGE__->register_method ({
     die $@ if $@;
 
     $foreach_member->(sub {
-        my (undef, $ip) = @_;
-        run_command([@$ssh_cmd, $ip, 'systemctl', 'stop', 'corosync-qdevice']);
-        run_command([@$ssh_cmd, $ip, 'systemctl', 'disable', 'corosync-qdevice']);
+        my ($node, $ip) = @_;
+        my $ssh_options = PVE::SSHInfo::ssh_info_to_ssh_opts ({ ip => $ip, name => $node });
+        run_command([@$ssh_cmd, @$ssh_options, $ip, 'systemctl', 'stop', 'corosync-qdevice']);
+        run_command([@$ssh_cmd, @$ssh_options, $ip, 'systemctl', 'disable', 'corosync-qdevice']);
     });
 
     run_command(['corosync-cfgtool', '-R']);
 
For the record, it was not my intention to sabotage the patch by inserting an emoji :) The forum software did it!

I've edited the previous post to get rid of the smiley face.


 
That emoji - well, I took care of that for sure ;) I could have sworn I used the original from the source. I tried it again and copied from the forum, which failed with a different error; eventually I went to the source link and copied it from there, which worked.

Now I need to figure out if the witness node should show a vote, but I guess that's a different story.

Thanks everyone!
 
Sometimes I run into issues with hidden characters when I paste formatted text into the console. You can quickly remove most of them with sed, for example: sed -i -e 's/\r$//' -e 's/[^[:print:]\t]//g' /path/to/your/script.sh
This will:
  1. Remove carriage return characters (\r) at line ends.
  2. Remove any remaining non-printable control characters except tab (\t); newlines are preserved automatically, since sed processes input line by line.
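If you just want to check whether a pasted file picked up anything odd before patching it, something like this works as well (the file name is an example):

Bash:
cat -A dfile.patch | head -n 20       # visualizes control characters (e.g. carriage returns as ^M)
grep -nP '[^\x00-\x7F]' dfile.patch   # lists lines containing non-ASCII bytes, such as emoji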
 
Thank you.

And for the record, I did realise this wasn't your error but the forum software's. I was just so pleased to have found a working solution and wanted to make sure others could benefit from it. :)

The patch worked for me and I've finally got an external Qdevice node with a vote.
 
As always, it will be moved along to the enterprise repository if no issues with the changes pop up, but I don't know when.

If you require the fix now, you can temporarily enable the no-subscription repository (e.g. in the node's Updates > Repositories UI), run apt update, then apt install pve-cluster libpve-cluster-api-perl libpve-cluster-perl, and finally remove the repository again and run apt update once more.
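
On the CLI that would look roughly like this, assuming PVE 8 on Debian Bookworm (the repository line differs for other releases):

Bash:
# temporarily add the no-subscription repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list
apt update
apt install pve-cluster libpve-cluster-api-perl libpve-cluster-perl

# remove the repository again and refresh the package index
rm /etc/apt/sources.list.d/pve-no-subscription.list
apt update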
 
