Qdevice

cpzengel

Hi,

I'd like to add a QDevice to my two-machine cluster.
I am failing to install it on another standalone PVE v6 host because the binary names have changed.
So I tried it in a Debian 9 container, but that is still Corosync v2.

What's the best practice? Any documentation?

Cheers

Chriz
 
I had missed a package on one node.

Is this OK so far?

Code:
Quorum information
------------------
Date:             Wed Nov  6 18:55:02 2019
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1.4e6dac
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      2
Quorum:           2 
Flags:            Quorate Qdevice

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
0x00000001          1   A,NV,NMW 192.168.50.221 (local)
0x00000002          1         NR 192.168.50.223
0x00000000          0            Qdevice (votes 1)


Code:
Quorum information
------------------
Date:             Wed Nov  6 18:55:13 2019
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000002
Ring ID:          1/5139884
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      2
Quorum:           2 
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
0x00000001          1   A,NV,NMW 192.168.50.221
0x00000002          1         NR 192.168.50.223 (local)
0x00000000          0            Qdevice (votes 1)
root@pve2:~#
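For reading those outputs: in the Qdevice column, A/NA means the qdevice daemon is alive/not alive, V/NV means it currently casts a vote or not, MW/NMW is master-wins on/off, and NR means no qdevice is registered on that node. So pve1 sees the qdevice but gets no vote from it yet, and pve2 has no qdevice registered at all, which fits a missing corosync-qdevice package or a daemon that is not running there. A rough way to check (a sketch, not output from this cluster):

Code:
# on each cluster node
systemctl status corosync-qdevice
corosync-qdevice-tool -s        # connection state to the qnetd server and vote info

# on the external qnetd host
corosync-qnetd-tool -l          # list the clusters/nodes currently connected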
 
Code:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
        (if you think this is a mistake, you may want to use -f option)


INFO: initializing qnetd server
Creating /etc/corosync/qnetd/nssdb
Creating new key and cert db
password file contains no data
Creating new noise file /etc/corosync/qnetd/nssdb/noise.txt
Creating new CA


Generating key.  This may take a few moments...

Is this a CA certificate [y/N]?
Enter the path length constraint, enter to skip [<0 for unlimited path]: > Is this a critical extension [y/N]?


Generating key.  This may take a few moments...

Notice: Trust flag u is set automatically if the private key is present.
QNetd CA certificate is exported as /etc/corosync/qnetd/nssdb/qnetd-cacert.crt

INFO: copying CA cert and initializing on all nodes

node 'pve1': Creating /etc/corosync/qdevice/net/nssdb
password file contains no data
node 'pve1': Creating new key and cert db
node 'pve1': Creating new noise file /etc/corosync/qdevice/net/nssdb/noise.txt
node 'pve1': Importing CA
node 'pve2': Creating /etc/corosync/qdevice/net/nssdb
password file contains no data
node 'pve2': Creating new key and cert db
node 'pve2': Creating new noise file /etc/corosync/qdevice/net/nssdb/noise.txt
node 'pve2': Importing CA
INFO: generating cert request
Creating new certificate request


Generating key.  This may take a few moments...

Certificate request stored in /etc/corosync/qdevice/net/nssdb/qdevice-net-node.crq

INFO: copying exported cert request to qnetd server

INFO: sign and export cluster cert
Signing cluster certificate
Certificate stored in /etc/corosync/qnetd/nssdb/cluster-sysops-rz.crt

INFO: copy exported CRT

INFO: import certificate
Importing signed cluster certificate
Notice: Trust flag u is set automatically if the private key is present.
pk12util: PKCS12 EXPORT SUCCESSFUL
Certificate stored in /etc/corosync/qdevice/net/nssdb/qdevice-net-node.p12

INFO: copy and import pk12 cert to all nodes

node 'pve1': Importing cluster certificate and key
node 'pve1': pk12util: PKCS12 IMPORT SUCCESSFUL
node 'pve2': Importing cluster certificate and key
node 'pve2': pk12util: PKCS12 IMPORT SUCCESSFUL
INFO: add QDevice to cluster configuration

INFO: start and enable corosync qdevice daemon on node 'pve1'...
Synchronizing state of corosync-qdevice.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable corosync-qdevice
update-rc.d: error: corosync-qdevice Default-Start contains no runlevels, aborting.
command 'ssh -o 'BatchMode=yes' -lroot 192.168.50.221 systemctl enable corosync-qdevice' failed: exit code 1
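The update-rc.d error means the SysV init script shipped with that corosync-qdevice package has an empty Default-Start: LSB header, so the systemd SysV compatibility wrapper aborts while enabling the unit. A hedged workaround sketch (assuming the script lives at /etc/init.d/corosync-qdevice; adjust to what your package actually ships) would be to give the header sane runlevels and retry:

Code:
# fill in the empty LSB runlevel headers, then enable/start the unit again
sed -i 's|^# Default-Start:.*|# Default-Start:     2 3 4 5|' /etc/init.d/corosync-qdevice
sed -i 's|^# Default-Stop:.*|# Default-Stop:      0 1 6|' /etc/init.d/corosync-qdevice
systemctl daemon-reload
systemctl enable --now corosync-qdevice
systemctl status corosync-qdevice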
 
Code:
-- The process' exit code is 'exited' and its exit status is 1.
Nov 06 19:04:58 pve9 systemd[1]: corosync-qnetd.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit corosync-qnetd.service has entered the 'failed' state with result 'exit-code'.
Nov 06 19:04:58 pve9 systemd[1]: Failed to start Corosync Qdevice Network daemon.
-- Subject: A start job for unit corosync-qnetd.service has failed
-- Defined-By: systemd
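The excerpt only shows that corosync-qnetd exited with status 1, not why. A hedged way to dig further on the qnetd host (pve9):

Code:
journalctl -xeu corosync-qnetd --no-pager   # full error output around the failed start
ls -l /etc/corosync/qnetd/nssdb             # was the certificate DB created by the setup step?
ss -tlnp | grep 5403                        # default qnetd port; is anything already listening?
corosync-qnetd -f -d                        # run in the foreground with debug output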
 
So I tried it in a Debian 9 container, but that is still Corosync v2.

Either use our stretch repo to pull our corosync, or use a Debian 10 CT.
The Debian 9 corosync packages have some issues regarding initscripts, IIRC.
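For reference, with a Debian 10 CT (or any other always-on Debian 10 host) as the external vote host, the setup usually boils down to something like this (a sketch; the qnetd host address is a placeholder, and root SSH access from the node running the setup to the qnetd host is required):

Code:
# on the external Debian 10 host/CT (the qnetd server)
apt update && apt install corosync-qnetd

# on every Proxmox VE cluster node
apt install corosync-qdevice

# on one cluster node, pointing the cluster at the external host
pvecm qdevice setup <IP-of-qnetd-host>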
 
It's running on a standalone PVE 6 host (not part of the cluster), so do I have to set up my own start script?

Hmm, normally not. Now, after some thinking, I remember the external qdevice was split out for Corosync 3, and we currently do not package it ourselves (as it's a separate and optional thing), so you'll get the one from Debian. I need to re-check that package tomorrow (too late today, sorry :) )
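A quick, hedged way to see where those packages come from on a given host:

Code:
apt policy corosync-qdevice corosync-qnetd   # which repository provides them
dpkg -l | grep corosync                      # which corosync packages are installed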
 
One node is fine, the other shows that mess. Any idea?

Code:
Quorum information
------------------
Date:             Wed Nov  6 23:22:41 2019
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000002
Ring ID:          1/5139916
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
0x00000001          1    A,V,NMW 192.168.50.221
0x00000002          1         NR 192.168.50.223 (local)
0x00000000          0            Qdevice (votes 1)
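The NR for pve2 again suggests that the local corosync-qdevice daemon is not registered with corosync on that node. A hedged first step would be restarting it there and re-checking:

Code:
# on the node that shows "NR" (pve2)
systemctl restart corosync-qdevice
corosync-qdevice-tool -s
pvecm status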
 
Hi,
Does a QDevice work fine with a 6-node Proxmox VE cluster? All nodes are on v7.2.

Or is the point below still a limitation from the manual? Or is it only true for clusters with an odd node count?

  • If the QNet daemon itself fails, no other node may fail or the cluster immediately loses quorum. For example, in a cluster with 15 nodes, 7 could fail before the cluster becomes inquorate. But, if a QDevice is configured here and it itself fails, no single node of the 15 may fail. The QDevice acts almost as a single point of failure in this case.
 
Does a QDevice work fine with a 6-node Proxmox VE cluster? All nodes are on v7.2.
Yes.
Or is the point below still a limitation from the manual? Or is it only true for clusters with an odd node count?
The "Supported Setups" section starts with a statement that is, in my opinion, pretty clear:
We support QDevices for clusters with an even number of nodes and recommend it for 2 node clusters, if they should provide higher availability. For clusters with an odd node count, we currently discourage the use of QDevices.
That is followed by reasoning and context for why we discourage using it on clusters with an odd node count.
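To put rough numbers on that reasoning (a sketch, assuming the vote behavior described in the docs: one extra vote for even-sized clusters, N-1 votes for odd-sized clusters):

Code:
 6 nodes, no QDevice:   expected votes 6,  quorum 4  -> up to 2 nodes may fail
 6 nodes + QDevice:     +1 vote, expected votes 7, quorum 4 -> up to 3 nodes may fail

15 nodes, no QDevice:   expected votes 15, quorum 8  -> up to 7 nodes may fail
15 nodes + QDevice:     QDevice gets N-1 = 14 votes, expected votes 29, quorum 15
    QDevice alive:      all nodes but one may fail (1 node + 14 QDevice votes = 15)
    QDevice failed:     all 15 node votes are needed -> no single node may fail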
 
Will it allow 3 nodes to be down?
So will a QDevice allow one datacenter to go offline and the cluster still survive?
As also described in the docs: yes, as long as the QDevice is hosted somewhere independent of the Proxmox VE cluster it gives a vote to (see the vote math sketched after the list below).

But note the following QDevice-independent points that are very important:
  • the storage that the virtual guests are using needs to be able to handle this too.
    Ceph, for example, has its own cluster quorum, provided by the Ceph monitors, not by Proxmox VE. So there you would need something like a monitor on each datacenter site and an external monitor for tie-breaking.
  • you need to have enough memory and CPU resources to keep your whole workload running on only three nodes.
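Worked out for the split-datacenter case above (6 nodes, 3 per datacenter, QDevice at an independent third location):

Code:
6 node votes + 1 QDevice vote = 7 expected votes, quorum 4
one datacenter (3 nodes) offline:   3 node votes + 1 QDevice vote = 4  -> still quorate
QDevice offline as well:            3 node votes < 4                   -> quorum lost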
 
