The underlying requirements are the same as for any other cluster member: fast communication and low latency.
~# pvecm status
Cluster information
-------------------
Name:             pn2
Config Version:   5
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Thu Oct 2 14:23:55 2025
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1.183
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      5
Quorum:           3
Flags:            Quorate Qdevice

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
0x00000001          1    A,V,NMW 10.2.192.21 (local)
0x00000002          1    A,V,NMW 10.2.192.22
0x00000003          1    A,V,NMW 10.2.192.24
0x00000004          1    A,V,NMW 10.2.192.25
0x00000000          1            Qdevice
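As a cross-check of the quorum view above, the qdevice client on any node can report its own state (this assumes the standard corosync-qdevice setup that pvecm qdevice setup installs):

~# corosync-qdevice-tool -s -v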
~# tc qdisc add dev ens18 root netem delay 10ms
To confirm the delay is active, just ping the QDevice from a cluster node:
64 bytes from 10.2.140.141: icmp_seq=85 ttl=64 time=0.182 ms
64 bytes from 10.2.140.141: icmp_seq=86 ttl=64 time=0.251 ms
64 bytes from 10.2.140.141: icmp_seq=87 ttl=64 time=0.310 ms
64 bytes from 10.2.140.141: icmp_seq=88 ttl=64 time=10.3 ms
64 bytes from 10.2.140.141: icmp_seq=89 ttl=64 time=10.4 ms
64 bytes from 10.2.140.141: icmp_seq=90 ttl=64 time=10.3 ms
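The netem parameters can also be read back directly, to make sure the qdisc really is in place on the interface:

~# tc qdisc show dev ens18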
~# tc qdisc del dev ens18 root; tc qdisc add dev ens18 root netem delay 1000ms
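(Deleting and re-adding is just habit on my part; if I'm not mistaken, an existing netem qdisc can also be adjusted in place:)

~# tc qdisc change dev ens18 root netem delay 1000ms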
~# date; pvecm status | grep -A 5 Vote
Thu Oct 2 02:34:19 PM CEST 2025
Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      5
Quorum:           3
--
    Nodeid      Votes    Qdevice Name
0x00000001          1    A,V,NMW 10.2.192.21 (local)
0x00000002          1    A,V,NMW 10.2.192.22
0x00000003          1    A,V,NMW 10.2.192.24
0x00000004          1    A,V,NMW 10.2.192.25
0x00000000          1            Qdevice
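So a full second of added delay does not cost the QDevice its vote. My reading of the corosync-qdevice defaults is that the vote is only withdrawn once the device stops answering within quorum.device.timeout, which defaults to 10000 ms, and that would line up with the 10-second test below. For reference, this is roughly where that knob lives in corosync.conf; the excerpt is illustrative (assumed defaults, host taken from the ping target above), not this cluster's actual file:

quorum {
  provider: corosync_votequorum
  device {
    model: net
    votes: 1
    timeout: 10000    # milliseconds; assumed default, not explicitly set here
    net {
      host: 10.2.140.141
      algorithm: ffsplit
      tls: on
    }
  }
}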
~# tc qdisc del dev ens18 root; tc qdisc add dev ens18 root netem delay 10000ms
64 bytes from 10.2.140.141: icmp_seq=29 ttl=64 time=10001 ms
64 bytes from 10.2.140.141: icmp_seq=30 ttl=64 time=10001 ms
From 10.2.192.21 icmp_seq=38 Destination Host Unreachable
From 10.2.192.21 icmp_seq=39 Destination Host Unreachable
From 10.2.192.21 icmp_seq=40 Destination Host Unreachable
64 bytes from 10.2.140.141: icmp_seq=31 ttl=64 time=10001 ms
64 bytes from 10.2.140.141: icmp_seq=32 ttl=64 time=10001 ms
64 bytes from 10.2.140.141: icmp_seq=33 ttl=64 time=10000 ms
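While the 10-second delay is in effect, the qdevice client logs on a node are also worth watching; the connection/heartbeat errors should show up there (standard systemd unit name assumed):

~# journalctl -u corosync-qdevice -f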
~# date; pvecm status | grep -A 5 Vote
Thu Oct 2 02:41:20 PM CEST 2025
Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      4
Quorum:           3
--
    Nodeid      Votes    Qdevice Name
0x00000001          1    A,NV,NMW 10.2.192.21 (local)
0x00000002          1    A,NV,NMW 10.2.192.22
0x00000003          1    A,NV,NMW 10.2.192.24
0x00000004          1    A,NV,NMW 10.2.192.25
0x00000000          0             Qdevice (votes 1)
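Note the flags flipping from A,V,NMW to A,NV,NMW: the QDevice is still considered alive (A) but no longer casts a vote (NV). The cluster stays quorate because the votequorum arithmetic still works out:

expected votes: 5 (4 nodes + 1 QDevice vote)
quorum: 5/2 + 1 = 3 (integer division)
total votes: 4 (QDevice vote withdrawn), and 4 >= 3, so still quorate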
Stepping back to 1000ms brings back the Quorum Device's vote without any additional/manual action.
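Once you are done testing, the netem qdisc can simply be removed to restore normal latency on that interface:

~# tc qdisc del dev ens18 root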
I was asking the remote question for the QDevice to be something similar to how a 2-node ROBO vSAN cluster works with VMware. I am also not sure if a QDevice can be shared with another cluster, as it seems the QDevice needs to be a one-to-one ratio. Right now it would seem that best practice is for the QDevice to be local and deployed on something small like a Raspberry Pi.

I think you may be testing a somewhat unrelated scenario. The purpose of a qdevice vote is to break ties. It could be slow or unresponsive 364 days a year, but on the one day when your 4-node cluster splits in half (e.g. 2 nodes go down hard), that's when the qdevice matters.
The OP’s original question was whether the qdevice/vote can be remote. Technically, yes, it can. But the more important question is "why" you would do that. If all nodes are in the same datacenter, there’s no benefit to placing the qdevice remotely.
If your cluster is split across two rooms or datacenters, then the qdevice should be placed in a third independent location. Otherwise, it won’t provide effective tie-breaking.
As for sharing a QDevice with another cluster: it is designed to support multiple clusters and is almost configuration and state free.
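For what it's worth, pointing a second cluster at the same qnetd host should just be the normal setup run from that cluster, and the qnetd side can then list everything connected to it. Commands assume the standard corosync-qnetd packaging; I have not re-tested this exact flow:

~# pvecm qdevice setup 10.2.140.141
~# corosync-qnetd-tool -l

The first command runs on a node of the second cluster; the second runs on the qnetd host and lists the connected clusters.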