What are the best practices for using Windows as a Q-Device arbiter in a two-node PVE cluster?

Oct 14, 2025
Hi everyone, I currently have two PVE nodes and want to set up HA. I know that in a two-node setup, losing one node means losing quorum, which stops the services, so I need a third vote.

My plan is to install a simplified PVE node as a VM on my Windows PC (running 24/7) using Hyper-V, and join it to the existing cluster as an arbiter. Is this "virtual PVE node" approach feasible, or is there a more recommended best practice?
 
As noted above - a QDevice is just an external vote provided by that server. You can use the corosync-qnetd package to set this up on the external server.

For details on implementation - see the official Proxmox notes here.
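For reference, the whole setup boils down to a few commands (a sketch based on the Proxmox docs; the arbiter address 192.0.2.10 is a placeholder):

```shell
# On the external arbiter host (a minimal Debian install is enough):
apt install corosync-qnetd

# On ALL cluster nodes:
apt install corosync-qdevice

# On ONE cluster node (requires root SSH access to the arbiter
# so the certificates can be exchanged automatically):
pvecm qdevice setup 192.0.2.10
```

Afterwards `pvecm status` should show the extra vote in the membership information.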

Good luck.
 
VM on my Windows PC (running 24/7) using Hyper-V, and join it to the existing cluster as an arbiter. Is this "virtual PVE node" approach feasible,
Probably. As @SteveITS already said: a plain and minimal instance of Debian is sufficient.

or is there a more recommended best practice?
Well... there may be different opinions about this one. Personally, I do not like Windows systems being a required part of my infrastructure.

Two options I would prefer: if you have an independent NAS running 24/7 that can run containers, run the QDev there. If you have a Raspberry Pi or any other single-board computer with low energy costs and the ability to run Debian, choose that one :-)

The QDev runs fine with very little CPU power, and its network link does not require the low latency of the main corosync rings.
 
Thank you all for the valuable advice and suggestions.


Two options I would prefer: if you have an independent NAS running 24/7 that can run containers, run the QDev there. If you have a Raspberry Pi or any other single-board computer with low energy costs and the ability to run Debian, choose that one :-)

Regarding the environment I am currently configuring, I don't have a NAS capable of running containers, nor do I have a spare single-board computer (like a Raspberry Pi) available to install Debian. This is why I was considering using my existing Windows Server 2019, which is already running 24/7, to host the arbiter.


As noted above - a QDevice is just an external vote provided by that server. You can use the corosync-qnetd package to set this up on the external server.

For details on implementation - see the official Proxmox notes here.

Good luck.

I completely agree that installing a minimal Debian instance with corosync-qnetd is the official and most lightweight approach. However, I’ve noticed a specific behavior with the Q-device: it seems that when using qnetd, the status of the arbiter is not visible in the Proxmox WebUI. To monitor its health or current voting status, I have to manually run pvecm status in the shell.

In contrast, if I set up a "mini" PVE node (even as a VM) and join it to the cluster, its status is immediately visible and manageable directly from the cluster view in the WebUI.

Given that my hardware resources are not particularly tight and the local network speed is fast enough, I am leaning towards using a virtualized PVE node instead of a standard Q-device for better visibility. Are there any hidden risks or potential downsides to using a full (but minimal) PVE node as an arbiter in this manner that I should be aware of?

Looking forward to your thoughts!
 
I completely agree that installing a minimal Debian instance with corosync-qnetd is the official and most lightweight approach. However, I’ve noticed a specific behavior with the Q-device: it seems that when using qnetd, the status of the arbiter is not visible in the Proxmox WebUI. To monitor its health or current voting status, I have to manually run pvecm status in the shell.
Monitoring should be done automatically in the background from another system so that you have proper monitoring and alerting. This can be automated via pvecm if you want, but don't rely on manually checking things.
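As a sketch of what such background monitoring could look like (not an official tool; the alert hook is a placeholder), a cron job can grep the `Quorate:` line out of `pvecm status`:

```shell
#!/bin/sh
# Sketch of a cron-driven quorum check. Succeeds when the
# pvecm-status-style text on stdin reports "Quorate: Yes".
check_quorate() {
    grep -q '^Quorate:[[:space:]]*Yes'
}

# On a real node you would pipe the live output, e.g.:
#   pvecm status | check_quorate || <your alerting hook here>
# Demonstrated here against a captured sample line:
sample='Quorate:          Yes'
if printf '%s\n' "$sample" | check_quorate; then
    echo "cluster is quorate"
else
    echo "cluster NOT quorate" >&2
fi
```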

In contrast, if I set up a "mini" PVE node (even as a VM) and join it to the cluster, its status is immediately visible and manageable directly from the cluster view in the WebUI.

Given that my hardware resources are not particularly tight and the local network speed is fast enough, I am leaning towards using a virtualized PVE node instead of a standard Q-device for better visibility. Are there any hidden risks or potential downsides to using a full (but minimal) PVE node as an arbiter in this manner that I should be aware of?
That should work, too. You should disable any usable storage on that virtualized node so that nothing gets migrated to it by accident.
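One concrete way to do that (storage and node names below are hypothetical, adjust to your cluster) is to pin each usable storage to the two real nodes, so the arbiter node never offers it as a migration target:

```shell
# Restrict the shared SAN-backed storage to the two real nodes;
# the virtualized arbiter node will then not list it at all.
pvesm set san-lvm --nodes pve1,pve2

# The same works for any other storage you actively use:
pvesm set local-zfs --nodes pve1,pve2
```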

What is your plan for storing the VMs? I assume ZFS with replication, as you don't have a NAS for storing.
 
Are there any hidden risks or potential downsides to using a full (but minimal) PVE node as an arbiter in this manner that I should be aware of?
Just create that scenario, it will probably work. Please report back your experience here :)

What I would do (if I were forced to use a VM on Windows for this) is make sure I give it a separate, independent NIC. I want the PVE node to have access to all VLANs/networks of the cluster while avoiding this for the Windows host, and I do not want the VM to interfere with the Windows settings or vice versa.

Good luck. And have fun! (No irony!)
 
I still think that a QDevice is the way to go. Setting up another PVE VM (in Windows) just for the sake of GUI visibility seems to me completely unnecessary and wasteful. Add to that, the inter-cluster networking of a full member is more complex and generally heavier than a simple QDevice, which runs over plain TCP/IP. If you really want a third node, do exactly that: bare metal.

In general, I do agree, that it would be nice to have some GUI-visibility of a QDevice.
Maybe add your opinion to this forum thread/bug/feature request for such a feature.
 
If you install PVE on the third node/VM then why not join it to the cluster instead of making it a Qdevice? It just seems like a lot of unnecessary stuff that you won't be using, if you aren't joining it (thus no visibility), and if you are joining it then it doesn't need to also be a Qdevice.
 
What is your plan for storing the VMs? I assume ZFS with replication, as you don't have a NAS for storing.

Actually, I am using a SAN with Fibre Channel (FC) connected to the servers.

So while I do have shared storage, I don't have a NAS capable of hosting Containers (LXC) to run a QDevice directly. It's a bit of a pity, which is why I'm considering the virtualized PVE node approach.
 
I still think that a QDevice is the way to go. Setting up another PVE VM (in Windows) just for the sake of GUI visibility seems to me completely unnecessary and wasteful. Add to that, the inter-cluster networking of a full member is more complex and generally heavier than a simple QDevice, which runs over plain TCP/IP. If you really want a third node, do exactly that: bare metal.
Thanks for the explanation. Regarding the suggestion for a Bare Metal node, I completely agree that it would be the ideal setup if the hardware were available.

However, the current environmental constraint is that I only have two physical nodes dedicated to the PVE cluster (connected via FC to a shared SAN) and one Windows machine that is already running other services. That's why I was wondering if running a PVE VM on that Windows machine would actually work and what problems I might run into.

I believe everyone’s suggestions are very reasonable and provide great perspectives. I really appreciate the active discussion and all your helpful advice!
 
If you install PVE on the third node/VM then why not join it to the cluster instead of making it a Qdevice? It just seems like a lot of unnecessary stuff that you won't be using, if you aren't joining it (thus no visibility), and if you are joining it then it doesn't need to also be a Qdevice.
I would like to clarify my previous post as I realize I may have caused some confusion regarding my plan.

If I proceed with setting up a "mini PVE" node (as a VM on my Windows machine), my intention is indeed to join it directly to the existing two-node cluster as a full member. This way, it provides the third vote needed for quorum to ensure HA mechanisms work correctly.

Under this setup, I will not be installing corosync-qnetd or configuring it as a Q-Device; it will simply be a standard (though lightweight) member of the cluster.

This is my current plan. Are there any potential issues or "gotchas" with this approach that I might be overlooking? For instance, are there specific concerns regarding running a cluster member as a VM on a non-PVE hypervisor (Hyper-V) in the long run?

Thanks for your advice!
 
What I would do (if I were forced to use a VM on Windows for this) is make sure I give it a separate, independent NIC. I want the PVE node to have access to all VLANs/networks of the cluster while avoiding this for the Windows host, and I do not want the VM to interfere with the Windows settings or vice versa.
I've also been considering the possibility of using WSL (Windows Subsystem for Linux) directly as the Q-Device arbiter. Since I'm not particularly familiar with the nuances of WSL, I'm curious if its networking would face the same challenge you mentioned—effectively sharing the same NIC/network stack with the Windows host?

Initially, I was mainly thinking about how to ensure WSL starts automatically upon Windows boot, but it seems that managing the networking and isolation might be the more "tricky" hurdle to overcome here. Does WSL's default virtual networking create too much interference for a stable Corosync connection?
 
This is my current plan. Are there any potential issues or "gotchas" with this approach that I might be overlooking? For instance, are there specific concerns regarding running a cluster member as a VM on a non-PVE hypervisor (Hyper-V) in the long run?
One obvious potential issue are the network requirements of corosync:
The Proxmox VE cluster stack requires a reliable network with latencies under 5 milliseconds (LAN performance) between all nodes to operate stably. While on setups with a small node count a network with higher latencies may work, this is not guaranteed and gets rather unlikely with more than three nodes and latencies above around 10 ms.
The network should not be used heavily by other members; while corosync does not use much bandwidth, it is sensitive to latency jitter. Ideally, corosync runs on its own physically separated network. In particular, do not use a shared network for corosync and storage (except as a potential low-priority fallback in a redundant configuration).
Before setting up a cluster, it is good practice to check if the network is fit for that purpose. To ensure that the nodes can connect to each other on the cluster network, you can test the connectivity between them with the ping tool.
If the Proxmox VE firewall is enabled, ACCEPT rules for corosync will automatically be generated - no manual action is required.

https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_cluster_network
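A quick way to sanity-check the latency before joining the VM as a full member (node addresses are placeholders; corosync wants the average round trip well under 5 ms):

```shell
# Ten quiet pings per node; the summary line is min/avg/max/mdev,
# so the average RTT is the fifth '/'-separated field.
for node in 192.0.2.11 192.0.2.12; do
    avg=$(ping -c 10 -q "$node" | awk -F'/' 'END { print $5 }')
    echo "$node avg rtt: ${avg} ms"
done
```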

So your Windows server would need a dedicated network card that is used only by your PVE VM for the cluster communication.

In contrast the qdevice doesn't need it:
Unlike corosync itself, a QDevice connects to the cluster over TCP/IP. The daemon can also run outside the LAN of the cluster and isn't limited to the low-latency requirements of corosync.

https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support

I don't see any benefit in adding a VM under Hyper-V as a third node to your cluster; go with a lightweight qdevice VM.
It would also have the benefit that you won't need much RAM for the qdevice. I currently host the qdevice for my lab cluster on a vm on my NAS. That VM has around 512 MB RAM and a rather small system disk. A PVE node needs at least 2 GB RAM: https://www.proxmox.com/en/products/proxmox-virtual-environment/requirements
 
... using WSL (Windows Subsystem for Linux) directly as the Q-Device arbiter. Since I'm not particularly familiar with the nuances of WSL, I'm curious if its networking would face the same challenge you mentioned
No. My hint regarding a separate NIC was for installing a PVE cluster member. A pure QDev has much simpler requirements ;-)
 
I don't see any benefit in adding a VM under Hyper-V as a third node to your cluster; go with a lightweight qdevice VM.
It would also have the benefit that you won't need much RAM for the qdevice. I currently host the qdevice for my lab cluster on a vm on my NAS. That VM has around 512 MB RAM and a rather small system disk. A PVE node needs at least 2 GB RAM: https://www.proxmox.com/en/products/proxmox-virtual-environment/requirements
Thank you for the heads-up. I will definitely take the resource requirements into consideration for my evaluation.
 
One obvious potential issue are the network requirements of corosync:


So your Windowsserver would need to have a dedicated network card which is only used by your PVE VM for the cluster communication.

In contrast the qdevice doesn't need it:
This is precisely the point I am a bit concerned about. It appears that the general consensus is that the barrier to entry (in terms of network resources) for adding a full PVE host as a cluster member is significantly higher than that for setting up a Q-Device. However, this is where I am getting a bit confused.

While a Q-Device setup involves installing corosync-qnetd and both approaches rely on Corosync for cluster logic, what are the actual differences in their requirements for network latency and stability?

https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support

According to the Proxmox Wiki on Cluster Manager, a full Cluster Member requires a dedicated 1G NIC and a very low-latency network to function reliably.

Based on the suggestions here, a Q-Device is clearly more lenient, but what would be considered the "minimum" network requirements or the maximum acceptable latency for a Q-Device to remain a reliable arbiter? If the requirements are significantly lower, does it still strictly necessitate a dedicated physical NIC on the Windows side, or is a shared stable connection typically sufficient?