Quorum node vs QDevice for a 2-node cluster

Rxunique

New Member
Feb 5, 2024
I have been researching this but didn't find an exact post asking this question.

I have 2 full nodes working in a cluster and want to add HA via replication, but I can't really tell the pros/cons of a quorum node vs a QDevice.

Quorum node vs QDevice

I read somewhere, can't find it now, that the QDevice kind of becomes a single point of failure, but I also read in the cluster manager FAQ that losing the QDevice just puts you back to a plain 2-node cluster.
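From what I understand, with a healthy QDevice the votequorum section of pvecm status on a 2-node cluster shows something roughly like this (values illustrative, not from my actual cluster):

Code:
$ pvecm status
...
Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate Qdevice

If the QDevice drops out, total votes fall to 2 against a quorum of 2, which is exactly the situation of a plain 2-node cluster where both nodes must stay up.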

In my setup, I can either virtualise a bare-minimum Proxmox VM on my Synology NAS that purely provides quorum and runs nothing else, or I can create a QDevice Docker container on the NAS. Which is better?


Going one step further.

If I go with GlusterFS shared storage across the 2 nodes and the NAS, and have the NAS run a minimal Docker Swarm manager node, is this a better setup than Proxmox HA? All my services are Docker anyway.
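If it matters, the GlusterFS layout I have in mind is a 3-way replica across the two PVE nodes and the NAS, roughly like this (hostnames and brick paths made up, untested):

Code:
# Run on one host once all three can reach each other:
gluster peer probe pve2
gluster peer probe nas
gluster volume create gv0 replica 3 pve1:/bricks/gv0 pve2:/bricks/gv0 nas:/bricks/gv0
gluster volume start gv0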


Side note: I ruled out Ceph due to its high requirements for a dedicated network and what I think might be excessive writes to consumer SSDs.
 
In my setup, I can either virtualise a bare-minimum Proxmox VM on my Synology NAS that purely provides quorum and runs nothing else, or I can create a QDevice Docker container on the NAS. Which is better?
The container; it is lightweight. In the end it does not matter which technology is used.
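If you go the container route, the rough procedure looks like this (the image name is a placeholder; any container that bundles corosync-qnetd and accepts root SSH should do, since the setup helper pushes certificates over SSH):

Code:
# On both PVE nodes:
apt install corosync-qdevice

# On the NAS, run a qnetd container (hypothetical image name):
docker run -d --name qnetd -p 5403:5403 some/corosync-qnetd-image

# From one PVE node, register the QDevice:
pvecm qdevice setup <NAS-IP>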

If I go with GlusterFS shared storage across the 2 nodes and the NAS, and have the NAS run a minimal Docker Swarm manager node, is this a better setup than Proxmox HA? All my services are Docker anyway.
What if your NAS goes down?

Side note: I ruled out Ceph due to its high requirements for a dedicated network and what I think might be excessive writes to consumer SSDs.
Then you should also rule out ZFS due to the same high requirements on storage. Just search the forums for consumer SSD problems ... they are full of them.
 
Thanks for the quick reply and suggestion.

What if your NAS goes down?
To be frank, I've only just started looking at GlusterFS. With Docker Swarm, I learned that the 3 manager nodes must be on separate hosts to be truly HA.

So I was hoping to either have the manager nodes on the PVE hosts double as worker nodes, or create separate worker nodes (3+2) on the corresponding PVE nodes.
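The reasoning, as I understand it: Swarm's Raft store needs a majority of managers (2 of 3), so 3 managers on separate hosts survive the loss of any one host. The bootstrap would be something like (IP made up):

Code:
# On the first manager:
docker swarm init --advertise-addr 10.0.0.1

# Print the manager join token:
docker swarm join-token manager

# Then on each of the other two hosts:
docker swarm join --token <TOKEN> 10.0.0.1:2377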

I was under the assumption that GlusterFS can have a similar setup, but I might be wrong; I'm not sure whether it has quorum at the volume level or the node level. Still researching.
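From the docs I've skimmed so far, it looks like GlusterFS actually has both levels, tunable per volume, along these lines (untested):

Code:
# Client-side quorum at the replica-set (volume) level:
# writes are only allowed while a majority of bricks is reachable
gluster volume set gv0 cluster.quorum-type auto

# Server-side quorum at the node (trusted pool) level:
# bricks shut down if the pool loses its majority
gluster volume set gv0 cluster.server-quorum-type server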


Just search the forums for consumer SSD problems ... they are full of them.
I agree with your point, but the reality is that enterprise SATA/SAS SSDs are easy to get second-hand at a good price; NVMe ones are not. That's the first thing I had to get my head around. I do have quite a few of them at 10 DWPD; there's no way a home lab will ever saturate that.


The reality is I have a few spare 1TB Samsung 980 NVMe drives sitting around; they are only rated for 600 TBW on paper, or about 0.3 DWPD. Ceph would likely chew through them very quickly. And I think all 3 Ceph nodes must have the same storage. And I don't have 10GbE.
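Spelling out my endurance math, assuming the usual 5-year warranty window on a 1TB drive:

Code:
$ echo "scale=2; 600/(365*5)" | bc
.32

So roughly a third of a full drive write per day before the rated TBW is exhausted.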

Too many boxes to tick with Ceph. I'm hoping GlusterFS is less demanding.
 
I can't get my head around your suggestions. With your available HW & NW, and your aim to have "HA via Replication", don't use shared storage at all. Just keep it local on both nodes, no ZFS, no Ceph, and you're good to go.
 
I can't get my head around your suggestions. With your available HW & NW, and your aim to have "HA via Replication", don't use shared storage at all. Just keep it local on both nodes, no ZFS, no Ceph, and you're good to go.
Unless I'm mistaken, HA via Replication requires local ZFS storage.

https://pve.proxmox.com/wiki/Storage_Replication
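Setting a replication job up is simple enough once the local storage is ZFS; roughly like this (VM ID, target node and schedule made up):

Code:
# Replicate VM 100 to node pve2 every 15 minutes:
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Check how the jobs are doing:
pvesr status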

I would recommend, if this is something that the OP really wants, that they keep an eye on the secondary market for a few enterprise-level SSDs. That's what I did.
 
With your available HW & NW, and your aim to have "HA via Replication"
Based on my current understanding, Ceph mandates 10GbE and the same storage on all 3 nodes, and preferably similar compute power across them: basically 3 full-size nodes with enterprise SSDs.

My aim is to have a primary node + a standby node for services, some minimal investment to make key services HA, and a dedicated 3-2-1 backup solution, which the Synology covers.

So it's either replication at an interval on a big + medium + Q/tiny node setup, or Ceph on big + big + big, leaving "HA via Replication" as my only choice based on HW.

Please don't use GlusterFS; it's a dead project. Red Hat has abandoned it and it will be EOL at the end of 2024.

Thanks for the info, it's really important for me. I was thinking Swarm + GlusterFS; one is dead, one is dying...


Sounds like it's either K3s + Longhorn, PVE replication HA with a time gap, or the very HW-demanding Ceph.

It's a simple choice now. I was putting off Kubernetes as overkill, but it actually makes sense now with my asymmetrical HW nodes.
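The rough K3s plan, if I go that way (untested sketch; the Longhorn version is a placeholder to pin):

Code:
# On the server node:
curl -sfL https://get.k3s.io | sh -

# On each agent node (token comes from /var/lib/rancher/k3s/server/node-token on the server):
curl -sfL https://get.k3s.io | K3S_URL=https://<SERVER-IP>:6443 K3S_TOKEN=<TOKEN> sh -

# Longhorn for replicated volumes:
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/<VERSION>/deploy/longhorn.yaml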
 
