Quintin Cardinal

New Member
Mar 28, 2019
I noticed that there's a new feature under the pvecm command to add a qdevice. I've been using Proxmox at home and for clients in single-node, local-storage configurations for almost a year now, and have been dying to try a proper shared-storage, two-node HA setup. My dream setup would be ZFS over iSCSI to a FreeNAS box. I finally started building the whole thing virtually on my home PVE host, and was a little disappointed to find that support for qdisks had been removed and that the only supported method was to use three PVE nodes. To my pleasant surprise, I see this in the latest changelog for pve-cluster version 5.0-34:

* allow to setup and remove qdevice for cluster with the pvecm CLI tool

I'm not able to find much information about this, most likely because it's a brand-new feature. The man page isn't very descriptive beyond explaining how to properly use the "qdevice" subcommand. Does this mean we'll be able to have two-node, shared-storage clusters with a proper three quorum votes? Will there be a more in-depth qdevice configuration procedure in the documentation soon? Just excited for it!
Thank you,
Quintin
 

oguz

Proxmox Staff Member
Staff member
Nov 19, 2018
Does this mean we'll be able to have two-node, shared-storage clusters with a proper three quorum votes?
Yes. The only requirement is being able to run `corosync-qnetd` on the external box (the cluster nodes themselves run `corosync-qdevice`).
It's usually advantageous for 2+1 setups.

Will there be a more in-depth qdevice configuration procedure in the documentation soon?
Probably.
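For anyone landing here later, the rough flow with the current packages looks something like this. A sketch only: the IP address is a placeholder, and the package names assume Debian-based hosts.

```shell
# On the external vote host (e.g. a small Debian/Ubuntu box):
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# On one cluster node, register the external vote host
# (10.0.0.3 is a placeholder for your qnetd host's IP):
pvecm qdevice setup 10.0.0.3

# Check that the cluster now sees the extra vote:
pvecm status
```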
 


Quintin Cardinal

New Member
Mar 28, 2019
Yes. The only requirement is being able to run `corosync-qnetd` on the external box (the cluster nodes themselves run `corosync-qdevice`).
It's usually advantageous for 2+1 setups.
You made my day! Now that the weekend's here and I've got some free time, I'll be messing with it!
 

Quintin Cardinal

New Member
Mar 28, 2019
For anyone who may have been following this: I've been testing this virtually on my home Proxmox server, with FreeNAS and two PVE hosts running as VMs. I spent a lot of time trying to get FreeNAS working as my qdevice, but the corosync-qnetd package isn't available on FreeBSD as far as I can tell. After failing to make a jail work, I tried creating a Docker host and then a Docker container. It was awful; LXC wins the race in my opinion.

My final solution was to put an Ubuntu LXC container on my home PVE host on the same virtual network and install corosync-qnetd in it. On real hardware, I would use a Raspberry Pi for this as a cheap solution. My work uses Pis as Nagios satellite servers and jump boxes, so I'm pretty confident in Raspbian's stability.

I used a link here to get ZFS over iSCSI working smoothly with FreeNAS. Quorum appears happy! Can't wait to test this on hardware when enough old servers come into my work for "recycling"!
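In case it helps anyone replicating the LXC approach, the container setup was roughly this. All of it is a sketch: the CT ID, template name, bridge, and IP are examples from my lab, not anything canonical.

```shell
# On the PVE host: create a small Ubuntu container to run the qnetd daemon
# (CT ID 200, template name, and network settings are examples only)
pct create 200 local:vztmpl/ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz \
    --hostname qnetd --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200

# Inside the container: install the arbitrator daemon
apt update && apt install -y corosync-qnetd

# On one of the two clustered PVE nodes, point the cluster at the container
# (192.168.1.50 is a placeholder for the container's IP):
pvecm qdevice setup 192.168.1.50
```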
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
Glad you could find a solution which works for you! Yes, corosync on *BSD is not really maintained, AFAIK, so you'd need a bit of luck, or some time and no fear of messing with corosync's build system :)

On real hardware, I would use a Raspberry Pi for this as a cheap solution. My work uses Pis as Nagios satellite servers and jump boxes, so I'm pretty confident in Raspbian's stability.
Pis are rather slow (relatively speaking), but yes, they are really reliable (which comes partly with the relatively simple HW design, IMO); I still have first-generation Pis from ~2012 chugging along without any issue. The most problematic thing about Raspberry Pis is probably their power supplies, which are often quite cheap and not always reliable.

I'd rather have a full third node (and then distribute the storage over those three with Ceph or GlusterFS) than a 2 + 1 setup, as the load from a failed server is distributed better in the three-full-node case, and maintenance is a bit less stressful. But for certain setups, e.g. already existing ones, or if a third node simply isn't (currently) in the budget, this makes total sense.
 
