Qdisk

adamb

Having some issues setting up a qdisk.

The iSCSI part of things seems to be OK. I have a /dev/sdd on both nodes.

Disk /dev/sdd: 12 MB, 12582912 bytes
1 heads, 24 sectors/track, 1024 cylinders
Units = cylinders of 24 * 512 = 12288 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xefc19d3b


   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               2        1024       12276   83  Linux

No matter what I do the quorum disk is reported as offline.

root@medprox1:/var/log/cluster# clustat
Cluster Status for medprox @ Wed Feb 6 12:15:50 2013
Member Status: Quorate


 Member Name                   ID   Status
 ------ ----                   ---- ------
 medprox1                         1 Online, Local
 medprox2                         2 Online
 /dev/block/8:49                  0 Offline, Quorum Disk

Can't seem to pin down what the issue is. Any ideas or tips are greatly appreciated.

Here is my qdiskd.log

root@medprox1:/var/log/cluster# cat qdiskd.log
Feb 06 11:34:24 qdiskd Quorum Partition: /dev/block/8:49 Label: medprox_qdisk
Feb 06 11:34:24 qdiskd Quorum Daemon Initializing
Feb 06 11:34:51 qdiskd Initial score 0/3
Feb 06 11:34:51 qdiskd Initialization complete
Feb 06 11:35:26 qdiskd Unregistering quorum device.
Feb 06 11:35:31 qdiskd Quorum Partition: /dev/block/8:49 Label: medprox_qdisk
Feb 06 11:35:31 qdiskd Quorum Daemon Initializing
Feb 06 11:35:59 qdiskd Initial score 0/3
Feb 06 11:35:59 qdiskd Initialization complete
Feb 06 11:54:51 qdiskd Unregistering quorum device.
Feb 06 11:57:51 qdiskd Quorum Partition: /dev/block/8:49 Label: medprox_qdisk
Feb 06 11:57:51 qdiskd Quorum Daemon Initializing
Feb 06 11:58:18 qdiskd Initial score 0/3
Feb 06 11:58:18 qdiskd Initialization complete
Feb 06 12:04:30 qdiskd Node 2 shutdown
 
Just tried to create it again and noticed this.

Feb 06 12:19:58 qdiskd 3 matches found for label 'medprox_qdisk'; please use 'device=' instead!

How do I remove the matches?
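Per the log message's own suggestion, one option (a sketch, not something I've tested here) is to point qdiskd at an explicit device path instead of a label, so stray duplicate labels can't confuse it. Using one of the stable paths that `mkqdisk -L` prints, the quorumd element in cluster.conf would look something like:

```
<quorumd allow_kill="0" interval="3" device="/dev/disk/by-id/scsi-1IET_00010001-part1" tko="10">
  <!-- heuristics unchanged -->
</quorumd>
```

A /dev/disk/by-id/ or /dev/disk/by-path/ path is preferable to /dev/sdd1 since sdX names can change between boots.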
 
Looks like a reboot removed my duplicate entries, but I'm still unable to get the quorum disk to come online.

root@medprox2:~# clustat
Cluster Status for medprox @ Wed Feb 6 14:16:48 2013
Member Status: Quorate


 Member Name                   ID   Status
 ------ ----                   ---- ------
 medprox1                         1 Online
 medprox2                         2 Online, Local
 /dev/block/8:49                  0 Offline, Quorum Disk


root@medprox2:~# pvecm nodes
Node  Sts   Inc   Joined               Name
   0   X      0                        /dev/block/8:49
   1   M    116   2013-02-06 14:16:31  medprox1
   2   M    112   2013-02-06 14:15:58  medprox2

Disk /dev/sdd: 12 MB, 12582912 bytes
1 heads, 24 sectors/track, 1024 cylinders
Units = cylinders of 24 * 512 = 12288 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xefc19d3b


   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               2        1024       12276   83  Linux

Here is my cluster.conf

root@medprox2:~# cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="medprox">
  <cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <quorumd allow_kill="0" interval="3" label="medprox_qdisk" tko="10">
    <heuristic interval="3" program="ping $GATEWAY -c1 -w1" score="1" tko="4"/>
    <heuristic interval="3" program="ip addr | grep eth1 | grep -q UP" score="2" tko="3"/>
  </quorumd>
  <totem token="54000"/>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="10.80.12.149" lanplus="1" login="USERID" name="ipmi1" passwd="PASSW0RD" power_wait="5"/>
    <fencedevice agent="fence_ipmilan" ipaddr="10.80.12.150" lanplus="1" login="USERID" name="ipmi2" passwd="PASSW0RD" power_wait="5"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="medprox1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="ipmi1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="medprox2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="ipmi2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
  </rm>
</cluster>
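A note on the vote arithmetic in this config (my understanding; worth double-checking against the cman docs): each clusternode carries votes="1" and the quorum disk contributes one vote of its own, which is where expected_votes="3" comes from, so one node plus the qdisk can stay quorate if the other node dies. If you want the qdisk's vote count explicit rather than implied, the quorumd element can carry it, e.g.:

```
<quorumd allow_kill="0" interval="3" label="medprox_qdisk" tko="10" votes="1">
  <!-- heuristics as above -->
</quorumd>
```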

Here is my qdisk

root@medprox1:~# mkqdisk -L
mkqdisk v1352871249


/dev/block/8:49:
/dev/disk/by-id/scsi-1IET_00010001-part1:
/dev/disk/by-path/ip-10.80.12.86:3260-iscsi-iqn.2013-02.com.ccs.sysadminnas:nas-lun-1-part1:
/dev/sdd1:
Magic: eb7a62c2
Label: medprox_qdisk
Created: Wed Feb 6 14:02:11 2013
Host: medprox1
Kernel Sector Size: 512
Recorded Sector Size: 512
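For anyone finding this thread later: the qdisk above was initialized with mkqdisk's -c/-l options. The usual form (run once, from a single node, against the shared iSCSI partition; this destroys any existing data on it) is something like:

```
# label the shared partition as a quorum disk,
# using the same label referenced in cluster.conf
mkqdisk -c /dev/sdd1 -l medprox_qdisk

# confirm the label is visible (run on each node)
mkqdisk -L
```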
I changed this line in cluster.conf

<heuristic interval="3" program="ip addr | grep eth1 | grep -q UP" score="2" tko="3"/>

to this

<heuristic interval="3" program="ip addr | grep eth2 | grep -q UP" score="2" tko="3"/>

Then I restarted rgmanager and cman on both nodes, and the quorum disk came online. Eth2 is the network over which the nodes connect to the quorum disk. Luckily this cluster is not in production yet. However, I have two other clusters which I need to add quorum disks to, and this leaves me a bit nervous. Can anyone confirm that the "eth1" was my issue? I appreciate the input.
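For what it's worth, this would explain the scoring: each heuristic's program must exit 0 for its score to count toward the total of 3, so a grep against a non-existent "eth1" permanently loses 2 of the 3 points. The heuristic commands can be tested by hand outside of qdiskd; substitute your own gateway and interface (the address below is just a placeholder):

```
# heuristic 1: is the gateway reachable? exit status 0 = pass
ping 10.80.12.1 -c1 -w1; echo "ping exit: $?"

# heuristic 2: is the qdisk-facing NIC up? exit status 0 = pass
ip addr | grep eth2 | grep -q UP; echo "nic exit: $?"
```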
