Hi all,
I have a test setup of the latest non-subscription pve-4.2: a two-node cluster running pve-kernel-4.4.8-1.
root@proxmox01:~# pveversion -v
proxmox-ve: 4.2-49 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-4 (running version: 4.2-4/2660193c)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.4.8-1-pve: 4.4.8-49
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-74
pve-firmware: 1.1-8
libpve-common-perl: 4.0-60
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-15
pve-container: 1.0-63
pve-firewall: 2.0-26
pve-ha-manager: 1.0-31
ksm-control-daemon: 1.2-1
glusterfs-client: 3.7.2-1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
fence-agents-pve: 4.0.20-1
openvswitch-switch: 2.3.2-3
root@proxmox01:~# pvecm status
Quorum information
------------------
Date: Tue May 10 10:15:44 2016
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 388
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.0.185 (local)
0x00000002 1 192.168.0.186
Now, what I see is that when one of the nodes is shut down, the remaining one gets fenced. The only way to prevent this is to run "pvecm expected 1" on the surviving node before shutting the other one down, but that is not an option if a node actually crashes instead of being stopped or restarted cleanly. Is there any way to make this permanent, or to tell PVE that this is a two-node cluster so it is OK to keep going with a single vote? With pacemaker on top of corosync, for example, I set:
quorum {
  provider: corosync_votequorum
  two_node: 1
}
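I assume the PVE equivalent would be to edit /etc/pve/corosync.conf by hand with something along these lines (an untested guess on my side, option names taken from the corosync votequorum man page):

quorum {
  provider: corosync_votequorum
  expected_votes: 2
  two_node: 1   # keep the cluster quorate when only one vote remains
  # note: two_node implies wait_for_all: 1, so after a cold start both
  # nodes must be seen once before the cluster becomes quorate again
}

and then bumping config_version in the totem section so the change propagates to the other node. Is that safe to do under PVE, or does pve-cluster manage that file in a way that would overwrite manual changes?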
Thanks,
Igor