Unable to set expected votes to 1 in a 2-host cluster

rty

In my cluster with two hosts, I want to set the number of expected votes to 1. Reason: stay operational if one host is offline (happens regularly).

I believe I used to achieve this with pvecm expected 1, but I no longer can. It works once the second host is offline, but not in advance:
Code:
root@pve-1:~# pvecm expected 1
Unable to set expected votes: CS_ERR_INVALID_PARAM
What am I missing?
Status with both hosts online:
Code:
root@pve-1:~# pvecm status
Cluster information
-------------------
Name:             MyCluster
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Sun Feb 25 19:24:08 2024
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1.448
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2 
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 172.27.2.1 (local)
0x00000002          1 172.27.2.2
For comparison, the status after the second host has gone offline:
Code:
root@pve-1:/mnt/backup-pve1/backup-pve/dump# pvecm status
Cluster information
-------------------
Name:             MyCluster
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Sun Feb 25 20:44:15 2024
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.44c
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:            

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 172.27.2.1 (local)
Code:
root@pve-1:~# cat /etc/pve/corosync.conf 
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve-1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 172.27.2.1
  }
  node {
    name: pve-2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 172.27.2.2
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: MyCluster
  config_version: 2
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
root@pve-1:~# pveversion
pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-8-pve)
Code:
root@pve-1:~# pveversion --verbose
proxmox-ve: 8.1.0 (running kernel: 6.5.11-8-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.5: 6.5.11-8
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.3-1
proxmox-backup-file-restore: 3.1.3-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-3
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.2-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1
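For reference, plain corosync also has a two_node option in the quorum section that targets exactly this two-node layout. I have not tried it with PVE, so this is just an untested aside; the snippet below is roughly what it would look like, with config_version bumped as usual when editing /etc/pve/corosync.conf:
Code:
quorum {
  provider: corosync_votequorum
  # two_node: 1 lets a two-node cluster stay quorate with a single vote;
  # it implicitly enables wait_for_all (both nodes must be seen once after startup).
  # Untested with PVE here; shown only for reference.
  two_node: 1
}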
 
Oh yes, it works perfectly if the second host is offline, which is the usual state.

However, I think it would be good to be able to reduce the expected votes to 1 in advance. When the second server goes down, the number of expected votes stays at 2:
Bash:
Feb 26 22:57:19 pve-1 corosync[1088]:   [CFG   ] Node 2 was shut down by sysadmin
Feb 26 22:57:19 pve-1 pmxcfs[986]: [dcdb] notice: members: 1/986
Feb 26 22:57:19 pve-1 pmxcfs[986]: [status] notice: members: 1/986
Feb 26 22:57:19 pve-1 corosync[1088]:   [QUORUM] Sync members[1]: 1
Feb 26 22:57:19 pve-1 corosync[1088]:   [QUORUM] Sync left[1]: 2
Feb 26 22:57:19 pve-1 corosync[1088]:   [TOTEM ] A new membership (1.474) was formed. Members left: 2
Feb 26 22:57:19 pve-1 corosync[1088]:   [QUORUM] This node is within the non-primary component and will NOT provide any services.
Feb 26 22:57:19 pve-1 corosync[1088]:   [QUORUM] Members[1]: 1
Feb 26 22:57:19 pve-1 corosync[1088]:   [MAIN  ] Completed service synchronization, ready to provide service.
Feb 26 22:57:19 pve-1 pmxcfs[986]: [status] notice: node lost quorum
Feb 26 22:57:20 pve-1 corosync[1088]:   [KNET  ] link: host: 2 link: 0 is down
Feb 26 22:57:20 pve-1 corosync[1088]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Feb 26 22:57:20 pve-1 corosync[1088]:   [KNET  ] host: host: 2 has no active links
Bash:
root@pve-1:/opt/backup-proxmox# pvecm status
Cluster information
-------------------
Name:             MyCluster
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Mon Feb 26 22:57:40 2024
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.474
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:          

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 172.27.2.1 (local)
After that, pvecm expected 1 works as expected.

The second node acts as a backup machine and drop-in replacement: it wakes up occasionally, makes a backup, and goes back to sleep.
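As far as I can tell, the earlier CS_ERR_INVALID_PARAM simply means that votequorum refuses an expected-votes value lower than the total votes currently present in the membership: 2 while both nodes are up, 1 once pve-2 has left. The same state can be inspected and changed with corosync's own tool, which I assume pvecm expected wraps:
Code:
# show the live votequorum state (same numbers as pvecm status)
corosync-quorumtool -s
# set expected votes directly; like pvecm expected 1, this is only accepted
# once no more than 1 vote is present in the membership (i.e. pve-2 has left)
corosync-quorumtool -e 1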
 
To wrap it up, in case somebody needs to solve the same issue: my workaround is to regularly check whether the second server (the occasionally running disaster fallback) is down and, if so, lower the expected votes automatically, without any interaction or notification.

Add cron job using `crontab -e`:

Code:
# update quorum if host pve-2 is down
MAILTO=""
*  *  *  *  *  if ! ping -c1 -w1 pve-2 > /dev/null; then pvecm expected 1; fi
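A variation of the same idea (untested here, just a sketch): instead of pinging pve-2, check whether this node has actually lost quorum before touching the expected votes, so the job does nothing as long as the cluster is healthy:
Code:
# only lower expected votes when this node is no longer quorate
*  *  *  *  *  if ! pvecm status | grep -q '^Quorate:.*Yes'; then pvecm expected 1; fi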
 
That's rather dangerous; the right solution for such a setup is a QDevice as tie-breaker.
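Roughly, from memory (the IP below is just an example): you need a third, always-on machine running corosync-qnetd, the corosync-qdevice package on both PVE nodes, and then a single pvecm call:
Code:
# on the third machine (can be something small, e.g. a VM or a Pi)
apt install corosync-qnetd

# on both PVE nodes
apt install corosync-qdevice

# on one PVE node, pointing at the qnetd host (example IP);
# the setup step connects to that host via SSH as root
pvecm qdevice setup 172.27.2.3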
 
In my case, all I need from the cluster is a single dashboard to manage both hosts and things like live migration. I explicitly do NOT want to depend on quorum, as I do not use any HA services. So crontab it is, thanks @rty!

Code:
# force expected votes to 1 every minute; while both hosts are up the command
# is simply rejected (CS_ERR_INVALID_PARAM), and MAILTO="" keeps cron quiet about it
MAILTO=""
*  *  *  *  * pvecm expected 1
 
In my case, all I need from the cluster is a single dashboard to manage both hosts and things like live migration. I explicitly do NOT want to depend on quorum, as I do not use any HA services.
Then use tools like this one instead:
https://cluster-manager.fr/

If you run a cluster, you have a real cluster with various dependencies, and you are always required to have quorum.
 
Just one thought: why keep two PVE nodes if you don't want any HA services?
Wipe one of them and convert it into a Proxmox Backup Server. It does a much better job than the built-in backup functionality of PVE, and it is dedicated to exactly your kind of use.
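If you go that route, PBS can be installed from its own ISO or on top of an existing Debian install; roughly (repository line from memory, please double-check it against the PBS docs):
Code:
# add the PBS no-subscription repository (verify the exact line in the PBS docs)
echo "deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription" \
    > /etc/apt/sources.list.d/pbs.list
apt update
apt install proxmox-backup-server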
 
