Start a VM without turning on all nodes

Dolait Lu

New Member
May 20, 2019
Hi
I recently set up a Proxmox cluster with 6 nodes for our office. I noticed that I have to turn on all 6 nodes in order to start a VM; otherwise Proxmox complains about no quorum and no VM will start.

My question is simple: could I start a VM on node 1 while the other nodes are turned off?
 
Hi
I recently set up a Proxmox cluster with 6 nodes for our office. I noticed that I have to turn on all 6 nodes in order to start a VM; otherwise Proxmox complains about no quorum and no VM will start.

You do not need all, only a majority (4 in your case).

My question is simple: could I start a VM on node 1 while the other nodes are turned off?

You can temporarily set expected votes to a lower value:

# pvecm expected <number_of_nodes_online>

But only do that if you are sure the other nodes are really offline.
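
For example, with only node 1 of your 6 powered on (VM ID 100 below is just a placeholder), the sequence could look like this:

# pvecm status
# pvecm expected 1
# qm start 100

pvecm status shows the expected/total votes and whether the cluster is currently quorate; once the expected votes are lowered and the node is quorate again, /etc/pve becomes writable and the VM can be started.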
 
Hi Dietmar,
Thank you!

If I set expected votes to 2, for example, what consequences will there be for the cluster if the other nodes are turned on later?

Best,
Dolait
 
For some of us who run Proxmox for dev envs and small offices, can we get a way to set this permanently, such that the quorum can always be 1 in a 2-server config? Even having a simple web interface option that we could set would be helpful. It would have saved me over an hour of headache today if I'd known about this... :(
 
For some of us who run Proxmox for dev envs and small offices, can we get a way to set this permanently, such that the quorum can always be 1 in a 2-server config?

You can give one server more than one vote. That way you always have quorum if that server is online.
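
For the record, a minimal sketch of what that looks like in /etc/pve/corosync.conf (the node names, IDs and addresses below are invented; edit a copy of the file, increase config_version in the totem section, verify the syntax, and only then move it into place, since a broken corosync.conf can take the whole cluster down):

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 2
    ring0_addr: 192.168.1.11
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.1.12
  }
}

With 2 + 1 = 3 votes in total, pve1 alone holds a majority (2 of 3) and stays quorate, while pve2 alone does not.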
 
@oguz is this what @dietmar is referring to, or something else, as you seem to indicate? The former option seems like a simpler or cleaner solution...
 
@oguz is this what @dietmar is referring to or something else as you seem to indicate?

No, those are different setups.

What @dietmar suggested is to simply give one of the nodes an extra vote. That way, with 2 nodes, your cluster stays quorate as long as the high-vote node is online. A drawback of this setup is that when you want to do maintenance on the high-vote node, you first have to move the votes around (give the other node the extra vote) before you can take it offline.

With a QDevice setup, you add a 3rd machine, but it isn't part of the cluster itself, only an observer. That way you have one extra vote, and you don't have to worry about shifting votes between nodes to keep quorum. Read the link I sent for more information.
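
Roughly, the QDevice setup looks like this (package names and the pvecm subcommand are from the Proxmox docs; the IP address is just an example):

On the external machine (not a cluster node):
# apt install corosync-qnetd

On all cluster nodes:
# apt install corosync-qdevice

Then, from one of the cluster nodes:
# pvecm qdevice setup 192.168.1.50

pvecm status should afterwards list the QDevice and its extra vote.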
 
Dear Members, Dear Staff,

I have to test the disaster recovery procedure on a 3-node cluster (pve1, pve2, pve3) with Ceph (RBD storage).
Everything works fine; in case of a single node failure the cluster behaves as expected.
I would like to test starting VMs on a single node without the rest of the cluster.

This is my workaround:
1. All nodes are online, VMs are running on pve2.
2. I unplug the network cables from pve2 and pve3, so only pve1 is available on the network.
3. pve1 restarts automatically.
4. I log in to pve1 via SSH and run
pvecm expected 1

... based on this forum entry:

"You can temporarily set expected votes to a lower value:
# pvecm expected <number_of_nodes_online>
But only do that if you are sure the other nodes are really offline."

5. I move the VMs' config files from /etc/pve/nodes/pve2/qemu-server to /etc/pve/nodes/pve1/qemu-server (see the command sketch after this list).

6. The VMs show up under pve1 in the web interface; their status is powered off.

7. I try to start a VM with the Start button, but the progress indicator just keeps spinning.
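
For reference, steps 4 and 5 as commands (the glob below assumes every guest config on pve2 should move; adjust to your actual VM IDs):

# pvecm expected 1
# mv /etc/pve/nodes/pve2/qemu-server/*.conf /etc/pve/nodes/pve1/qemu-server/

Note that /etc/pve only becomes writable again once the cluster is quorate, i.e. here only after pvecm expected 1, so the order matters.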

Here are the journalctl log details:

root@pve1:~# journalctl -f
-- Logs begin at Mon 2020-11-16 19:02:17 CET. --
Nov 16 19:28:34 pve1 pvestatd[1543]: status update time (5.309 seconds)
Nov 16 19:28:38 pve1 ceph-mon[1451]: 2020-11-16 19:28:38.333 7f6ff1591700 -1 mon.pve1@0(probing) e3 get_health_metrics reporting 2 slow ops, oldest is auth(proto 0 73 bytes epoch 0)
Nov 16 19:28:39 pve1 pvedaemon[1564]: <root@pam> successful auth for user 'root@pam'
Nov 16 19:28:43 pve1 pvestatd[1543]: got timeout
Nov 16 19:28:43 pve1 ceph-mon[1451]: 2020-11-16 19:28:43.337 7f6ff1591700 -1 mon.pve1@0(probing) e3 get_health_metrics reporting 2 slow ops, oldest is auth(proto 0 73 bytes epoch 0)
Nov 16 19:28:43 pve1 pvestatd[1543]: status update time (5.332 seconds)
Nov 16 19:28:48 pve1 ceph-mon[1451]: 2020-11-16 19:28:48.337 7f6ff1591700 -1 mon.pve1@0(probing) e3 get_health_metrics reporting 2 slow ops, oldest is auth(proto 0 73 bytes epoch 0)
Nov 16 19:28:53 pve1 ceph-mon[1451]: 2020-11-16 19:28:53.337 7f6ff1591700 -1 mon.pve1@0(probing) e3 get_health_metrics reporting 2 slow ops, oldest is auth(proto 0 73 bytes epoch 0)
Nov 16 19:28:53 pve1 pvestatd[1543]: got timeout
Nov 16 19:28:53 pve1 pvestatd[1543]: status update time (5.316 seconds)
Nov 16 19:28:58 pve1 ceph-mon[1451]: 2020-11-16 19:28:58.337 7f6ff1591700 -1 mon.pve1@0(probing) e3 get_health_metrics reporting 2 slow ops, oldest is auth(proto 0 73 bytes epoch 0)
Nov 16 19:29:00 pve1 systemd[1]: Starting Proxmox VE replication runner...
Nov 16 19:29:00 pve1 systemd[1]: pvesr.service: Succeeded.
Nov 16 19:29:00 pve1 systemd[1]: Started Proxmox VE replication runner.
Nov 16 19:29:03 pve1 ceph-mon[1451]: 2020-11-16 19:29:03.337 7f6ff1591700 -1 mon.pve1@0(probing) e3 get_health_metrics reporting 2 slow ops, oldest is auth(proto 0 73 bytes epoch 0)
Nov 16 19:29:04 pve1 pvestatd[1543]: got timeout
Nov 16 19:29:04 pve1 pvestatd[1543]: status update time (5.321 seconds)
Nov 16 19:29:08 pve1 ceph-mon[1451]: 2020-11-16 19:29:08.337 7f6ff1591700 -1 mon.pve1@0(probing) e3 get_health_metrics reporting 2 slow ops, oldest is auth(proto 0 73 bytes epoch 0)
Nov 16 19:29:13 pve1 ceph-mon[1451]: 2020-11-16 19:29:13.337 7f6ff1591700 -1 mon.pve1@0(probing) e3 get_health_metrics reporting 2 slow ops, oldest is auth(proto 0 73 bytes epoch 0)
Nov 16 19:29:13 pve1 pvestatd[1543]: got timeout


What is the right method to start VMs on the pve1 node?

PVE version details:
===================================================
proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
pve-manager: 6.2-15 (running version: 6.2-15/48bd51b6)
pve-kernel-5.4: 6.2-7
pve-kernel-helper: 6.2-7
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 14.2.11-pve1
ceph-fuse: 14.2.11-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-4
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-10
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.1-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.3-10
pve-cluster: 6.2-1
pve-container: 3.2-2
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-6
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-19
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve2
===================================================


thank you,
 
You do not need all, only a majority (4 in your case).

You can temporarily set expected votes to a lower value:

# pvecm expected <number_of_nodes_online>

But only do that if you are sure the other nodes are really offline.
I did pvecm expected 1 while my other nodes were powered off... Is there a table showing how many nodes need to be online in each case?

thank you
 
Hi,
I did pvecm expected 1 while my other nodes were powered off... Is there a table showing how many nodes need to be online in each case?

thank you
To be quorate, a cluster needs more than half of the votes, i.e. (#nodes + 1) / 2, rounded up. By default each node has one vote, and usually this shouldn't be changed. If you have an even number of nodes, using a QDevice for vote support makes sense.
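
Applied to the default of one vote per node, that works out to, for example:

2 nodes -> 2 votes needed (losing either node breaks quorum)
3 nodes -> 2 votes needed
4 nodes -> 3 votes needed
5 nodes -> 3 votes needed
6 nodes -> 4 votes needed
7 nodes -> 4 votes needed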

Please be careful with things like pvecm expected 1; it is intended as a last-resort measure.
 
You do not need all, only a majority (4 in your case).

You can temporarily set expected votes to a lower value:

# pvecm expected <number_of_nodes_online>

But only do that if you are sure the other nodes are really offline.
thanks!!
 
