Proxmox cluster with 2 nodes: how to remove one node and use only one

chalan

Hi, I just bought a new server and want to migrate all VMs from the old one to the new one, so I created a cluster with 2 nodes (no HA) and migrated the VMs with the GUI. Now I need to remove the old server and want to use only the new one. Is that possible? I don't need the cluster any more (or maybe later in the future), but I am afraid of running into problems, as I have read here: https://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster#Two_nodes_cluster_and_quorum_issues

Can somebody please tell me if I just have to remove one node as described here https://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster#Remove_a_cluster_node and don't need to worry, or do I have to do something else? Thank you...
 
Nobody? Here is what I have:

root@pve:~# pvecm nodes
Node Sts Inc Joined Name
1 M 48 2015-07-26 11:28:00 pve
2 M 4 2015-07-17 22:47:10 proxmox

root@pve:~# pvecm status
Version: 6.2.0
Config Version: 2
Cluster Name: elson-cluster-1
Cluster Id: 19291
Cluster Member: Yes
Cluster Generation: 48
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: proxmox
Node ID: 2
Multicast addresses: 239.192.75.166
Node addresses: 192.168.212.253

and I need to permanently remove the pve node. When I power off the pve node, the cluster becomes unusable, see:

Version: 6.2.0
Config Version: 2
Cluster Name: elson-cluster-1
Cluster Id: 19291
Cluster Member: Yes
Cluster Generation: 52
Membership state: Cluster-Member
Nodes: 1
Expected votes: 2
Total votes: 1
Node votes: 1
Quorum: 2 Activity blocked
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: proxmox
Node ID: 2
Multicast addresses: 239.192.75.166
Node addresses: 192.168.212.253

After I power on the pve node again, everything is OK... If I run

pvecm delnode pve

will the cluster automatically expect only 1 vote and become functional? Or must I somehow destroy the cluster to make the proxmox node work separately?

Can somebody please help? I was stupid to make that damn cluster, I should have migrated the VMs with scp :(
 
Yeah, for sure, but you set the expected votes only so you have quorum (needed in a 2-node cluster) and can remove the other (now offline) node. After that the expected votes should be set correctly, as there is one node less in the cluster.
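That is, something like this on the remaining node while the other one is offline (a minimal sketch; the node removal itself comes afterwards):
Code:
pvecm expected 1   # tell the single remaining node that one vote is enough for quorum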
 
OK, so the step by step should be as follows?
1.) shut down the pve node, remove it from the network
SSH to the remaining node and perform:
2.) pvecm delnode pve
3.) pvecm expected 1

That's all?
 
You have to switch steps 2 and 3. To remove a node you need quorum, and as the other node is offline your node has only one vote, so we tell it to expect only one and then we can execute the operation.
After that, be sure that the deleted node DOES NOT come online again as it is. Be sure to disconnect it from the network, or wipe its root partition, or something like that.
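If you want to double-check the quorum state on the remaining node before and after, something like this works (just a convenience, the full pvecm status output shows the same line):
Code:
pvecm status | grep -i quorum   # "Activity blocked" means the node is not quorate yet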
 

Both nodes are online now... I just made a test to see what pvecm status will show after a reboot or shutdown of the pve node... So after shutting down the pve node:

Version: 6.2.0
Config Version: 2
Cluster Name: elson-cluster-1
Cluster Id: 19291
Cluster Member: Yes
Cluster Generation: 52
Membership state: Cluster-Member
Nodes: 1
Expected votes: 2
Total votes: 1
Node votes: 1
Quorum: 2 Activity blocked
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: proxmox
Node ID: 2
Multicast addresses: 239.192.75.166
Node addresses: 192.168.212.253

and after powering on the pve node again, it looks like this:

root@pve:~# pvecm status
Version: 6.2.0
Config Version: 2
Cluster Name: elson-cluster-1
Cluster Id: 19291
Cluster Member: Yes
Cluster Generation: 48
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: proxmox
Node ID: 2
Multicast addresses: 239.192.75.166
Node addresses: 192.168.212.253

So do I need to switch steps 2 and 3?
 
Yes, because if you power off one node, this
Code:
Expected votes: 2
Total votes: 1
Node votes: 1
Quorum: 2 Activity blocked
tells you that it expects two votes but got only one (the vote from itself), and so it blocks changes, so that nothing gets messed up.
As you know that one vote is OK, because you delete the other node anyway, you have to execute
Code:
pvecm expected 1
so that it can change things in the cluster configuration. After that you may execute
Code:
pvecm delnode nodename
As stated above, be sure that the deleted node doesn't come online again, as it could mess things up irreversibly.

Short answer: yes, you have to switch steps 2 and 3 :D
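As an optional check afterwards, the node list on the remaining node should then contain only that node:
Code:
pvecm nodes   # after the delnode, only the remaining node should be listed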
 
OK, so I did:

pvecm expected 1

After this, pvecm status shows:

Version: 6.2.0
Config Version: 2
Cluster Name: elson-cluster-1
Cluster Id: 19291
Cluster Member: Yes
Cluster Generation: 84
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: proxmox
Node ID: 2
Multicast addresses: 239.192.75.166
Node addresses: 192.168.212.253

So I logged into the pve node and halted that machine; after this, pvecm status on the remaining node shows again:

Version: 6.2.0
Config Version: 2
Cluster Name: elson-cluster-1
Cluster Id: 19291
Cluster Member: Yes
Cluster Generation: 88
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 2 Activity blocked
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: proxmox
Node ID: 2
Multicast addresses: 239.192.75.166
Node addresses: 192.168.212.253

Now I'm afraid to run pvecm delnode pve, or is it OK and I should perform it?
 
I think I was a bit confusing; the steps are:
1.) shut down the pve node you want to remove, remove it from the network
SSH to the remaining node and perform:
2.) pvecm expected 1
3.) pvecm delnode pve
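Put together as plain commands (using the node names from this thread; adjust to yours), the whole sequence would look roughly like this:
Code:
# step 1, on the old node (pve):
halt

# steps 2 and 3, on the remaining node (proxmox):
pvecm expected 1
pvecm delnode pve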

Which machine did you log into and halt after the "pvecm expected 1" command?
 
First I logged into the proxmox node (the new one) and performed pvecm expected 1; after that I logged into the pve node (the old one, to be removed) and performed halt... Both pvecm status outputs are from the proxmox node... So what now? :)
 
Stop the old one first, then after the old node is powered down, perform the pvecm expected 1 command on the new one. And then you can delete the old node with pvecm delnode oldnode.
You had to swap steps 2 and 3, not steps 1 and 3 from your post :D

When you do it that way it works for sure; please read my posts above carefully and follow them. I tested it and didn't have any trouble, so you should be good too.
 
OK, the status now is that the pve node (the old one) is halted. I didn't perform pvecm delnode pve yet... So I logged into the proxmox node and performed pvecm expected 1 again, and pvecm status is now:

Version: 6.2.0
Config Version: 2
Cluster Name: elson-cluster-1
Cluster Id: 19291
Cluster Member: Yes
Cluster Generation: 88
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: proxmox
Node ID: 2
Multicast addresses: 239.192.75.166
Node addresses: 192.168.212.253

So now I will go to the server housing and physically disconnect the pve node (the old one) from power and network, and after that I will perform pvecm delnode pve on the proxmox node (the new one). Is that correct? :) I hope so :)
 
Haha, yeah, that's correct.

But you can also wipe the old one, or disconnect it from the network, later. It just mustn't go online again in the same state, because then it thinks it is in the cluster while the other one thinks it isn't, and everything can get messed up.

But your steps are fine.
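If you ever want to reuse the old machine without reinstalling, the Proxmox wiki describes a separation procedure; on current corosync-based releases it looks roughly like the sketch below. The older cman-based stack shown in this thread differs, so check the docs for your version before running anything:
Code:
# on the removed node, to make it forget the cluster (recent PVE versions, not the old cman stack):
systemctl stop pve-cluster corosync
pmxcfs -l                      # start the cluster filesystem in local mode
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster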
 
So, I took the old server (pve) away from the server housing, and on the proxmox node I ran pvecm delnode pve, and everything seems OK, see:

root@proxmox:~# pvecm nodes
Node Sts Inc Joined Name
2 M 4 2015-07-17 22:47:10 proxmox

root@proxmox:~# pvecm status
Version: 6.2.0
Config Version: 3
Cluster Name: elson-cluster-1
Cluster Id: 19291
Cluster Member: Yes
Cluster Generation: 88
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: proxmox
Node ID: 2
Multicast addresses: 239.192.75.166
Node addresses: 192.168.212.253

Thank you very much. My last question in this thread: is this permanent, or just until reboot? If so, how do I make it permanent? One "nasty" solution would be to add pvecm expected 1 to /etc/rc.local.
 
No problem, and it looks good to me.

It's permanent now; you have a one-node cluster. You shouldn't need the "pvecm expected 1" command anymore.
So no nasty solutions are needed, and it's safe for you to reboot.
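A quick way to reassure yourself after the next reboot of the remaining node (purely optional):
Code:
pvecm status   # should show "Expected votes: 1" and quorum without any manual pvecm expected call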
 
