Removing node, Permission denied

Proximate
I set up a temp cluster so I could migrate VMs from the old host to the new host.
When done, I wanted to remove the node (called p) from the cluster, but I get this:


Code:
# pvecm delnode p
Killing node 1
unable to open file '/etc/pve/corosync.conf.new.tmp.78679' - Permission denied
Code:
# pvecm status
Cluster information
-------------------
Name:             tempClust
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Mon Nov  6 17:45:02 2023
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000002
Ring ID:          1.16
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.192.215
0x00000002          1 192.168.192.243 (local)

# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         1          1 p
         2          1 pro01 (local)

# pvecm delnode p
Killing node 1
unable to open file '/etc/pve/corosync.conf.new.tmp.78679' - Permission denied

# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         2          1 pro01 (local)
root@pro01:~# pvecm status
Cluster information
-------------------
Name:             tempClust
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Mon Nov  6 17:46:17 2023
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000002
Ring ID:          2.1a
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 192.168.192.243 (local)

On the GUI of the new host, I can still see both nodes, but the 'p' node shows a red X.
Figured it should simply be gone now, no?

In another post, it was suggested to try the following:


Code:
# ls -la /etc/pve/nodes/
total 0
dr-xr-xr-x 2 root www-data 0 Oct 13 11:54 .
drwxr-xr-x 2 root www-data 0 Dec 31  1969 ..
dr-xr-xr-x 2 root www-data 0 Oct 13 11:54 p
dr-xr-xr-x 2 root www-data 0 Nov  5 20:17 pro01

# rm -rf /etc/pve/nodes/p/
rm: cannot remove '/etc/pve/nodes/p/lxc': Permission denied
rm: cannot remove '/etc/pve/nodes/p/pve-ssl.key': Permission denied
rm: cannot remove '/etc/pve/nodes/p/lrm_status': Permission denied
rm: cannot remove '/etc/pve/nodes/p/pve-ssl.pem': Permission denied
rm: cannot remove '/etc/pve/nodes/p/priv': Permission denied
rm: cannot remove '/etc/pve/nodes/p/openvz': Permission denied
rm: cannot remove '/etc/pve/nodes/p/qemu-server': Permission denied

# service pve-cluster restart
# ls -la /etc/pve/nodes/
total 0
dr-xr-xr-x 2 root www-data 0 Oct 13 11:54 .
drwxr-xr-x 2 root www-data 0 Dec 31  1969 ..
dr-xr-xr-x 2 root www-data 0 Oct 13 11:54 p
dr-xr-xr-x 2 root www-data 0 Nov  5 20:17 pro01

No change. The node still shows in the host list on the new host.
 
Unfortunately, this is not how clusters in PVE work. You can't really kill the cluster by simply removing one of the two nodes; a single remaining node is basically unable to form a quorum (with two nodes, quorum requires two votes, which is why your surviving node shows 'Quorum: 2 Activity blocked' above). You have to kill the cluster as a whole.

On the surviving node, this should help:

https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_separate_node_without_reinstall
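
For reference, the steps in that guide boil down to roughly the following, run on the surviving node (a sketch only; follow the linked guide for your PVE version):

Code:
# stop the cluster filesystem and corosync
systemctl stop pve-cluster corosync
# start pmxcfs in local mode so /etc/pve is writable without quorum
pmxcfs -l
# remove the corosync configuration
rm /etc/corosync/*
rm /etc/pve/corosync.conf
# stop the local-mode pmxcfs and restart the normal service
killall pmxcfs
systemctl start pve-cluster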

If you still see the dead p in the GUI after the above is done, you can rm the leftovers in /etc/pve/nodes/p/.
 
I understand that now :). Thank you for helping me.

I booted the node I tried to remove. On that node (p), it cannot see any nodes or any status:

Code:
root@p:~# # pvecm nodes
root@p:~# # pvecm status
root@p:~# # pvecm status
root@p:~# # pvecm nodes
root@p:~# # pvecm nodes

Yet the node was still visible on the new host:

Code:
root@pro01:~# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         1          1 p
         2          1 pro01 (local)
root@pro01:~# pvecm delnode p
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
cfs-lock 'file-corosync_conf' error: got lock request timeout
root@pro01:~# pvecm delnode p
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
trying to acquire cfs lock 'file-corosync_conf' ...
cfs-lock 'file-corosync_conf' error: got lock request timeout

The solution was to remove the cluster completely:

Code:
systemctl stop pve-cluster corosync
pmxcfs -l
rm /etc/corosync/*
rm /etc/pve/corosync.conf
killall pmxcfs
systemctl start pve-cluster
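
Afterwards, a quick way to confirm the node is standalone again is to check that the corosync configs are really gone (in my experience both should report "No such file or directory" on a non-clustered node):

Code:
# both should now report "No such file or directory"
cat /etc/corosync/corosync.conf
cat /etc/pve/corosync.conf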
 
Now the question is: if I build a second host and create a new cluster, will I have something left over that will break doing that?
 
Now the question is: if I build a second host and create a new cluster, will I have something left over that will break doing that?
Pretty sure if you were getting official support they would advise tearing down the whole thing (with VMs backed up to restore) and starting from scratch, just to avoid having skeletons in the closet later on. :)

But the linked method basically brings you back to the state of a node after a fresh install with no clustering. So if you now have two nodes that each clearly do NOT think they are in any cluster, choose one to create the new cluster on (I would choose the freshly installed one) and then have the other one join it. You should be fine.
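
A minimal sketch of that (newClust and the IP are placeholders; substitute your own cluster name and the creating node's address):

Code:
# on the node chosen to start the new cluster:
pvecm create newClust
# on the other node, join using the creating node's IP:
pvecm add 192.168.192.243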

The guide basically makes the node bring its pmxcfs back into local mode; if you are interested in how this works, you may want to read up on it a bit, starting here:
https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)

But I am sure (I hope) there are better third-party pmxcfs articles on the internet.
 
Thank you very much. I'll check that link and see what I can learn from it.

BTW, running those commands on the new node worked as you mentioned.
Running them on the 'p' node (yes, I should have named it better) never removes the other node. It's always shown in the Datacenter list.

Looking at the cluster, however, both nodes show no cluster.
I'll just rebuild the 'p' node and it should be fine now, I hope :).
 
It's always shown in the Datacenter list.
First I would reload the web app. Second, I would check ls /etc/pve/nodes on the node that is showing you clutter in the GUI list of nodes; if there's nothing there, I would wonder what was left in cat /etc/corosync/corosync.conf. But if it only shows the node greyed out, it's not going to disrupt anything.
 
I had already shut down the 'p' node, but on the new node, with no cluster, I still see the node in one of the files.

Code:
# ls /etc/pve/nodes
p  pro01

# cat /etc/corosync/corosync.conf
cat: /etc/corosync/corosync.conf: No such file or directory
 
Code:
# cat /etc/corosync/corosync.conf
cat: /etc/corosync/corosync.conf: No such file or directory
It's off the cluster, good.

I had already shut down the 'p' node, but on the new node, with no cluster, I still see the node in one of the files.

Code:
# ls /etc/pve/nodes
p  pro01

There you go: rm -rf /etc/pve/nodes/p and reload the web app. Do not make a typo there. :)
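
A quick sketch of that cleanup, with a listing before and after to be safe:

Code:
# list first to make sure you are targeting the right directory
ls /etc/pve/nodes
# remove only the stale node's directory
rm -rf /etc/pve/nodes/p
# confirm only the live node remains
ls /etc/pve/nodes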
 
All this worked. Thank you very much for your help.
And, now I know better how to remove a node correctly.
 
You're welcome. You can set the thread as solved for others where you edit the thread title.
 
