Error after changing node name

EvgeniyRepin

Please forgive my English.
I changed the node name following the manual (https://pve.proxmox.com/wiki/Renaming_a_PVE_node), but my node was not empty. After rebooting the server, the `pct list` command prints errors:
ipcc_send_rec[1] failed: Connection refused
ipcc_send_rec[2] failed: Connection refused
ipcc_send_rec[3] failed: Connection refused
Unable to load access control list: Connection refused

Please tell me how to fix this.

root@pve1-backups:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve)
pve-manager: not correctly installed (running version: 5.2-5/eb24855a)
pve-kernel-4.15: 5.2-4
pve-kernel-4.15.18-1-pve: 4.15.18-15
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-35
libpve-guest-common-perl: not correctly installed
libpve-http-server-perl: 2.0-9
libpve-storage-perl: 5.0-24
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-1
proxmox-widget-toolkit: 1.0-19
pve-cluster: not correctly installed
pve-container: not correctly installed
pve-docs: 5.2-4
pve-firewall: not correctly installed
pve-firmware: 2.0-5
pve-ha-manager: not correctly installed
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: not correctly installed
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9
 
Looks like the pve-cluster service is not running properly. Verify this by stopping it (`systemctl stop pve-cluster.service`). Then run `pmxcfs -l` to start the cluster file system in "local mode" and go to /etc/pve/nodes; there should be a folder with the old node name, and most likely one with the new name as well. Move the contents from the old one to the new one, making sure the directory structure inside stays the same. Afterwards stop pmxcfs (`fusermount -u /etc/pve`) and restart pve-cluster.service (or better: reboot).
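A rough sketch of that sequence, assuming the old node name is the placeholder OLDNAME (substitute whatever `ls /etc/pve/nodes` shows besides pve1-backups):

systemctl stop pve-cluster.service
pmxcfs -l                                   # mount /etc/pve in local mode
ls /etc/pve/nodes                           # e.g. OLDNAME  pve1-backups
mv /etc/pve/nodes/OLDNAME/qemu-server/*.conf /etc/pve/nodes/pve1-backups/qemu-server/
mv /etc/pve/nodes/OLDNAME/lxc/*.conf /etc/pve/nodes/pve1-backups/lxc/
fusermount -u /etc/pve                      # stop the locally mounted pmxcfs
systemctl start pve-cluster.service         # or better: reboot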

Edit: Also make sure /etc/hosts and /etc/hostname contain the new name where required, before rebooting.
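For example, the two files could look like this (the IP address is a placeholder - use the node's real address):

/etc/hostname:
pve1-backups

/etc/hosts:
127.0.0.1 localhost.localdomain localhost
192.0.2.10 pve1-backups.buhphone.com pve1-backups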
 
root@pve1-backups:~# pmxcfs -l
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option
[main] crit: fuse_mount error: File exists
[main] notice: exit proxmox configuration filesystem (-1)
 
The "mountpoint is not empty" message indicates that stopping pve-cluster was not successful.

As @wbumiller wrote:
  • stop the pve-cluster service with `systemctl stop pve-cluster.service`
  • make sure it is stopped by checking the output of `ps auxwf | grep pmxcfs`
  • in any case, try to unmount pmxcfs by running `fusermount -u /etc/pve`
  • then try to start the service again
The logs (`journalctl -r`) should provide more information if this doesn't work directly.
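Put together, with one small optional refinement (bracketing a character in the pattern so grep does not match its own process):

systemctl stop pve-cluster.service
ps auxwf | grep '[p]mxcfs'             # no output means pmxcfs is really gone
fusermount -u /etc/pve
systemctl start pve-cluster.service
journalctl -r -u pve-cluster.service   # newest log entries first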
 

root@pve1-backups:~# ps auxwf |grep pmxcfs
root 16059 0.0 0.0 12788 976 pts/1 S+ 15:36 0:00 \_ grep pmxcfs

root@pve1-backups:~# fusermount -u /etc/pve
fusermount: failed to unmount /etc/pve: Invalid argument

root@pve1-backups:~# journalctl -u pve-cluster.service
-- Logs begin at Wed 2018-07-25 11:01:05 MSK, end at Tue 2018-07-31 15:35:21 MSK. --
Jul 25 11:01:10 pve1-backups.buhphone.com systemd[1]: Starting The Proxmox VE cluster filesystem...
Jul 25 11:01:11 pve1-backups.buhphone.com pmxcfs[3135]: fuse: mountpoint is not empty
Jul 25 11:01:11 pve1-backups.buhphone.com pmxcfs[3135]: fuse: if you are sure this is safe, use the 'nonempty' mount option
Jul 25 11:01:11 pve1-backups.buhphone.com pmxcfs[3135]: [main] crit: fuse_mount error: File exists
Jul 25 11:01:11 pve1-backups.buhphone.com pmxcfs[3135]: [main] notice: exit proxmox configuration filesystem (-1)
Jul 25 11:01:11 pve1-backups.buhphone.com systemd[1]: pve-cluster.service: Control process exited, code=exited status=255
Jul 25 11:01:11 pve1-backups.buhphone.com systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Jul 25 11:01:11 pve1-backups.buhphone.com systemd[1]: pve-cluster.service: Unit entered failed state.
Jul 25 11:01:11 pve1-backups.buhphone.com systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Jul 31 15:34:42 pve1-backups.buhphone.com systemd[1]: Starting The Proxmox VE cluster filesystem...
Jul 31 15:34:42 pve1-backups.buhphone.com pmxcfs[15070]: fuse: mountpoint is not empty
Jul 31 15:34:42 pve1-backups.buhphone.com pmxcfs[15070]: fuse: if you are sure this is safe, use the 'nonempty' mount option
Jul 31 15:34:42 pve1-backups.buhphone.com pmxcfs[15070]: [main] crit: fuse_mount error: File exists
Jul 31 15:34:42 pve1-backups.buhphone.com pmxcfs[15070]: [main] notice: exit proxmox configuration filesystem (-1)
Jul 31 15:34:42 pve1-backups.buhphone.com systemd[1]: pve-cluster.service: Control process exited, code=exited status=255
Jul 31 15:34:42 pve1-backups.buhphone.com systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Jul 31 15:34:42 pve1-backups.buhphone.com systemd[1]: pve-cluster.service: Unit entered failed state.
Jul 31 15:34:42 pve1-backups.buhphone.com systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
 
Hm - seems the filesystem is not mounted, according to the fusermount output.
If you stop the pve-cluster service, you can verify that it is indeed unmounted with `mount | grep pve`.
If the output is empty, it could have happened that something wrote into `/etc/pve` without the filesystem being mounted:
check the contents of `/etc/pve` (`ls /etc/pve`) and move the files away. Once the directory is empty, follow @wbumiller's suggestion.
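As a compact sequence (the destination directory is just an example):

systemctl stop pve-cluster.service
mount | grep pve                       # empty output: pmxcfs is not mounted
ls /etc/pve                            # anything listed was written to the plain directory
mkdir -p /root/etc-pve-leftovers       # example destination
mv /etc/pve/* /root/etc-pve-leftovers/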
 
root@pve1-backups:~# systemctl stop pve-cluster.service
root@pve1-backups:~# mount | grep pve
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,noacl)

root@pve1-backups:~# ls /etc/pve/
authkey.pub datacenter.cfg local lxc nodes openvz priv pve-root-ca.pem pve-www.key qemu-server storage.cfg user.cfg vzdump.cron

root@pve1-backups:~# mv /etc/pve/* /home/backup/pve_1_08_18/
root@pve1-backups:~# ls /etc/pve/
root@pve1-backups:~#

root@pve1-backups:~# pmxcfs -l
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option
[main] crit: fuse_mount error: File exists
[main] notice: exit proxmox configuration filesystem (-1)

root@pve1-backups:~# fusermount -u /etc/pve
fusermount: failed to unmount /etc/pve: Invalid argument

After a reboot:

root@pve1-backups:~# pct list
ipcc_send_rec[1] failed: Connection refused
ipcc_send_rec[2] failed: Connection refused
ipcc_send_rec[3] failed: Connection refused
Unable to load access control list: Connection refused
root@pve1-backups:~# ls -alh /etc/pve/
total 36K
drwxr-xr-x 2 root root 8 Aug 1 07:49 .
drwxr-xr-x 97 root root 189 Jul 25 09:47 ..
-r--r----- 1 root www-data 8.5K Jan 1 1970 .clusterlog
-rw-r----- 1 root www-data 2 Jan 1 1970 .debug
-r--r----- 1 root www-data 50 Jan 1 1970 .members
-r--r----- 1 root www-data 776 Jan 1 1970 .rrd
-r--r----- 1 root www-data 417 Jan 1 1970 .version
-r--r----- 1 root www-data 238 Jan 1 1970 .vmlist
root@pve1-backups:~# date
Wed Aug 1 08:22:31 MSK 2018

root@pve1-backups:~# systemctl stop pve-cluster.service
root@pve1-backups:~# pmxcfs -l
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option
[main] crit: fuse_mount error: File exists
[main] notice: exit proxmox configuration filesystem (-1)
root@pve1-backups:~# journalctl -u pve-cluster.service
-- Logs begin at Wed 2018-08-01 07:53:41 MSK, end at Wed 2018-08-01 08:26:59 MSK. --
Aug 01 07:53:46 pve1-backups.buhphone.com systemd[1]: Starting The Proxmox VE cluster filesystem...
Aug 01 07:53:46 pve1-backups.buhphone.com pmxcfs[3629]: fuse: mountpoint is not empty
Aug 01 07:53:46 pve1-backups.buhphone.com pmxcfs[3629]: fuse: if you are sure this is safe, use the 'nonempty' mount option
Aug 01 07:53:46 pve1-backups.buhphone.com pmxcfs[3629]: [main] crit: fuse_mount error: File exists
Aug 01 07:53:46 pve1-backups.buhphone.com pmxcfs[3629]: [main] notice: exit proxmox configuration filesystem (-1)
Aug 01 07:53:46 pve1-backups.buhphone.com systemd[1]: pve-cluster.service: Control process exited, code=exited status=255
Aug 01 07:53:46 pve1-backups.buhphone.com systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Aug 01 07:53:46 pve1-backups.buhphone.com systemd[1]: pve-cluster.service: Unit entered failed state.
Aug 01 07:53:46 pve1-backups.buhphone.com systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
 
Maybe there are also 'hidden' or dot files (filenames starting with '.') in the directory - `ls -a` should show them - they need to be moved as well
(or you could move the whole directory /etc/pve and recreate it with the same permissions/ownership).
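For example (the glob below skips '.' and '..'; the destination directory is again just an example):

systemctl stop pve-cluster.service
ls -a /etc/pve
mv /etc/pve/.[!.]* /root/etc-pve-leftovers/   # move the dot files too
# or move the whole directory and recreate it (root:root, 0755, as in the listing above):
mv /etc/pve /root/etc-pve-old
mkdir /etc/pve
systemctl start pve-cluster.service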
 
Thank you very very much!!!
root@pve1-backups:~# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-08-01 11:45:18 MSK; 3min 23s ago
Process: 3169 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
Process: 3132 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
Main PID: 3161 (pmxcfs)
Tasks: 6 (limit: 4915)
Memory: 31.8M
CPU: 328ms
CGroup: /system.slice/pve-cluster.service
└─3161 /usr/bin/pmxcfs

Aug 01 11:45:17 pve1-backups.buhphone.com systemd[1]: Starting The Proxmox VE cluster filesystem...
Aug 01 11:45:18 pve1-backups.buhphone.com systemd[1]: Started The Proxmox VE cluster filesystem.

But `pct list` is empty. How do I restore my containers?
 
I guess it's empty due to the change of the node name - see @wbumiller's post for the likely next steps.
 
Depending on how you moved the files, that might be a result of pmxcfs internals: since the vmids (both for VMs and for containers) need to be unique across a cluster, it is not possible to have a config file with the same id twice in the pmxcfs (i.e. `cp` won't work, `mv` should work).
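Illustrated with a hypothetical container 101 and the placeholder old node name from before:

# inside the mounted pmxcfs, copying creates a second config with the same vmid - refused:
cp /etc/pve/nodes/OLDNAME/lxc/101.conf /etc/pve/nodes/pve1-backups/lxc/
# a move is a rename, so the vmid stays unique - this works:
mv /etc/pve/nodes/OLDNAME/lxc/101.conf /etc/pve/nodes/pve1-backups/lxc/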
 
Thanks, I used 'F6' in Midnight Commander, so 'mv' was used - it worked. Where does Proxmox store information about nodes? (After I deleted '/etc/pve/*' and rebooted, the folders were recovered.)
 