[SOLVED] Issues after renaming host: unable to connect to the web GUI, and VMs not working

jml
New Member
Nov 9, 2020
Currently running Proxmox VE on one host (bare metal).

I attempted to rename my host by editing the /etc/hostname and /etc/hosts files.

See below for most of the relevant information.

Thank you for your help.
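For reference, after a rename the two files should agree with each other. A rough sketch of a consistent state, using the hostname nas and the IP 10.10.0.4 from the output below (nas.local is an assumed FQDN, not from my config):

```
# /etc/hostname
nas

# /etc/hosts
127.0.0.1       localhost
10.10.0.4       nas.local nas
```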




root@nas:/etc/pve/local# hostname --ip-address
10.10.0.4

This looks OK.

I successfully followed this wiki page


https://pve.proxmox.com/wiki/Proxmox_SSL_Error_Fixing



root@nas:/etc/pve/local# journalctl -xe
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pvesr.service has begun execution.
--
-- The job identifier is 11146.
Nov 09 15:20:01 nas pvesr[110066]: ipcc_send_rec[1] failed: Connection refused
Nov 09 15:20:01 nas pvesr[110066]: ipcc_send_rec[2] failed: Connection refused
Nov 09 15:20:01 nas pvesr[110066]: ipcc_send_rec[3] failed: Connection refused
Nov 09 15:20:01 nas pvesr[110066]: Unable to load access control list: Connection refused
Nov 09 15:20:01 nas systemd[1]: pvesr.service: Main process exited, code=exited, status=111/n/a
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- An ExecStart= process belonging to unit pvesr.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 111.
Nov 09 15:20:01 nas systemd[1]: pvesr.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit pvesr.service has entered the 'failed' state with result 'exit-code'.
Nov 09 15:20:01 nas systemd[1]: Failed to start Proxmox VE replication runner.
-- Subject: A start job for unit pvesr.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pvesr.service has finished with a failure.
--
-- The job identifier is 11146 and the job result is failed.
Nov 09 15:20:01 nas cron[3009]: (*system*vzdump) CAN'T OPEN SYMLINK (/etc/cron.d/vzdump)
Nov 09 15:21:00 nas systemd[1]: Starting Proxmox VE replication runner...
-- Subject: A start job for unit pvesr.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pvesr.service has begun execution.
--
-- The job identifier is 11202.
Nov 09 15:21:01 nas pvesr[110427]: ipcc_send_rec[1] failed: Connection refused
Nov 09 15:21:01 nas pvesr[110427]: ipcc_send_rec[2] failed: Connection refused
Nov 09 15:21:01 nas pvesr[110427]: ipcc_send_rec[3] failed: Connection refused
Nov 09 15:21:01 nas pvesr[110427]: Unable to load access control list: Connection refused
Nov 09 15:21:01 nas systemd[1]: pvesr.service: Main process exited, code=exited, status=111/n/a
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- An ExecStart= process belonging to unit pvesr.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 111.
Nov 09 15:21:01 nas systemd[1]: pvesr.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit pvesr.service has entered the 'failed' state with result 'exit-code'.
Nov 09 15:21:01 nas systemd[1]: Failed to start Proxmox VE replication runner.
-- Subject: A start job for unit pvesr.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pvesr.service has finished with a failure.
--
-- The job identifier is 11202 and the job result is failed.
Nov 09 15:21:01 nas cron[3009]: (*system*vzdump) CAN'T OPEN SYMLINK (/etc/cron.d/vzdump)



root@nas:/etc/pve/local# pvecm status
ipcc_send_rec[1] failed: Connection refused
ipcc_send_rec[2] failed: Connection refused
ipcc_send_rec[3] failed: Connection refused
Unable to load access control list: Connection refused


root@nas:/etc/pve/local# systemctl status pve-cluster pveproxy pvedaemon
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2020-11-09 15:19:42 CST; 5min ago
Process: 109845 ExecStart=/usr/bin/pmxcfs (code=exited, status=255/EXCEPTION)

Nov 09 15:19:42 nas systemd[1]: pve-cluster.service: Service RestartSec=100ms expired, scheduling restart.
Nov 09 15:19:42 nas systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 5.
Nov 09 15:19:42 nas systemd[1]: Stopped The Proxmox VE cluster filesystem.
Nov 09 15:19:42 nas systemd[1]: pve-cluster.service: Start request repeated too quickly.
Nov 09 15:19:42 nas systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Nov 09 15:19:42 nas systemd[1]: Failed to start The Proxmox VE cluster filesystem.

● pveproxy.service - PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-11-09 15:15:10 CST; 10min ago
Process: 107960 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=111)
Process: 107967 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
Main PID: 107968 (pveproxy)
Tasks: 4 (limit: 4915)
Memory: 129.7M
CGroup: /system.slice/pveproxy.service
├─107968 pveproxy
├─107969 pveproxy worker
├─107970 pveproxy worker
└─107971 pveproxy worker

Nov 09 15:15:10 nas pvecm[107960]: ipcc_send_rec[1] failed: Connection refused
Nov 09 15:15:10 nas pvecm[107960]: ipcc_send_rec[2] failed: Connection refused
Nov 09 15:15:10 nas pvecm[107960]: ipcc_send_rec[3] failed: Connection refused
Nov 09 15:15:10 nas pvecm[107960]: Unable to load access control list: Connection refused
Nov 09 15:15:10 nas pveproxy[107968]: starting server
Nov 09 15:15:10 nas pveproxy[107968]: starting 3 worker(s)
Nov 09 15:15:10 nas pveproxy[107968]: worker 107969 started
Nov 09 15:15:10 nas pveproxy[107968]: worker 107970 started
Nov 09 15:15:10 nas pveproxy[107968]: worker 107971 started
Nov 09 15:15:10 nas systemd[1]: Started PVE API Proxy Server.

● pvedaemon.service - PVE API Daemon
Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-11-09 14:58:59 CST; 26min ago
Process: 92678 ExecStart=/usr/bin/pvedaemon start (code=exited, status=0/SUCCESS)
Main PID: 92684 (pvedaemon)
Tasks: 4 (limit: 4915)
Memory: 126.0M
CGroup: /system.slice/pvedaemon.service
├─92684 pvedaemon
├─92685 pvedaemon worker
├─92686 pvedaemon worker
└─92687 pvedaemon worker

Nov 09 14:58:58 nas systemd[1]: Starting PVE API Daemon...
Nov 09 14:58:59 nas pvedaemon[92684]: starting server
Nov 09 14:58:59 nas pvedaemon[92684]: starting 3 worker(s)
Nov 09 14:58:59 nas pvedaemon[92684]: worker 92685 started
Nov 09 14:58:59 nas pvedaemon[92684]: worker 92686 started
Nov 09 14:58:59 nas pvedaemon[92684]: worker 92687 started
Nov 09 14:58:59 nas systemd[1]: Started PVE API Daemon.
Thank you for your reply. I don't have another node to migrate to; is it possible to empty the node via the CLI, make the folder change, and then reimport my VMs?

Also, there is no nodes folder in /etc/pve.
Hey,

is it possible that you don't have an IP configured for the new hostname?
The pve-cluster.service is responsible for creating that folder, and the line Nov 09 15:19:42 nas systemd[1]: pve-cluster.service: Failed with result 'exit-code'. indicates that the service couldn't start; one possible reason for that is a missing IP for the hostname.
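One quick way to check that is to make sure the hostname in /etc/hosts maps to a real LAN IP and not a loopback address. A hedged sketch (the check_hosts helper is made up for illustration; nas and 10.10.0.4 are the values from this thread, and the sample files are written to /tmp just to demonstrate):

```shell
# Sketch: check whether a hosts-format file maps a hostname to a real
# LAN IP rather than a loopback address (the classic 127.0.1.1 pitfall).
check_hosts() {
  hosts_file="$1"; name="$2"
  # Find the first non-comment line whose alias fields contain the name,
  # and print its address field.
  addr="$(awk -v h="$name" '$0 !~ /^[[:space:]]*#/ {
            for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit }
          }' "$hosts_file")"
  case "$addr" in
    "")    echo "BAD: $name not found in $hosts_file" ;;
    127.*) echo "BAD: $name resolves to loopback $addr" ;;
    *)     echo "OK: $name -> $addr" ;;
  esac
}

# A good entry for this thread's host:
printf '127.0.0.1 localhost\n10.10.0.4 nas.local nas\n' > /tmp/hosts.good
check_hosts /tmp/hosts.good nas   # OK: nas -> 10.10.0.4

# The common Debian pitfall, hostname pinned to 127.0.1.1:
printf '127.0.0.1 localhost\n127.0.1.1 nas\n' > /tmp/hosts.bad
check_hosts /tmp/hosts.bad nas    # BAD: nas resolves to loopback 127.0.1.1
```

On the live system the same idea can be checked with hostname --ip-address (as shown above) or getent hosts nas; if either reports a 127.x address, pmxcfs won't be able to bind correctly.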