[SOLVED] Cannot Access Proxmox Web UI After Restart

nyxynyx

New Member
Oct 1, 2020
Proxmox has been working fine for about 3 months and had restarted several times properly.

However, I used the Proxmox web UI to add a new DNS server (192.168.1.1) and to edit the hostname. After applying these changes and restarting, I am no longer able to access the Proxmox web UI at https://192.168.1.2:8006

I am able to ping and SSH into the Proxmox server.

What should I do to recover the web UI?

systemctl status pveproxy.service
Code:
# systemctl status pveproxy.service
● pveproxy.service - PVE API Proxy Server
   Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2020-12-30 10:48:45 EST; 6s ago
  Process: 2537 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=111)
  Process: 2538 ExecStart=/usr/bin/pveproxy start (code=exited, status=255/EXCEPTION)

Dec 30 10:48:45 proxmox systemd[1]: pveproxy.service: Service RestartSec=100ms expired, scheduling restart.
Dec 30 10:48:45 proxmox systemd[1]: pveproxy.service: Scheduled restart job, restart counter is at 5.
Dec 30 10:48:45 proxmox systemd[1]: Stopped PVE API Proxy Server.
Dec 30 10:48:45 proxmox systemd[1]: pveproxy.service: Start request repeated too quickly.
Dec 30 10:48:45 proxmox systemd[1]: pveproxy.service: Failed with result 'exit-code'.
Dec 30 10:48:45 proxmox systemd[1]: Failed to start PVE API Proxy Server.

systemctl restart pveproxy
Code:
# systemctl restart pveproxy
Job for pveproxy.service failed because the control process exited with error code.
See "systemctl status pveproxy.service" and "journalctl -xe" for details.

systemctl status pve-cluster
Code:
# systemctl status pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2020-12-30 11:30:18 EST; 3min 25s ago

Dec 30 11:30:19 proxmox systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Dec 30 11:30:20 proxmox systemd[1]: pve-cluster.service: Start request repeated too quickly.
Dec 30 11:30:20 proxmox systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Dec 30 11:30:20 proxmox systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Dec 30 11:30:21 proxmox systemd[1]: pve-cluster.service: Start request repeated too quickly.
Dec 30 11:30:21 proxmox systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Dec 30 11:30:21 proxmox systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Dec 30 11:30:22 proxmox systemd[1]: pve-cluster.service: Start request repeated too quickly.
Dec 30 11:30:22 proxmox systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Dec 30 11:30:22 proxmox systemd[1]: Failed to start The Proxmox VE cluster filesystem.

systemctl restart pve-cluster
Code:
# systemctl restart pve-cluster
Job for pve-cluster.service failed because the control process exited with error code.
See "systemctl status pve-cluster.service" and "journalctl -xe" for details.

systemctl status pve-cluster.service
Code:
# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2020-12-30 11:34:20 EST; 15s ago
  Process: 20341 ExecStart=/usr/bin/pmxcfs (code=exited, status=255/EXCEPTION)

Dec 30 11:34:20 proxmox systemd[1]: pve-cluster.service: Service RestartSec=100ms expired, scheduling restart.
Dec 30 11:34:20 proxmox systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 5.
Dec 30 11:34:20 proxmox systemd[1]: Stopped The Proxmox VE cluster filesystem.
Dec 30 11:34:20 proxmox systemd[1]: pve-cluster.service: Start request repeated too quickly.
Dec 30 11:34:20 proxmox systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Dec 30 11:34:20 proxmox systemd[1]: Failed to start The Proxmox VE cluster filesystem.
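
The status output above only shows that pmxcfs exited with status 255; the actual error message should appear in the journal, which can be viewed with something like:
Code:
# journalctl -u pve-cluster.service -b --no-pager | tail -n 30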

ip a s
Code:
# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp39s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 2c:f0:5d:60:71:b5 brd ff:ff:ff:ff:ff:ff
3: wlo1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 8c:c6:81:f2:28:85 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2c:f0:5d:60:71:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::2ef0:5dff:fe60:71b5/64 scope link
       valid_lft forever preferred_lft forever

dpkg -S pvesh
Code:
# dpkg -S pvesh
pve-docs: /usr/share/pve-docs/chapter-pvesh.html
pve-manager: /usr/bin/pvesh
pve-manager: /usr/share/zsh/vendor-completions/_pvesh
pve-docs: /usr/share/pve-docs/pvesh-plain.html
pve-manager: /usr/share/bash-completion/completions/pvesh
pve-manager: /usr/share/perl5/PVE/CLI/pvesh.pm
pve-manager: /usr/share/man/man1/pvesh.1.gz
pve-docs: /usr/share/pve-docs/pvesh.1.html

pveversion -v
Code:
# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
However, I used the Proxmox web UI to add a new DNS server (192.168.1.1) and to edit the hostname.
If you change the hostname of your PVE node (or its IP), you need to make sure that you have a correct entry in '/etc/hosts'.

Put differently: `uname -n` should be pingable - and should resolve to an IP configured on your PVE node
Check the output of:
Code:
uname -n
hostname -f
ping -c 3 $(uname -n)
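
If the entry is missing or wrong, '/etc/hosts' should contain a line mapping the node name to its static IP. A minimal example, assuming the node is called 'proxmox' and uses the 192.168.1.2 address from your 'ip a' output (adjust hostname and domain to your setup):
Code:
127.0.0.1 localhost.localdomain localhost
192.168.1.2 proxmox.yourdomain.local proxmox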

On another note: please consider upgrading to the latest available version. PVE 6.3 was released a bit over a month ago (and we have released quite a few updates since then as well).
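
Assuming the appropriate PVE 6.x package repository (enterprise or no-subscription) is already configured in APT, the upgrade itself would be the usual:
Code:
# apt update
# apt full-upgrade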

I hope this helps!