[SOLVED] WebUI Broken after update to 6.2-10 (Error 501)

adamboutcher
Member · Jul 28, 2020
Hi,

I recently updated our 4-node cluster to 6.2-10, and now the WebUI is broken.
I can log in, but I can't get VM names, host status, or the CLI.
There are also a load of Error 501s showing up in the browser web console.

Other functionality seems to work, though.

Checking with htop via SSH, pmxcfs is taking up a lot of CPU, but this might be normal?

pvesh get /nodes/ via SSH also shows no CPU/memory usage.
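
For reference, the checks looked roughly like this over SSH:

Bash:
# Watch per-process CPU usage; pmxcfs was consistently near the top
htop

# Query node status via the API from the shell (the CPU/memory columns come back empty)
pvesh get /nodes/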

If this isn't a bug, can somebody point me in the right direction?
 

Attachments: proxmox-1.png (29.8 KB), proxmox-2.png (8.8 KB), proxmox-3.png (103 KB)
Hi,

Did you restart the nodes after the upgrade? If so, please cold-restart as well; if that doesn't help, please post the output of pveversion -v.
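
If a cold restart of a whole node is awkward to schedule, restarting just the UI-facing services can help narrow things down (a sketch; these are the standard PVE 6.x unit names):

Bash:
# Restart the web proxy, API daemon, and status daemon on one node
systemctl restart pveproxy pvedaemon pvestatd
systemctl status pveproxy pvedaemon pvestatd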
 
A reboot (I rebooted nodes 1, 2, and 4) temporarily fixed it, but after ~30-60 minutes they all dropped out again and refused to migrate or open the shell.

Bash:
root@grid-pve-01:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-10 (running version: 6.2-10/a20769ed)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-1
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-12
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-11
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-11
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

Bash:
root@grid-pve-02:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-10 (running version: 6.2-10/a20769ed)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-3-pve: 5.3.13-3
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-1
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-12
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-11
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-11
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

Bash:
root@grid-pve-03:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-10 (running version: 6.2-10/a20769ed)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-3-pve: 5.3.13-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-1
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-12
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-11
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-11
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

Bash:
root@grid-pve-04:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-10 (running version: 6.2-10/a20769ed)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-3-pve: 5.3.13-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-1
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-12
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-11
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-11
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
 
Hi again,

Can you see if there are any errors in syslog/journalctl?
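
For example, something along these lines (any equivalent journalctl filter works):

Bash:
# Errors and worse since the current boot
journalctl -b -p err

# Recent messages from the PVE daemons specifically
journalctl -u pvestatd -u pveproxy -u pve-cluster --since "-1h"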

Also check whether all Proxmox services are running, with this command:
Bash:
pvesh get /nodes/{node}/services/

Or via the GUI: Datacenter -> NodeName -> System.
 
Bash:
root@grid-pve-01:~# pvesh get /nodes/grid-pve-01/services --noborder
desc                                         name              service           state
Corosync Cluster Engine                      corosync          corosync          running
Kernel Samepage Merging (KSM) Tuning Daemon  ksmtuned          ksmtuned          running
Network Time Synchronization                 systemd-timesyncd systemd-timesyncd running
OpenBSD Secure Shell server                  sshd              sshd              running
PVE API Daemon                               pvedaemon         pvedaemon         running
PVE API Proxy Server                         pveproxy          pveproxy          running
PVE Cluster HA Resource Manager Daemon       pve-ha-crm        pve-ha-crm        running
PVE Local HA Resource Manager Daemon         pve-ha-lrm        pve-ha-lrm        running
PVE SPICE Proxy Server                       spiceproxy        spiceproxy        running
PVE Status Daemon                            pvestatd          pvestatd          running
Postfix Mail Transport Agent (instance -)    postfix           postfix           running
Proxmox VE firewall                          pve-firewall      pve-firewall      running
Proxmox VE firewall logger                   pvefw-logger      pvefw-logger      running
Regular background program processing daemon cron              cron              running
System Logging Service                       syslog            syslog            running
The Proxmox VE cluster filesystem            pve-cluster       pve-cluster       running

Bash:
root@grid-pve-01:~# pvesh get /nodes/grid-pve-02/services --noborder
desc                                         name              service           state
Corosync Cluster Engine                      corosync          corosync          running
Kernel Samepage Merging (KSM) Tuning Daemon  ksmtuned          ksmtuned          running
Network Time Synchronization                 systemd-timesyncd systemd-timesyncd running
OpenBSD Secure Shell server                  sshd              sshd              running
PVE API Daemon                               pvedaemon         pvedaemon         running
PVE API Proxy Server                         pveproxy          pveproxy          running
PVE Cluster HA Resource Manager Daemon       pve-ha-crm        pve-ha-crm        running
PVE Local HA Resource Manager Daemon         pve-ha-lrm        pve-ha-lrm        running
PVE SPICE Proxy Server                       spiceproxy        spiceproxy        running
PVE Status Daemon                            pvestatd          pvestatd          running
Postfix Mail Transport Agent (instance -)    postfix           postfix           running
Proxmox VE firewall                          pve-firewall      pve-firewall      running
Proxmox VE firewall logger                   pvefw-logger      pvefw-logger      running
Regular background program processing daemon cron              cron              running
System Logging Service                       syslog            syslog            running
The Proxmox VE cluster filesystem            pve-cluster       pve-cluster       running
 
Bash:
root@grid-pve-01:~# pvesh get /nodes/grid-pve-03/services --noborder
desc                                         name              service           state
Corosync Cluster Engine                      corosync          corosync          running
Kernel Samepage Merging (KSM) Tuning Daemon  ksmtuned          ksmtuned          running
Network Time Synchronization                 systemd-timesyncd systemd-timesyncd running
OpenBSD Secure Shell server                  sshd              sshd              running
PVE API Daemon                               pvedaemon         pvedaemon         running
PVE API Proxy Server                         pveproxy          pveproxy          running
PVE Cluster HA Resource Manager Daemon       pve-ha-crm        pve-ha-crm        running
PVE Local HA Resource Manager Daemon         pve-ha-lrm        pve-ha-lrm        running
PVE SPICE Proxy Server                       spiceproxy        spiceproxy        running
PVE Status Daemon                            pvestatd          pvestatd          running
Postfix Mail Transport Agent (instance -)    postfix           postfix           running
Proxmox VE firewall                          pve-firewall      pve-firewall      running
Proxmox VE firewall logger                   pvefw-logger      pvefw-logger      running
Regular background program processing daemon cron              cron              running
System Logging Service                       syslog            syslog            running
The Proxmox VE cluster filesystem            pve-cluster       pve-cluster       running

Bash:
root@grid-pve-01:~# pvesh get /nodes/grid-pve-04/services --noborder
desc                                         name              service           state
Corosync Cluster Engine                      corosync          corosync          running
Kernel Samepage Merging (KSM) Tuning Daemon  ksmtuned          ksmtuned          running
Network Time Synchronization                 systemd-timesyncd systemd-timesyncd running
OpenBSD Secure Shell server                  sshd              sshd              running
PVE API Daemon                               pvedaemon         pvedaemon         running
PVE API Proxy Server                         pveproxy          pveproxy          running
PVE Cluster HA Resource Manager Daemon       pve-ha-crm        pve-ha-crm        running
PVE Local HA Resource Manager Daemon         pve-ha-lrm        pve-ha-lrm        running
PVE SPICE Proxy Server                       spiceproxy        spiceproxy        running
PVE Status Daemon                            pvestatd          pvestatd          running
Postfix Mail Transport Agent (instance -)    postfix           postfix           running
Proxmox VE firewall                          pve-firewall      pve-firewall      running
Proxmox VE firewall logger                   pvefw-logger      pvefw-logger      running
Regular background program processing daemon cron              cron              running
System Logging Service                       syslog            syslog            running
The Proxmox VE cluster filesystem            pve-cluster       pve-cluster       running
 
All the PVE nodes show this in the journalctl output (I have since disabled an API polling script we run, in case we're DDoSing ourselves):

Code:
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: can't lock file '/var/log/pve/tasks/.active.lock' - can't open file - Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: Use of uninitialized value $line in pattern match (m//) at /usr/share/perl5/PVE/ProcFSTools.pm line 128.
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: Use of uninitialized value in subtraction (-) at /usr/share/perl5/PVE/ProcFSTools.pm line 171.
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: Use of uninitialized value in subtraction (-) at /usr/share/perl5/PVE/ProcFSTools.pm line 171.
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: Use of uninitialized value in subtraction (-) at /usr/share/perl5/PVE/ProcFSTools.pm line 175.
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: Use of uninitialized value in subtraction (-) at /usr/share/perl5/PVE/ProcFSTools.pm line 175.
Jul 29 11:58:43 grid-pve-04 pvestatd[644586]: can't lock file '/var/log/pve/tasks/.active.lock' - got timeout
Jul 29 11:58:43 grid-pve-04 pvestatd[644586]: status update error: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[644586]: status update time (10.011 seconds)
Jul 29 11:58:43 grid-pve-04 pvestatd[545657]: ipcc_send_rec[2] failed: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[644586]: ipcc_send_rec[1] failed: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[573313]: can't lock file '/var/log/pve/tasks/.active.lock' - got timeout
Jul 29 11:58:43 grid-pve-04 pvestatd[573313]: status update error: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[573313]: status update time (10.011 seconds)
Jul 29 11:58:43 grid-pve-04 pvestatd[598785]: ipcc_send_rec[2] failed: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: node status update error: Unrecognised protocol udp at /usr/share/perl5/PVE/Status/Graphite.pm line 103.
Jul 29 11:58:43 grid-pve-04 pvestatd[573313]: ipcc_send_rec[1] failed: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[545657]: ipcc_send_rec[3] failed: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[644586]: ipcc_send_rec[2] failed: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[580130]: can't lock file '/var/log/pve/tasks/.active.lock' - got timeout
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: qemu status update error: Unrecognised protocol udp at /usr/share/perl5/PVE/Status/Graphite.pm line 103.
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: lxc status update error: Unrecognised protocol udp at /usr/share/perl5/PVE/Status/Graphite.pm line 103.
Jul 29 11:58:43 grid-pve-04 pvestatd[598785]: ipcc_send_rec[3] failed: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: Use of uninitialized value $max_cpuid in addition (+) at /usr/share/perl5/PVE/Service/pvestatd.pm line 272
Jul 29 11:58:43 grid-pve-04 pvestatd[580130]: status update error: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[580130]: status update time (10.009 seconds)
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: can't open '/etc/pve/priv/ceph/grid-pve-ceph.keyring' - Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: storage 'monstore' is not online
Jul 29 11:58:43 grid-pve-04 pvestatd[580130]: ipcc_send_rec[1] failed: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[573313]: ipcc_send_rec[2] failed: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[644586]: ipcc_send_rec[3] failed: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[580130]: ipcc_send_rec[2] failed: Too many open files
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: Can't call method "reader" on an undefined value at /usr/share/perl5/PVE/Tools.pm line 970.
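
Given all the "Too many open files" lines, this looks like a file-descriptor leak in pvestatd. A quick way to check (a sketch; it counts each pvestatd process's open fds against its soft limit):

Bash:
# Compare open descriptors per pvestatd process with its "Max open files" soft limit
for pid in $(pgrep pvestatd); do
    echo "$pid: $(ls /proc/$pid/fd | wc -l) fds open, soft limit $(awk '/Max open files/ {print $4}' /proc/$pid/limits)"
done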
 
We get the occasional segfault with Ceph, but that shouldn't cause issues with PVE.

Code:
Jul 28 14:17:57 grid-pve-04 kernel: [ 5126.518000] msgr-worker-2[56919]: segfault at ffffffffffffffe8 ip 00007fca49dc0f4f sp 00007fca45dcfc30 error 5 in libceph-common.so.0[7fca49a6f000+5e0000]
Jul 28 14:17:57 grid-pve-04 kernel: [ 5126.518012] Code: e3 0f 8e a4 00 00 00 49 63 c4 48 8d 14 40 48 8b 85 b8 00 00 00 48 8d 1c d0 48 8b 45 00 80 b8 bb 00 00 00 13 0f 87 e1 02 00 00 <8b> 13 44 39 ea 74 4f 48 8b bd d0 00 00 00 44 89 e9 44 89 e6 48 8b
Jul 28 14:18:06 grid-pve-04 kernel: [ 5135.620896] msgr-worker-2[57253]: segfault at ffffffffffffffe8 ip 00007fca49dc0f4f sp 00007fca45dcfc30 error 5 in libceph-common.so.0[7fca49a6f000+5e0000]
Jul 28 14:18:06 grid-pve-04 kernel: [ 5135.620910] Code: e3 0f 8e a4 00 00 00 49 63 c4 48 8d 14 40 48 8b 85 b8 00 00 00 48 8d 1c d0 48 8b 45 00 80 b8 bb 00 00 00 13 0f 87 e1 02 00 00 <8b> 13 44 39 ea 74 4f 48 8b bd d0 00 00 00 44 89 e9 44 89 e6 48 8b
Jul 28 14:18:16 grid-pve-04 kernel: [ 5145.717765] msgr-worker-2[57278]: segfault at ffffffffffffffe8 ip 00007fca49dc0f4f sp 00007fca45dcfc30 error 5 in libceph-common.so.0[7fca49a6f000+5e0000]
Jul 28 14:18:16 grid-pve-04 kernel: [ 5145.717778] Code: e3 0f 8e a4 00 00 00 49 63 c4 48 8d 14 40 48 8b 85 b8 00 00 00 48 8d 1c d0 48 8b 45 00 80 b8 bb 00 00 00 13 0f 87 e1 02 00 00 <8b> 13 44 39 ea 74 4f 48 8b bd d0 00 00 00 44 89 e9 44 89 e6 48 8b
 
So I've solved the issue, maybe...

There were a load of pvestatd processes sitting there hung; I killed them all, started pvestatd again, and everything's come back.
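
Roughly what I ran, for anyone else who hits this (review the process list before killing by name, in case anything else matches):

Bash:
pgrep -a pvestatd          # review what will be killed
pkill -9 pvestatd          # kill the hung workers
systemctl start pvestatd   # start a fresh daemon
systemctl status pvestatd  # confirm it came back cleanly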
 
It might have been the Graphite metrics sender; we no longer have UDP enabled on that system, but had forgotten to remove it from /etc/pve/status.cfg.

I have now removed it and will see if that solves the problem.
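
For reference, the stale entry in /etc/pve/status.cfg looked roughly like this (the id, server, and port are placeholders, not our real values):

Code:
graphite: external-stats
        server 192.0.2.10
        port 2003
        proto udp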
 
Great!

Please mark the thread as [SOLVED] to help other people who have the same problem. Thanks!

Have a nice day :)
 
I had the same problem. I couldn't remember why I had configured /etc/pve/status.cfg. In our case the server entry was configured with a totally wrong server IP. Hmmm...

I deleted /etc/pve/status.cfg, killed all the pvestatd processes, and started pvestatd again.

Looks fine now.
 
Folks, I'm getting the error below. The GUI page doesn't load, but I do have SSH access.

Code:
Lynx:~# pvesh get /nodes/{node}/services/
ipcc_send_rec[1] failed: Connection refused
ipcc_send_rec[2] failed: Connection refused
ipcc_send_rec[3] failed: Connection refused
Unable to load access control list: Connection refused
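
(For anyone landing here: "Connection refused" from ipcc_send_rec usually means pmxcfs, the pve-cluster service, isn't running, so a first check would be something like this sketch:)

Bash:
systemctl status pve-cluster     # is pmxcfs running?
journalctl -u pve-cluster -b     # why did it stop or fail to start?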
 
