Allow notes/annotation on KVM Attached disks.
We use multiple disks as the roots of various data pools; it would be handy if the WebUI allowed notes or some other form of annotation so we can document the purpose of each disk.
It might have been the Graphite stats metric sender: we no longer have UDP enabled on the system, but had forgotten to remove the entry from pve/status.cfg.
I have now removed it and will see if that solves the problem.
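For anyone following along: the external metric sender the post mentions is configured in status.cfg. A Graphite entry there looks roughly like the fragment below (the id and address are made up, and the exact syntax may vary between PVE versions):

```
graphite: example-graphite
        server 192.0.2.10
        port 2003
        proto udp
```

Removing or commenting out the stanza (and restarting pvestatd) stops PVE from trying to push stats to that endpoint.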
So I've solved the issue, maybe...
There were a load of pvestatd processes sitting there hung; I killed them all, ran `pvestatd start`, and everything's come back.
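A rough sketch of that cleanup, assuming root on the affected node (`kill_hung` is a hypothetical helper, not a PVE command):

```shell
# kill_hung: hypothetical helper -- SIGKILLs every process whose exact name
# matches $1, echoing the PIDs it killed (prints nothing if none are running).
kill_hung() {
    pids=$(pgrep -x "$1" || true)
    if [ -n "$pids" ]; then
        echo "killing hung $1 processes:" $pids
        kill -9 $pids
    fi
}

# On the node, roughly:
#   kill_hung pvestatd
#   pvestatd start
```

SIGKILL is the blunt option; trying `pvestatd restart` first is gentler, but hung workers that ignore SIGTERM may leave you with -9 as the only way out.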
We get the occasional segfault with Ceph but that shouldn't cause issues with PVE.
Jul 28 14:17:57 grid-pve-04 kernel: [ 5126.518000] msgr-worker-2[56919]: segfault at ffffffffffffffe8 ip 00007fca49dc0f4f sp 00007fca45dcfc30 error 5 in libceph-common.so.0[7fca49a6f000+5e0000]
Jul 28 14:17:57...
All the PVE nodes show this in the journalctl output (I have since disabled an API polling script we run, in case we're DDoS'ing ourselves):
Jul 29 11:58:43 grid-pve-04 pvestatd[3033]: can't lock file '/var/log/pve/tasks/.active.lock' - can't open file - Too many open files
Jul 29...
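The "Too many open files" error means the daemon has hit its file-descriptor limit. A quick way to see how close a process is to that limit, via /proc (the pvestatd PID is an assumption; grab the real one with `pgrep pvestatd`):

```shell
# fd_count: number of file descriptors the given PID currently holds open.
fd_count() {
    ls "/proc/$1/fd" | wc -l
}

# fd_limit: the PID's soft limit on open files, from /proc/<pid>/limits
# (4th field of the "Max open files" line is the soft limit).
fd_limit() {
    awk '/^Max open files/ {print $4}' "/proc/$1/limits"
}

# On the node, roughly:
#   p=$(pgrep -o pvestatd)
#   echo "pvestatd: $(fd_count "$p") of $(fd_limit "$p") fds in use"
```

If the count is sitting at the limit, something is leaking descriptors; the hung-worker pile-up described above would do it.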
A reboot (I rebooted nodes 1, 2 and 4) temporarily fixed it, but after ~30-60 minutes they all dropped out again and refused to migrate or open the shell.
root@grid-pve-01:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-10 (running version: 6.2-10/a20769ed)...
Hi,
I recently updated our 4-node cluster to 6.2-10 and now the WebUI is broken.
I can log in, but I can't get VM names, host status or the CLI.
There's also a load of Error 501s showing in the browser web console.
Other functionality seems to work though.
Checking with htop via SSH, pmxcfs is taking...