[SOLVED] Web interface showing question marks for all VMs / LXC

D0peX
Hi guys,

Something strange is happening on my server. After the upgrade to PVE 6, the web interface shows all VMs / LXC containers with an unknown status.
I'm asking here whether this is a known bug or a similar issue.
The status window does show active CPU usage and such, but the history graphs are not working. All VMs and containers themselves are running fine.

Some things to note:
- after a reboot it works fine
- some time later it stops working; when exactly is unknown
- this server was in a cluster, but it does have quorum (this worked fine in PVE 5)
- when I click on another guest, it will not load the stats, as in screen #2

Screens:
Screenshot 1: 1574520982047.png
Screenshot 2: 1574521005777.png

Code:
root@B7HF642:~# pveversion
pve-manager/6.0-12/0a603350 (running kernel: 5.0.21-5-pve)

root@B7HF642:~# pvecm status
Quorum information
------------------
Date:             Sat Nov 23 15:51:19 2019
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1.18
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.10.30.31 (local)

root@B7HF642:~# pvedaemon status
running

root@B7HF642:~# pveproxy status
running

I do see the following in the journal:

Code:
-- A start job for unit pvesr.service has finished successfully.
--
-- The job identifier is 930301.
Nov 23 16:00:05 B7HF642 pvestatd[1728]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
Nov 23 16:00:05 B7HF642 pvestatd[1728]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
Nov 23 16:00:15 B7HF642 pvestatd[1728]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
Nov 23 16:00:16 B7HF642 pvestatd[1728]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
Nov 23 16:00:25 B7HF642 pvestatd[1728]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
Nov 23 16:00:26 B7HF642 pvestatd[1728]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
Nov 23 16:00:35 B7HF642 pvestatd[1728]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
Nov 23 16:00:35 B7HF642 pvestatd[1728]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
Nov 23 16:00:45 B7HF642 pvestatd[1728]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
Nov 23 16:00:46 B7HF642 pvestatd[1728]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
Nov 23 16:00:55 B7HF642 pvestatd[1728]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
Nov 23 16:00:55 B7HF642 pvestatd[1728]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/303/ns/memory.stat' - No such file or directory
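You can get a rough feel for how often `pvestatd` hits this by counting the error lines in a chunk of journal text. A minimal sketch — the sample below is a shortened copy of the excerpt above; on a live system you would pipe in `journalctl -u pvestatd` output instead:

```shell
#!/bin/sh
# Rough sketch: count "lxc status update error" lines in journal text.
# A shortened copy of the excerpt above is used as the input here;
# on a live system, pipe in `journalctl -u pvestatd` instead.
sample='Nov 23 16:00:05 B7HF642 pvestatd[1728]: lxc status update error: ...
Nov 23 16:00:15 B7HF642 pvestatd[1728]: lxc status update error: ...
Nov 23 16:00:25 B7HF642 pvestatd[1728]: lxc status update error: ...'

count=$(printf '%s\n' "$sample" | grep -c 'lxc status update error')
echo "status update errors in sample: $count"   # one every ~10 s here
```

In the full journal there is also a matching "console cleanup error" a second later, so the two together add up to roughly one pair every 10 seconds.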

Thanks in advance!
 
Hello,

Have you tried to pass a device through to an LXC container?

I had a similar error when I tried to pass the Vega GPU of an Athlon 200GE through to my Plex LXC. After disabling it (I commented it out in the conf), the problem was gone.
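For reference, a GPU passthrough entry in `/etc/pve/lxc/<vmid>.conf` often looks something like the lines below (the VMID and device numbers are only an example, not taken from my actual config); commenting them out is what disabled it in my case:

```
# /etc/pve/lxc/101.conf -- hypothetical VMID and device numbers
# Grant the container access to the DRI devices (GPU passthrough);
# commented out here, i.e. passthrough disabled:
#lxc.cgroup.devices.allow: c 226:* rwm
#lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```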
 
Seems like the journal error was relevant. `pvestatd` was trying roughly 12 times per minute (every ~5 seconds) to clean up LXC container 303. This probably caused the web UI to become unresponsive to other input, such as requests for data from LXC/KVM guests. Shutting that guest down made the errors cease and brought back all statuses on the VMs/containers.
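To see which container is tripping `pvestatd`, you can check whether the cgroup file from the error message actually exists for a given container ID. A small sketch, assuming the cgroup v1 layout shown in the error above (the `CGROOT` override is a hypothetical knob, only there so the helper can be exercised outside a PVE host):

```shell
#!/bin/sh
# Sketch: check whether the cgroup v1 memory.stat file that pvestatd reads
# exists for a container ID. The path layout matches the error message above.
cgroot=${CGROOT:-/sys/fs/cgroup/memory/lxc}   # override CGROOT for testing

check_ct() {
    if [ -r "$cgroot/$1/ns/memory.stat" ]; then
        echo "CT $1: cgroup ok"
    else
        echo "CT $1: memory.stat missing - pvestatd will log errors"
    fi
}

check_ct 303
```

Running this for each configured container ID quickly narrows down which guest lost its cgroup hierarchy.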

Thank you for all of your input /s

@ devs
It might be worth checking why this occurs; I can't seem to reproduce it.
This would be a catastrophe for someone managing a much bigger cluster with an unresponsive UI.

Mind you, this error does not appear in the web UI console!
It could be an option for frequently failing jobs to surface their output in the UI.
 
