lxc console cleanup error

CharlesErickT

Hello,

After rolling back a snapshot on an LXC container, the pvestatd service completely freaked out and started spitting a lot of errors regarding this container:

Code:
Nov 24 10:35:59 dev-proxmox-1 pvestatd[16792]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/143/ns/memory.stat' - No such file or directory
Nov 24 10:36:00 dev-proxmox-1 pvestatd[16792]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/143/ns/memory.stat' - No such file or directory
Nov 24 10:36:09 dev-proxmox-1 pvestatd[16792]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/143/ns/memory.stat' - No such file or directory
Nov 24 10:36:10 dev-proxmox-1 pvestatd[16792]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/143/ns/memory.stat' - No such file or directory
Nov 24 10:37:29 dev-proxmox-1 pvestatd[16792]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/143/ns/memory.stat' - No such file or directory
Nov 24 10:37:30 dev-proxmox-1 pvestatd[16792]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/143/ns/memory.stat' - No such file or directory
...

This caused the web UI of the node to get stuck in a weird state even though the containers were still running. See the attached screenshot.

Stopping the faulty container using pct stop fixed the web UI.
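For reference, the workaround was roughly the following (CT 143 from the log above; starting it again afterwards is my assumption, adjust the ID as needed):

Code:
# stop the container whose cgroup files went missing; this unblocked the web UI
pct stop 143
# start it again afterwards (assumption: a fresh start recreates the cgroup entries)
pct start 143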

Any idea how to fix this?

Here's my pveversion -v
Code:
proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-35 (running version: 5.1-35/722cc488)
pve-kernel-4.13.4-1-pve: 4.13.4-25
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.2-pve1~bpo90
 

Attachments

  • Screen Shot 2017-11-24 at 10.59.54 AM.png
Hello,

We have the same problem after an update of pve-manager and pve-container. Moreover, this only happens on Debian Stretch containers.

My pveversion:
Code:
proxmox-ve: 4.4-99 (running kernel: 4.4.95-1-pve)
pve-manager: 4.4-20 (running version: 4.4-20/2650b7b5)
pve-kernel-4.4.95-1-pve: 4.4.95-99
pve-kernel-4.4.76-1-pve: 4.4.76-94
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-54
qemu-server: 4.0-113
pve-firmware: 1.1-11
libpve-common-perl: 4.0-96
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.9.1-2~pve4
pve-container: 1.0-103
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
ceph: 10.2.5-6~bpo8+1

Does anyone have an idea?
Thank you very much.

---
Regards
 
A fixed pve-container package is available in pvetest.
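If it helps, pulling just that package from pvetest looks roughly like this on PVE 4.x / Debian Jessie (the repository line is an assumption on my side; adjust the suite for PVE 5.x/Stretch and remove the test list again afterwards):

Code:
# temporarily enable the pvetest repository (assumed suite "jessie" for PVE 4.x)
echo "deb http://download.proxmox.com/debian jessie pvetest" > /etc/apt/sources.list.d/pvetest.list
apt-get update
apt-get install pve-container
# drop the test repository again so future upgrades stay on the stable repos
rm /etc/apt/sources.list.d/pvetest.list
apt-get update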
 
Hello, same problem here. pve-container was version 1.0-103; I just installed 1.0-104 from testing, but I am still getting those messages in daemon.log. Restarting one of the affected containers did not help. What now?



Code:
root@hp-proxmox02:/etc/apt# pveversion -v
proxmox-ve: 4.4-99 (running kernel: 4.4.21-1-pve)
pve-manager: 4.4-20 (running version: 4.4-20/2650b7b5)
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.59-1-pve: 4.4.59-87
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.95-1-pve: 4.4.95-99
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.83-1-pve: 4.4.83-96
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.40-1-pve: 4.4.40-82
pve-kernel-4.4.62-1-pve: 4.4.62-88
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-54
qemu-server: 4.0-113
pve-firmware: 1.1-11
libpve-common-perl: 4.0-96
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.9.1-2~pve4
pve-container: 1.0-103
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
 
Your pveversion output shows you still have pve-container 1.0-103 installed. Installing the package should restart pvestatd and solve the problem; if it doesn't, a manual restart of pvestatd should help.
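To verify and to kick pvestatd manually, something like this should do (a minimal sketch using the commands already mentioned in this thread):

Code:
# confirm which pve-container version is actually installed
pveversion -v | grep pve-container
# restart the status daemon so it re-reads the container cgroups
systemctl restart pvestatd
systemctl status pvestatd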
 
Hi,

Manually restarting pvestatd did nothing; I'm still getting errors:

Code:
Dec  1 10:28:02 hp-proxmox02 pvestatd[2525]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/108/ns/memory.stat' - No such file or directory
Dec  1 10:28:05 hp-proxmox02 pvestatd[2525]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/118/ns/memory.stat' - No such file or directory
Dec  1 10:28:12 hp-proxmox02 pvestatd[2525]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/118/ns/memory.stat' - No such file or directory
Dec  1 10:28:13 hp-proxmox02 pvestatd[2525]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/118/ns/memory.stat' - No such file or directory
Dec  1 10:28:23 hp-proxmox02 pvestatd[2525]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/111/ns/memory.stat' - No such file or directory
Dec  1 10:28:23 hp-proxmox02 pvestatd[2525]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/100/ns/memory.stat' - No such file or directory

I had executed pveversion before updating the package. Here is how it looks now:
Code:
proxmox-ve: 4.4-99 (running kernel: 4.4.21-1-pve)
pve-manager: 4.4-20 (running version: 4.4-20/2650b7b5)
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.59-1-pve: 4.4.59-87
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.95-1-pve: 4.4.95-99
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.83-1-pve: 4.4.83-96
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.40-1-pve: 4.4.40-82
pve-kernel-4.4.62-1-pve: 4.4.62-88
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-54
qemu-server: 4.0-113
pve-firmware: 1.1-11
libpve-common-perl: 4.0-96
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.9.1-2~pve4
pve-container: 1.0-104
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
 
I noticed something else: only a couple of containers produce those error messages, and those have two directories in /sys/fs/cgroup/memory/lxc/:

Code:
drwxr-xr-x 2 root root 0 Dec  1 09:27 100
drwxr-xr-x 3 root root 0 Dec  1 09:27 1000
drwxr-xr-x 3 root root 0 Dec  1 09:27 100-1
drwxr-xr-x 3 root root 0 Dec  1 09:27 101
drwxr-xr-x 3 root root 0 Dec  1 09:27 104
drwxr-xr-x 3 root root 0 Dec  1 09:27 105
drwxr-xr-x 3 root root 0 Dec  1 09:27 106
drwxr-xr-x 2 root root 0 Dec  1 09:27 108
drwxr-xr-x 3 root root 0 Dec  1 09:32 108-1
drwxr-xr-x 3 root root 0 Dec  1 09:27 109
drwxr-xr-x 3 root root 0 Dec  1 09:32 110
drwxr-xr-x 2 root root 0 Dec  1 09:27 1101
drwxr-xr-x 3 root root 0 Dec  1 10:35 1101-1
drwxr-xr-x 3 root root 0 Dec  1 09:27 1107
drwxr-xr-x 2 root root 0 Dec  1 09:27 111
drwxr-xr-x 3 root root 0 Dec  1 09:27 111-1
drwxr-xr-x 3 root root 0 Dec  1 09:27 112
drwxr-xr-x 3 root root 0 Dec  1 09:27 113
drwxr-xr-x 2 root root 0 Dec  1 09:27 115
drwxr-xr-x 3 root root 0 Dec  1 09:33 116
drwxr-xr-x 3 root root 0 Dec  1 09:27 117
drwxr-xr-x 2 root root 0 Dec  1 09:27 118
drwxr-xr-x 3 root root 0 Dec  1 09:45 118-1
drwxr-xr-x 3 root root 0 Dec  1 09:27 119
drwxr-xr-x 3 root root 0 Dec  1 09:27 120

The additional dirs (the -1 ones) do contain the missing file:

Code:
root@hp-proxmox02:/var/log# ls -l /sys/fs/cgroup/memory/lxc/118/ns/memory.stat
ls: cannot access /sys/fs/cgroup/memory/lxc/118/ns/memory.stat: No such file or directory
root@hp-proxmox02:/var/log# ls -l /sys/fs/cgroup/memory/lxc/118-1/ns/memory.stat
-r--r--r-- 1 root root 0 Dec  1 09:45 /sys/fs/cgroup/memory/lxc/118-1/ns/memory.stat
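
A quick loop to see which running containers are affected (just a sketch that checks for the file pvestatd complains about):

Code:
# report running CTs whose expected memory.stat is missing
for ct in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
    [ -e "/sys/fs/cgroup/memory/lxc/${ct}/ns/memory.stat" ] || echo "CT ${ct}: memory.stat missing"
done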
 
@ArnoldLayne that is a bug from March; you need to stop the affected containers, remove their cgroup directories, and then start them again (or reboot the host to clear all the cgroups). I'd recommend rebooting, since you seem not to have rebooted for more than 6 months, which is not a good idea (think of kernel security fixes alone).
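Per container that would look roughly like this (a sketch using CT 118 from the log above; the find/rmdir step is my assumption about how to drop the stale cgroup entries, and a host reboot remains the safer option):

Code:
pct stop 118
# remove the leftover cgroup directories of that CT, deepest entries first
# (rmdir only succeeds on cgroups without tasks or child cgroups)
find /sys/fs/cgroup/*/lxc/118* -depth -type d -exec rmdir {} \; 2>/dev/null
pct start 118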
 
Ok thanks, I was planning to do that anyway.
Also, just removing the cgroup dir does not help, as the container will get a cgroup dir with "-1" anyway when started again.
Well, reboot it is.
 
Hi,
I have the same problem too:

Code:
==> daemon.log <==
Jan 6 22:08:20 ns3361721 pvestatd[1660]: lxc status update error: can't open '/sys/fs/cgroup/memory/lxc/103/ns/memory.stat' - No such file or directory
Jan 6 22:08:20 ns3361721 pvestatd[1660]: lxc console cleanup error: can't open '/sys/fs/cgroup/memory/lxc/112/ns/memory.stat' - No such file or directory

The web UI does not see the LXC containers' state. At boot they are in an unknown state; after service lxc start they are still in an unknown state, but the hostnames are displayed after the CT IDs.
The containers are launched with the "Start" command and work well, but pve-manager is broken.

I run these package versions (I switched to the pvetest repo):
Code:
root@xxx:~# pveversion -v
proxmox-ve: 5.1-33 (running kernel: 4.4.30-mod-std-ipv6-64)
pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.13.8-2-pve: 4.13.8-28
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.10.17-4-pve: 4.10.17-24
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.10.11-1-pve: 4.10.11-9
pve-kernel-4.10.17-1-pve: 4.10.17-18
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-18
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-15
pve-qemu-kvm: 2.9.1-5
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: not correctly installed
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
openvswitch-switch: 2.7.0-2

root@ns3361721:/var/log# uname -a
Linux xxxx 4.4.30-mod-std-ipv6-64 #9 SMP Tue Nov 1 17:58:26 CET 2016 x86_64 GNU/Linux

Any idea?


Regards,
Louis
 
Linux xxxx 4.4.30-mod-std-ipv6-64 #9 SMP Tue Nov 1 17:58:26 CET 2016 x86_64 GNU/Linux
That kernel is not a PVE kernel; I would suggest using our kernel.
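Assuming the host boots via GRUB, switching back would look roughly like this (the pve-kernel packages already show up in the pveversion output above; if the machine uses a provider-managed netboot kernel, the boot entry has to be changed there instead):

Code:
# list the kernels GRUB knows about
awk -F\' '/menuentry /{print $2}' /boot/grub/grub.cfg
# make sure the current PVE kernel is installed, then regenerate the GRUB config
apt-get install pve-kernel-4.13.13-2-pve
update-grub
reboot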
 
