run "top" command failed in container

helyho

New Member
Aug 12, 2020
When I run "top", it returns:
Code:
top: failed /proc/stat read

run "uptime" it return:
Code:
bad data in /proc/uptime
bad data in /proc/loadavg

Am I missing some configuration?

When I boot the machine I get "Failed to start FUSE filesystem for LXC". Could this be the cause of the problem?
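I have not inspected the lxcfs service itself yet; if it helps, this is how I would check it on the Proxmox host (assuming systemd, which PVE 6.2 uses):
Code:
# on the Proxmox host
systemctl status lxcfs      # should report "active (running)"
journalctl -u lxcfs -b      # log of the lxcfs unit since boot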




101.conf:
Code:
arch: amd64
cores: 16
hostname: Ubuntu
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=172.30.0.151,hwaddr=22:93:0C:6D:96:FC,ip=172.30.0.62/24,type=veth
ostype: ubuntu
rootfs: vm:vm-101-disk-0,acl=0,size=32G
swap: 4096


/var/lib/lxc/101/config:
Code:
lxc.cgroup.relative = 0
lxc.cgroup.dir.monitor = lxc.monitor/101
lxc.cgroup.dir.container = lxc/101
lxc.cgroup.dir.container.inner = ns
lxc.arch = amd64
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.apparmor.profile = generated
lxc.apparmor.raw = deny mount -> /proc/,
lxc.apparmor.raw = deny mount -> /sys/,
lxc.monitor.unshare = 1
lxc.tty.max = 2
lxc.environment = TERM=linux
lxc.uts.name = Ubuntu
lxc.cgroup.memory.limit_in_bytes = 4294967296
lxc.cgroup.memory.memsw.limit_in_bytes = 8589934592
lxc.cgroup.cpu.shares = 1024
lxc.rootfs.path = /var/lib/lxc/101/rootfs
lxc.net.0.type = veth
lxc.net.0.veth.pair = veth101i0
lxc.net.0.hwaddr = 22:34:0C:45:96:89
lxc.net.0.name = eth0
lxc.net.0.script.up = /usr/share/lxc/lxcnetaddbr
lxc.cgroup.cpuset.cpus = 0-5
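If I read this config correctly, the apparmor rules deny mounts onto /proc and /sys inside the container, so those per-file /proc entries should be served by lxcfs from the host. A quick host-side check I would try (the /var/lib/lxcfs path is my assumption based on the lxcfs defaults):
Code:
# on the Proxmox host
mount | grep lxcfs        # expect: lxcfs on /var/lib/lxcfs type fuse.lxcfs (...)
ls /var/lib/lxcfs/proc    # should list cpuinfo, loadavg, meminfo, stat, swaps, uptime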

mount | grep proc
Code:
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys/net type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
/dev/mapper/pve-root on /proc/cpuinfo type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/pve-root on /proc/diskstats type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/pve-root on /proc/loadavg type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/pve-root on /proc/meminfo type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/pve-root on /proc/stat type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/pve-root on /proc/swaps type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/pve-root on /proc/uptime type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
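From what I understand, on a working container these per-file /proc entries come from lxcfs, so the same command should show them as fuse.lxcfs mounts rather than the host's /dev/mapper/pve-root device. Roughly what I would expect to see (a sketch; exact mount options may differ):
Code:
lxcfs on /proc/cpuinfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/loadavg type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/stat type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
lxcfs on /proc/uptime type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)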

pveversion
Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
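If the underlying issue is only that lxcfs failed to start, I assume restarting it on the host and then restarting the container would re-create the proper /proc mounts (a sketch, not yet tested on my side; 101 is my container ID):
Code:
# on the Proxmox host
systemctl restart lxcfs
pct stop 101
pct start 101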
 