htop shows incorrect data inside LXC container

starnetwork

Hi,
I have a node with 40 cores,
and inside this node I have an LXC container with 8 cores.
Now, when I run htop inside the container, it shows me 40 CPU cores instead of 8.

* In Proxmox v3.x with OpenVZ it showed the right number of cores.

Any suggestion on how to solve this in Proxmox v5?

Regards,
 
Hi,
I'm using Proxmox v5:
5.0-23/af4267bf

root@server1:~# pveversion
pve-manager/5.0-23/af4267bf (running kernel: 4.10.17-1-pve)

root@server1:~# apt-get update && apt-get dist-upgrade
Get:1 http://security.debian.org stretch/updates InRelease [62.9 kB]
Ign:2 http://ftp.debian.org/debian stretch InRelease
Hit:3 http://download.proxmox.com/debian/pve stretch InRelease
Hit:4 http://ftp.debian.org/debian stretch Release
Get:6 http://security.debian.org stretch/updates/main amd64 Packages [119 kB]
Get:7 http://security.debian.org stretch/updates/main Translation-en [49.2 kB]
Fetched 231 kB in 0s (296 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
 
This should work; htop should not report all CPUs, only the assigned ones. Please post your container config:

> pct config CTID
 
Here you go:
root@server1:~# pct config 206
arch: amd64
cpulimit: 8
cpuunits: 1024
hostname: automatic1.domain.co.il
memory: 8192
net0: name=eth0,bridge=vmbr0,gw=10.0.0.1,hwaddr=0A:2B:65:14:E4:94,ip=10.0.0.51/32,rate=5,type=veth
onboot: 1
ostype: centos
rootfs: local-lvm:vm-206-disk-1,size=100G
swap: 16384
 
Looks OK. Can you send a screenshot of your htop output?
 
Same problem here on latest pve4.4:
proxmox-ve: 4.4-92 (running kernel: 4.4.67-1-pve)
pve-manager: 4.4-15 (running version: 4.4-15/7599e35a)
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.62-1-pve: 4.4.62-88
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-52
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-95
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-101
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80

cat /etc/pve/lxc/106.conf
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: deb-test
memory: 512
net0: name=eth0,bridge=vmbr1000,hwaddr=3A:66:32:39:38:65,ip=dhcp,type=veth
ostype: debian
rootfs: omnios_nfs:106/vm-106-disk-2.raw,size=8G
swap: 512

From CT:
grep -c processor /proc/cpuinfo
4

From host:
grep -c processor /proc/cpuinfo
4
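
As a quick diagnostic for cases like this (a sketch, assuming the container relies on lxcfs to mask /proc/cpuinfo, as Proxmox containers normally do), you can check from inside the CT whether lxcfs is actually mounted over the proc files:

Code:
# inside the container: lxcfs should show up as a fuse.lxcfs mount over /proc/cpuinfo
grep lxcfs /proc/mounts

# what the container actually reports
nproc
grep -c processor /proc/cpuinfo

If no lxcfs entry shows up, the container is reading the host's /proc/cpuinfo directly and will report all host CPUs.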
 
FWIW, mine is working correctly.

HOST:
5.0-23/af4267bf
[htop screenshot of the host]

Code:
grep -c processor /proc/cpuinfo
8

CONTAINER:
Code:
arch: amd64
cores: 4
hostname: nick-gdrive
memory: 2048
mp0: /nickarray,mp=/nickarray
net0: name=eth0,bridge=vmbr0,gw=10.0.0.254,hwaddr=7A:40:78:24:92:09,ip=10.0.0.12/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: ssd:104/vm-104-disk-1.raw,size=10G
swap: 0
lxc.aa_profile: unconfined
lxc.autodev: 1
lxc.hook.autodev: sh -c "mknod -m 0666 ${LXC_ROOTFS_MOUNT}/dev/fuse c 10 229"

[htop screenshot of the container]


Code:
grep -c processor /proc/cpuinfo
4
 
I think I have solved the problem.
The non-working LXC container, which was created a long time ago:
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: deb-test
memory: 512
net0: name=eth0,bridge=vmbr1000,hwaddr=3A:66:32:39:38:65,ip=dhcp,type=veth
ostype: debian
rootfs: omnios_nfs:106/vm-106-disk-2.raw,size=8G
swap: 512

The newly created LXC container:
arch: amd64
cores: 1
hostname: deb1-test
memory: 512
net0: name=eth0,bridge=vmbr1100,hwaddr=26:34:F7:4C:B5:28,ip=dhcp,type=veth
ostype: debian
rootfs: omnios_ib_nfs:163/vm-163-disk-1.raw,size=8G
swap: 512

The difference is that the old config uses cpulimit and cpuunits, while the new one uses cores.
Is this where the pve-update-lxc-config command should come into play?
 
Thanks for that update.
Any suggestion on how to fix containers that were converted from OpenVZ / old containers?
 
Go to the Resources tab and double-click on Cores:
Cores: number of cores wanted
CPU limit: 0 (unlimited) <-- this is the default setting
CPU units: 1024 <-- this is the default setting
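
For anyone who prefers the CLI, the same change can be made with pct (a sketch using CT 206 from earlier in this thread as the example ID; double-check the pct man page on your version, and note that a container restart may be needed before htop reflects the change):

Code:
# give the container an explicit number of cores to see
pct set 206 -cores 8

# drop the old-style options so the defaults (unlimited / 1024) apply
pct set 206 -delete cpulimit,cpuunits

# verify
pct config 206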
 
Hello,

I know this thread is old and hasn't been updated since 2017, but I have to report that I am still experiencing the same strange behaviour even in 2020, so it's worth giving the forum a shout to see if anybody else has this issue.

I am using the following Proxmox version:

proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

In regard to the previous posts: even if I create a new container, privileged or unprivileged, with an Ubuntu 18 LTS or Debian image, in htop I can see all the CPUs that are present in the bare-metal machine.

I have even tried playing with the CPU limit (set or unset), but it doesn't seem to affect the CLI output and CPU count inside the container.

Is there any known way to make a CT show only the configured CPUs instead of exposing the full CPU count of the bare-metal host?

Thanks,
Alex
 
In regard to the previous posts: even if I create a new container, privileged or unprivileged, with an Ubuntu 18 LTS or Debian image, in htop I can see all the CPUs that are present in the bare-metal machine.

I have even tried playing with the CPU limit (set or unset), but it doesn't seem to affect the CLI output and CPU count inside the container.

Is there any known way to make a CT show only the configured CPUs instead of exposing the full CPU count of the bare-metal host?
Not something I see here.

Fedora privileged container, 1 core (host has 8 cores): [htop screenshot]
Debian unprivileged container, 1 core (host has 8 cores): [htop screenshot]
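
If it helps to compare, here is a quick way to check the configured value against what the container actually sees (a sketch; 106 is just the CT ID from the earlier post, and the container must be running for pct exec):

Code:
# host side: what is configured
pct config 106 | grep -E '^(cores|cpulimit|cpuunits)'

# container side: what htop will count
pct exec 106 -- nproc

Keep in mind that cores limits the CPUs the container sees, while cpulimit only caps CPU time and does not change the count shown by htop.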
 
Thank you for replying back on my post.

I checked the system further for other failed services in systemctl and found that lxcfs.service was in a failed state because /var/lib/lxcfs/ was not empty when the node started.

With all CTs stopped I ran rm -rf /var/lib/lxcfs/* followed by systemctl restart lxcfs.service, which didn't report any more errors.
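
For reference, the recovery steps described above as one block (make sure every container really is stopped before clearing the directory):

Code:
# confirm no containers are running
pct list

# clear the stale lxcfs state and restart the service
rm -rf /var/lib/lxcfs/*
systemctl restart lxcfs.service
systemctl status lxcfs.service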

(* This was because I'm running Proxmox on 60 GB USB sticks; some had gone bad, so I replaced them with a new one and restored the previously saved image onto it to carry on working.)
(* I recommend that anyone running on USB sticks keeps a copy of the installed Proxmox image on a different drive/server, just for the sanity of having a clone of the flash/USB drive on a spare drive, and does their backups with borg.)

Afterwards, I started some CTs and checked in htop, and the only visible CPUs were those allocated to the CT (although if the user runs lscpu, it still shows all the CPUs on the bare metal, but I guess that can't be masked).

Another point worth mentioning, although I can't really understand why it isn't a Proxmox default, is setting kernel.dmesg_restrict = 1 in sysctl.conf on the bare-metal hypervisor. This instructs the kernel not to expose information from the bare-metal server inside the CT, like dmesg kernel messages or the mapping of the CT's LVM structure listed in lsblk.
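
For anyone who wants to apply this, a minimal example (the sysctl key is standard; the file name under /etc/sysctl.d/ is just my choice):

Code:
# apply immediately on the hypervisor
sysctl -w kernel.dmesg_restrict=1

# persist across reboots
echo 'kernel.dmesg_restrict = 1' > /etc/sysctl.d/10-dmesg-restrict.conf
sysctl --system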

Regards,
Alex
 
Another point worth mentioning, although I can't really understand why it isn't a Proxmox default, is setting kernel.dmesg_restrict = 1 in sysctl.conf on the bare-metal hypervisor. This instructs the kernel not to expose information from the bare-metal server inside the CT, like dmesg kernel messages or the mapping of the CT's LVM structure listed in lsblk.
Looks like a good thing to make a feature request for on https://bugzilla.proxmox.com/
 
