Proxmox upgrade - LXC containers not working

MisterFantastico

Apr 5, 2016
Hi,

yesterday I upgraded my Proxmox host (apt-get update && apt-get upgrade) and ended up with this version:
https://forum.proxmox.com/threads/new-packages-in-pvetest-new-gui-new-4-4-kernel.26931/

I now have a 4.4 kernel and the new web interface... but I haven't enabled pvetest in my sources.list:
deb http://mirror.XXX.de/debian jessie main contrib

# security updates
deb http://security.debian.org jessie/updates main contrib

deb http://mirror.XXX.de/debian jessie-updates main

# PVE pve-no-subscription repository provided by proxmox.com, NOT recommended for production use
deb http://download.proxmox.com/debian jessie pve-no-subscription
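To rule out a stray pvetest entry, the enabled repository lines can be listed with a quick grep. This is just a sketch: it copies the sources.list quoted above into a scratch file (path chosen for the demo only); on a real host you would grep /etc/apt/sources.list plus everything in /etc/apt/sources.list.d/.

```shell
# Reproduce the sources.list quoted above in a scratch file (demo only;
# on a real host, check /etc/apt/sources.list and /etc/apt/sources.list.d/*).
cat <<'EOF' > /tmp/sources.list.demo
deb http://mirror.XXX.de/debian jessie main contrib
deb http://security.debian.org jessie/updates main contrib
deb http://mirror.XXX.de/debian jessie-updates main
deb http://download.proxmox.com/debian jessie pve-no-subscription
EOF

# Show the enabled entries and check for a pvetest component.
grep '^deb' /tmp/sources.list.demo
if grep -q 'pvetest' /tmp/sources.list.demo; then
    echo "pvetest repository is enabled"
else
    echo "no pvetest entry found"
fi
```

As listed, only pve-no-subscription is enabled, so pvetest packages should not normally arrive from these entries.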

I started a container and saw this:
root@skylake:~# lxc-attach -n 102
root@gaming:~# ps faux
bad data in /proc/uptime
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
Signal 8 (FPE) caught by ps (procps-ng version 3.3.9).
root 872 0.0ps:display.c:66: please report this bug
Gleitkomma-Ausnahme (floating point exception)
root@gaming:~# free -m
total used free shared buffers cached
Mem: 0 0 0 0 0 0
-/+ buffers/cache: 0 0
Swap: 0 0 0
root@gaming:~# cat /proc/meminfo
root@gaming:~#
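The symptoms in the transcript above (the "bad data in /proc/uptime" warning, ps dying with a floating point exception, free and /proc/meminfo showing nothing) all point at the /proc files that lxcfs normally serves inside a container. A minimal sketch of a sanity check for one of them:

```shell
# /proc/uptime should contain two numeric fields: seconds of uptime
# and seconds of idle time. If the lxcfs FUSE mount behind it is
# broken, the read fails or yields garbage, and tools like ps and
# free misbehave exactly as shown above.
if read -r up idle < /proc/uptime && [ -n "$up" ]; then
    echo "uptime looks sane: ${up}s up, ${idle}s idle"
else
    echo "/proc/uptime is unreadable or empty"
fi
```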

What's wrong and how can I fix it? And why did I get the beta version?

Kind regards
 
Could you please post the complete output of the following commands on the host
  • pveversion -v
  • ps faxl
and the output of the following commands in the container
  • mount
and the contents of
  • /var/lib/lxc/102/config
  • /etc/pve/lxc/102.conf
 
Thank you again for your reply.

root@skylake:~# pveversion -v
proxmox-ve: 4.1-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.1-33 (running version: 4.1-33/de386c1a)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-71
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-13
pve-container: 1.0-61
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie

root@skylake:~# ps faxl
http://pastebin.com/KTtLqff4

and the output of the following commands in the container

mount
root@gaming:~# mount
/var/lib/vz/images/102/vm-102-disk-1.raw on / type ext4 (rw,relatime,data=ordered)
none on /dev type tmpfs (rw,relatime,size=100k,mode=755)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys/net type proc (rw,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
sysfs on /sys/devices/virtual/net type sysfs (rw,relatime)
sysfs on /sys/devices/virtual/net type sysfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,size=12k,mode=755)
tmpfs on /sys/fs/cgroup/cgmanager type tmpfs (rw,mode=755)
/dev/dm-0 on /proc/cpuinfo type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /proc/diskstats type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /proc/meminfo type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /proc/stat type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /proc/uptime type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /sys/fs/cgroup/blkio type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /sys/fs/cgroup/cpu,cpuacct type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /sys/fs/cgroup/cpuset type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /sys/fs/cgroup/devices type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /sys/fs/cgroup/freezer type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /sys/fs/cgroup/hugetlb type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /sys/fs/cgroup/memory type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /sys/fs/cgroup/systemd type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /sys/fs/cgroup/net_cls,net_prio type ext3 (rw,relatime,errors=remount-ro,data=ordered)
/dev/dm-0 on /sys/fs/cgroup/perf_event type ext3 (rw,relatime,errors=remount-ro,data=ordered)
devpts on /dev/console type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/tty1 type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
devpts on /dev/tty2 type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)

and the contents of

/var/lib/lxc/102/config
/etc/pve/lxc/102.conf
root@skylake:~# cat /var/lib/lxc/102/config
lxc.arch = amd64
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.monitor.unshare = 1
lxc.tty = 2
lxc.environment = TERM=linux
lxc.utsname = gaming.XXXXXX
lxc.cgroup.memory.limit_in_bytes = 8589934592
lxc.cgroup.memory.memsw.limit_in_bytes = 12884901888
lxc.cgroup.cpu.cfs_period_us = 100000
lxc.cgroup.cpu.cfs_quota_us = 800000
lxc.cgroup.cpu.shares = 1024
lxc.rootfs = /var/lib/lxc/102/rootfs
lxc.network.type = veth
lxc.network.veth.pair = veth102i0
lxc.network.hwaddr = 32:64:61:37:62:65
lxc.network.name = eth0

root@skylake:~# cat /etc/pve/lxc/102.conf
arch: amd64
cpulimit: 8
cpuunits: 1024
hostname: gaming.XXXX
memory: 8192
net0: bridge=vmbr0,gw=XXXXXXX,hwaddr=32:64:61:37:62:65,ip=XXXXXXX/24,name=eth0,type=veth
onboot: 1
ostype: debian
rootfs: local:102/vm-102-disk-1.raw,size=250G
swap: 4096
 
You don't have a running lxcfs.
can you post the output of
  • "journalctl -b" (on the host)
  • a debug log of starting that container
    • stop the container
    • run "lxc-start -n 102 -F -lDEBUG -o lxc-102.log"
    • wait until the boot completes
    • run "pct shutdown 102" in another shell
    • post the contents of lxc-102.log
  • systemctl status lxcfs.service
 
journalctl -b
> http://pastebin.com/bLf3k7P9

lxc-102.log
> http://pastebin.com/YUk4Qwje

systemctl status lxcfs.service
>
root@skylake:~# systemctl status lxcfs.service
● lxcfs.service - FUSE filesystem for LXC
Loaded: loaded (/lib/systemd/system/lxcfs.service; enabled)
Active: failed (Result: start-limit) since Fri 2016-04-22 12:50:05 CEST; 2h 45min ago
Process: 1894 ExecStopPost=/bin/fusermount -u /var/lib/lxcfs (code=exited, status=1/FAILURE)
Process: 1891 ExecStart=/usr/bin/lxcfs /var/lib/lxcfs/ (code=exited, status=1/FAILURE)
Main PID: 1891 (code=exited, status=1/FAILURE)

Apr 22 12:50:05 skylake systemd[1]: Unit lxcfs.service entered failed state.
Apr 22 12:50:05 skylake systemd[1]: lxcfs.service holdoff time over, scheduling restart.
Apr 22 12:50:05 skylake systemd[1]: Stopping FUSE filesystem for LXC...
Apr 22 12:50:05 skylake systemd[1]: Starting FUSE filesystem for LXC...
Apr 22 12:50:05 skylake systemd[1]: lxcfs.service start request repeated too quickly, refusing to start.
Apr 22 12:50:05 skylake systemd[1]: Failed to start FUSE filesystem for LXC.
Apr 22 12:50:05 skylake systemd[1]: Unit lxcfs.service entered failed state.

Thank you for your reply.
 
That is really strange... could you post the output of "ls -lha /var/lib/lxcfs" and "mount" on the host? The service fails because that mountpoint is non-empty, but it seems it does not fail hard enough to prevent containers from starting..
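One thing the requested mount output reveals: on a healthy host, lxcfs itself shows up as a fuse.lxcfs mount on /var/lib/lxcfs, and no such line appears in the listing posted below. A sketch of that check (the sample mount line in the comment is an assumption about typical lxcfs output, not copied from this thread):

```shell
# On a working host, `mount` includes a line roughly like:
#   lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,...)
# Its absence means containers fall back to raw (or broken) /proc data.
if grep -q 'fuse.lxcfs' /proc/mounts; then
    echo "lxcfs FUSE mount present"
else
    echo "lxcfs FUSE mount MISSING"
fi
```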
 
Hello,

thank you for your help and your reply.

root@skylake:~# ls -lha /var/lib/lxcfs
total 16K
drwxr-xr-x 4 root root 4.0K Jan 1 1970 .
drwxr-xr-x 43 root root 4.0K Apr 21 12:42 ..
drwxr-xr-x 12 root root 4.0K Oct 23 2015 cgroup
dr-xr-xr-x 2 root root 4.0K Oct 23 2015 proc

root@skylake:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=4103719,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=6571784k,mode=755)
/dev/mapper/pve-root on / type ext3 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (rw,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd,nsroot=/)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children,nsroot=/)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct,nsroot=/)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio,nsroot=/)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory,nsroot=/)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices,nsroot=/)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer,nsroot=/)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio,nsroot=/)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event,nsroot=/)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb,nsroot=/)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids,nsroot=/)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=23,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sda2 on /boot type ext3 (rw,relatime,data=ordered)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,relatime,data=ordered)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
cgmfs on /run/cgmanager/fs type tmpfs (rw,relatime,size=100k,mode=755)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/lxcfs/controllers type tmpfs (rw,relatime,size=100k,mode=700)
pids on /run/lxcfs/controllers/pids type cgroup (rw,relatime,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids,nsroot=/)
hugetlb on /run/lxcfs/controllers/hugetlb type cgroup (rw,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb,nsroot=/)
perf_event on /run/lxcfs/controllers/perf_event type cgroup (rw,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event,nsroot=/)
net_cls,net_prio on /run/lxcfs/controllers/net_cls,net_prio type cgroup (rw,relatime,net_cls,net_prio,nsroot=/)
freezer on /run/lxcfs/controllers/freezer type cgroup (rw,relatime,freezer,nsroot=/)
devices on /run/lxcfs/controllers/devices type cgroup (rw,relatime,devices,nsroot=/)
memory on /run/lxcfs/controllers/memory type cgroup (rw,relatime,memory,nsroot=/)
blkio on /run/lxcfs/controllers/blkio type cgroup (rw,relatime,blkio,nsroot=/)
cpu,cpuacct on /run/lxcfs/controllers/cpu,cpuacct type cgroup (rw,relatime,cpu,cpuacct,nsroot=/)
cpuset on /run/lxcfs/controllers/cpuset type cgroup (rw,relatime,cpuset,clone_children,nsroot=/)
name=systemd on /run/lxcfs/controllers/name=systemd type cgroup (rw,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd,nsroot=/)

Thanks
 
Okay, you should be able to fix this by removing the contents of "/var/lib/lxcfs/" and then rebooting (you can try restarting just lxcfs.service first; that may be enough).

Still not sure why those files are in there... unfortunately lxcfs does not clean up the controller/cgroup mounts if the FUSE mount fails, so the checks for lxcfs when starting a container don't fail even though lxcfs is not available..
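The suggested fix can be sketched as a small script. Everything here is illustrative: a throwaway directory stands in for /var/lib/lxcfs so the sketch is safe to run anywhere; on the real host you would point it at the actual mountpoint and follow up with `systemctl restart lxcfs.service` (or a reboot).

```shell
# Simulate the stale leftovers found in /var/lib/lxcfs (a cgroup/ and
# a proc/ directory) inside a throwaway directory, then clean them out.
DIR=$(mktemp -d)
mkdir -p "$DIR/cgroup" "$DIR/proc"

# Safety check: refuse to delete anything if the directory is still a
# live mountpoint.
if grep -q " $DIR " /proc/mounts; then
    echo "something is mounted on $DIR - unmount it first" >&2
else
    rm -rf "${DIR:?}"/*      # empty the mountpoint
    echo "cleaned, $(ls -A "$DIR" | wc -l) entries remain"
fi
# On the real host: systemctl restart lxcfs.service
```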
 

Thank you!!! lxcfs now works fine :) ... but why did I get the beta updates from the stable-release apt repo?
 

Thanks for the feedback.

As always, beta packages with no known issues are released to the stable repos (after a testing period).
 