[SOLVED] lxc not starting: "Failed mounting tmpfs onto /dev"

klauskurz

Hi!

When starting a fresh ubuntu 14.04 lxc container I am getting the following error:

Code:
lxc-start -F -f /etc/pve/lxc/8002.conf --name=test02 --logfile /tmp/lxc.log --logpriority TRACE
lxc-start: conf.c: mount_autodev: 1175 No such file or directory - Failed mounting tmpfs onto /dev

lxc-start: conf.c: tmp_proc_mount: 3687 No such file or directory - failed to mount /proc in the container.
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 183 No such file or directory - failed to change apparmor profile to lxc-container-default
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 4
lxc-start: start.c: __lxc_start: 1211 failed to spawn 'test02'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

Code:
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: test02.cbscluater.com
memory: 512
net0: bridge=vmbr4,hwaddr=62:65:39:34:35:65,name=eth0,type=veth
ostype: ubuntu
rootfs: local:8002/vm-8002-disk-1.raw,size=6G
swap: 512

Code:
proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-22 (running version: 4.1-22/aca130cf)
pve-kernel-4.2.8-1-pve: 4.2.8-39
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-36
qemu-server: 4.0-64
pve-firmware: 1.1-7
libpve-common-perl: 4.0-54
libpve-access-control: 4.0-13
libpve-storage-perl: 4.0-45
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-9
pve-container: 1.0-52
pve-firewall: 2.0-22
pve-ha-manager: 1.0-25
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie

I did not find anything about this in the existing forum threads. Help is much appreciated.
Thank you
Klaus
 
Hi!

I did some more investigation:

I copied a working lxc container from the production node to the node with the problem mentioned above.

The result is the same error:

Code:
lxc-start -F -f /etc/pve/lxc/8001.conf --name=test01 --logfile /tmp/lxc.log --logpriority TRACE

lxc-start: conf.c: mount_autodev: 1175 No such file or directory - Failed mounting tmpfs onto /dev
lxc-start: conf.c: tmp_proc_mount: 3687 No such file or directory - failed to mount /proc in the container.
lxc-start: lsm/apparmor.c: apparmor_process_label_set: 183 No such file or directory - failed to change apparmor profile to lxc-container-default
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 4
lxc-start: start.c: __lxc_start: 1211 failed to spawn 'test01'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

Code:
arch: amd64
cpulimit: 4
cpuunits: 1024
hostname: test01.cbscluster.com
memory: 2048
net0: bridge=vmbr4,hwaddr=66:65:61:62:33:39,ip=10.18.1.250/24,name=eth0,type=veth
ostype: ubuntu
rootfs: local:8001/vm-8001-disk-1.raw,size=6G
swap: 2560

lxcfs status:

Code:
systemctl status lxcfs.service
● lxcfs.service - FUSE filesystem for LXC
   Loaded: loaded (/lib/systemd/system/lxcfs.service; enabled)
   Active: active (running) since Sat 2016-04-09 11:28:08 CEST; 17min ago
Main PID: 1767 (lxcfs)
   CGroup: /system.slice/lxcfs.service
           └─1767 /usr/bin/lxcfs /var/lib/lxcfs/

Apr 09 11:28:08 h03 lxcfs[1767]: hierarchies: 0: hugetlb
Apr 09 11:28:08 h03 lxcfs[1767]: 1: perf_event
Apr 09 11:28:08 h03 lxcfs[1767]: 2: net_cls,net_prio
Apr 09 11:28:08 h03 lxcfs[1767]: 3: freezer
Apr 09 11:28:08 h03 lxcfs[1767]: 4: devices
Apr 09 11:28:08 h03 lxcfs[1767]: 5: memory
Apr 09 11:28:08 h03 lxcfs[1767]: 6: blkio
Apr 09 11:28:08 h03 lxcfs[1767]: 7: cpu,cpuacct
Apr 09 11:28:08 h03 lxcfs[1767]: 8: cpuset
Apr 09 11:28:08 h03 lxcfs[1767]: 9: name=systemd

On the production nodes, where everything is working, I did not apply the update:

Code:
proxmox-ve: 4.0-19 (running kernel: 4.2.3-2-pve)
pve-manager: 4.0-57 (running version: 4.0-57/cc7c2b53)
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-19
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-36
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-12
pve-container: 1.0-20
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie

So on the node with the older version, everything works fine.

But on the updated node the lxc error comes up and no container can be started.

Maybe it is something related to:
http://comments.gmane.org/gmane.linux.kernel.containers.lxc.devel/11857
 
Your command line is completely wrong. Please use "pct start CTID" to start containers, and only if that fails, use "lxc-start -n CTID -F -lTRACE -o /path/to/log/file".

lxc-start expects an lxc config file, not a PVE container config. The PVE lxc hooks expect the CTID as the container name, not some arbitrary string. "pct start" will generate an lxc config file from the container's PVE config file, so always use "pct start"! Running lxc-start directly is only for debugging issues, and only makes sense after having run "pct start"!
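To illustrate, the debugging workflow described above looks roughly like this (a sketch, assuming the CTID 8002 and the log path from the first post; run on the PVE host):

```shell
# Start the container the supported way; this regenerates the real lxc
# config from /etc/pve/lxc/8002.conf before launching.
pct start 8002

# Only if that fails, re-run via lxc-start to capture a TRACE-level log.
# Note: the name passed with -n must be the numeric CTID, not an
# arbitrary string, or the PVE lxc hooks will not find the container.
lxc-start -n 8002 -F -lTRACE -o /tmp/lxc-8002.log

# Then inspect the log for the first real error.
grep -i error /tmp/lxc-8002.log | head
```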
 
Hi Fabian!
Thank you very much for the correct command line.
It pointed in the right direction: one of the vmbr interfaces had not come up.
After fixing the network configuration, everything is working now.
Thank you again for the perfect service.
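For anyone hitting the same symptom: before digging into lxc internals, it may be worth verifying that the bridge referenced in the container's net0 line is actually up on the host (a sketch, assuming vmbr4 as in the configs above):

```shell
# Check whether the bridge exists and is in state UP.
ip link show vmbr4

# If it is missing or DOWN, bring it up from the host's
# /etc/network/interfaces definition.
ifup vmbr4

# After starting the container, confirm its veth device
# was attached to the bridge.
brctl show vmbr4
```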