Unable to start 3 of my containers - Proxmox 4.2

carbonjoker

New Member
Oct 3, 2014
Hi all :D

I recently updated my server to the newest version of Proxmox; however, after I restarted the server I was unable to boot 3 of the containers. I am not sure exactly why they are not booting, but I do have the logs to go with them.

The container itself returns a task error as follows:

TASK ERROR: command 'lxc-start -n 108' failed: exit code 1

I also tried to run this command to see if it gave more information, but I am not sure what the output means:

/etc/pve/lxc# lxc-start -n 108 -F -o=/test.log

readline() on closed filehandle $fd at /usr/share/lxc/hooks/lxc-pve-autodev-hook line 32.
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 4
lxc-start: start.c: __lxc_start: 1211 failed to spawn '108'
umount: /var/lib/lxc/108/rootfs: target is busy
(In some cases useful info about processes that
use the device is found by lsof(8) or fuser(1).)
command 'umount --recursive /var/lib/lxc/108/rootfs' failed: exit code 32
lxc-start: conf.c: run_buffer: 342 Script exited with status 32
lxc-start: start.c: lxc_fini: 517 failed to run post-stop hooks for container '108'.
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
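(For reference, I assume the fuller debug invocation that last line refers to would look something like this - the log path is just an example:)

lxc-start -n 108 -F --logfile=/tmp/lxc-108-debug.log --logpriority=DEBUG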

All 3 containers are like this and I am not sure how to fix it. Does anyone have any ideas?

Thanks in advance :D
 
Please can you post the container configuration file?

arch: amd64
cpulimit: 4
cpuunits: 1024
hostname: toll
memory: 4096
mp0: /mnt/hdd2/shares/ent,mp=/mnt/ent
mp1: /mnt/hdd2/downloads,mp=/mnt/downloads
net0: bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=2A:C6:6B:3E:AB:68,ip=192.168.1.108/24,name=eth0,type=veth
ostype: debian
rootfs: hdd1:108/vm-108-disk-1.raw,size=4G
swap: 8192
 
I can't see how that can happen - I need to talk with Wolfgang on Monday...

Maybe you can do further tests to find out what's wrong; for example, can you start new CTs?
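A quick CLI test could look roughly like this (999 is just an unused VMID, and the template and storage names are only examples - use whatever is available on your node):

pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz --hostname testct --memory 512 --rootfs local:4
pct start 999
pct stop 999 && pct destroy 999   # remove the test CT again afterwards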
 
Just as a bit more information: one of the boot logs says that there is no /sbin/init file in the container.
After making a copy of a backup and extracting it, I found that /sbin/init really does not exist in the container. Could this be causing the failed boot?
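(For reference, a rough way to check this directly on the container's rootfs - assuming the usual mount path - is:)

pct mount 108
ls -l /var/lib/lxc/108/rootfs/sbin/init
pct unmount 108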

Here is a copy of the boot log:

lxc-start 1462095812.200 ERROR lxc_start - start.c:start:1296 - No such file or directory - failed to exec /sbin/init
lxc-start 1462095812.201 ERROR lxc_sync - sync.c:__sync_wait:51 - invalid sequence number 1. expected 4
lxc-start 1462095812.201 ERROR lxc_start - start.c:__lxc_start:1211 - failed to spawn '108'
lxc-start 1462095812.678 ERROR lxc_conf - conf.c:run_buffer:342 - Script exited with status 32
lxc-start 1462095812.678 ERROR lxc_start - start.c:lxc_fini:517 - failed to run post-stop hooks for container '108'.
lxc-start 1462095812.680 ERROR lxc_start_ui - lxc_start.c:main:344 - The container failed to start.
lxc-start 1462095812.680 ERROR lxc_start_ui - lxc_start.c:main:348 - Additional information can be obtained by setting the --logfile and --logpriority options.
lxc-start 1462096185.341 INFO lxc_start_ui - lxc_start.c:main:264 - using rcfile /var/lib/lxc/108/config
lxc-start 1462096185.341 WARN lxc_confile - confile.c:config_pivotdir:1817 - lxc.pivotdir is ignored. It will soon become an error.
lxc-start 1462096185.343 WARN lxc_cgmanager - cgmanager.c:cgm_get:994 - do_cgm_get exited with error
lxc-start 1462096185.344 INFO lxc_lsm - lsm/lsm.c:lsm_init:48 - LSM security driver AppArmor
lxc-start 1462096185.344 INFO lxc_seccomp - seccomp.c:parse_config_v2:324 - processing: .reject_force_umount # comment this to allow umount -f; not recommended.
lxc-start 1462096185.344 INFO lxc_seccomp - seccomp.c:parse_config_v2:426 - Adding native rule for reject_force_umount action 0
lxc-start 1462096185.345 INFO lxc_seccomp - seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force umounts
 
Bump, same issue here.
EDIT: I'm actually able to create new containers with the GUI. However, they do not start either :/
EDIT2: I did a fresh install of the host, downloaded the Ubuntu 15.10 template and created a new container. I still can't start an LXC. :(
EDIT3: I was able to make a 15.10 LXC on the Proxmox drive that runs the OS, and it starts up. However, when I make an LXC on my zpool it refuses to start. What would cause this?
EDIT4: I created a brand new zpool on a USB drive to test. I was able to create and start an LXC on that pool, so there is something different about my data pool compared to the new pool. For some reason, whatever is different did not matter before 4.2....
EDIT5: So I've found the root cause of my LXC issue: the ZFS file system must have casesensitivity set to sensitive. If it's set to insensitive then LXC will refuse to boot. I really wish that Proxmox would set the case sensitivity to the correct value when it creates the subvol for you. It would have saved me a lot of headaches.

root@proxmox:~# lxc-start -n 102 -F
unable to open file '/fastboot.tmp.16946' - No such file or directory
error in setup task PVE::LXC::Setup::pre_start_hook
lxc-start: conf.c: run_buffer: 342 Script exited with status 1
lxc-start: start.c: lxc_init: 436 failed to run pre-start hooks for container '102'.
lxc-start: start.c: __lxc_start: 1170 failed to initialize the container
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
 
EDIT5: So I've found the root cause of my LXC issue: the ZFS file system must have casesensitivity set to sensitive. If it's set to insensitive then LXC will refuse to boot. I really wish that Proxmox would set the case sensitivity to the correct value when it creates the subvol for you. It would have saved me a lot of headaches.

"sensitive" is the default value for zfs datasets, and I see nothing in our installer or other code base that would change this.. can you post the output of "zfs get casesensitivity -r POOL" with POOL obviously replaced with your affected zfs pool
 
"sensitive" is the default value for zfs datasets, and I see nothing in our installer or other code base that would change this.. can you post the output of "zfs get casesensitivity -r POOL" with POOL obviously replaced with your affected zfs pool

I think you are mistaken. I have no doubt that for a pool created by Proxmox the default would be sensitive.

I'm talking about the datasets that Proxmox creates on pools. It's currently relying on inheritance: whatever dataset Proxmox makes its new "subvol" inside, it gets the setting from the parent. That is the standard behavior for dataset creation on ZFS. However, since LXC on Proxmox requires "sensitive" to work correctly, I'm saying the Proxmox scripts should also set that property explicitly when they create a new dataset for LXC.

My zpool originated on a Mac Pro (insensitive was a must, or Final Cut and some other software would have issues). I imported it into Proxmox. It worked just fine with 4.1; however, something changed in 4.2 and I could no longer use LXC without "sensitive".

My solution was to create a dataset on my main pool called "vm": tank/vm.
I created it and set "sensitive" at creation time. Now ZFS (and Proxmox) will inherit that setting when creating datasets inside vm.
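For reference, the check and the dataset creation were basically just this (pool name from my setup; note that the property can only be set when a dataset is created):

zfs get casesensitivity -r tank
zfs create -o casesensitivity=sensitive tank/vm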

I had to manually create datasets with matching names for each of my LXCs and rsync them to tank/vm/subvol...
Now they all boot correctly.
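Roughly, for each container it was something like this (the dataset name and source path are just examples from my setup):

zfs create tank/vm/subvol-108-disk-1
rsync -aAXH /tank/old/subvol-108-disk-1/ /tank/vm/subvol-108-disk-1/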

I might be an edge case since I'm using a pool that originated from a different machine, so I can understand if this is not a priority. But maybe some info in the wiki that LXC will crash in this situation would be good for us edge cases.
 
After a reboot of my node I could not start any container. Same error as the OP: lxc-start -n 203 failed: exit code 1.
It turned out that AppArmor was not running:

root@pve1:/var/log# service apparmor status
● apparmor.service - LSB: AppArmor initialization
Loaded: loaded (/etc/init.d/apparmor)
Active: inactive (dead)


After I restarted AppArmor I could start the container(s) without a problem.
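For reference, it was basically just:

service apparmor restart
service apparmor status   # should now show it as active
pct start 203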
 
