I can't start LXC

Hello,

Everything was working fine, but after the server was reset the container won't start anymore.

How can I fix it?
Thanks

Job for pve-container@101.service failed because the control process exited with error code.
See "systemctl status pve-container@101.service" and "journalctl -xe" for details.
TASK ERROR: command 'systemctl start pve-container@101' failed: exit code 1

Code:
root@prox:~# systemctl status pve-container@101.service
● pve-container@101.service - PVE LXC Container: 101
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2018-02-11 22:37:04 EET; 17s ago
     Docs: man:lxc-start
           man:lxc
           man:pct
  Process: 25160 ExecStart=/usr/bin/lxc-start -n 101 (code=exited, status=1/FAILURE)

Feb 11 22:37:03 prox systemd[1]: Starting PVE LXC Container: 101...
Feb 11 22:37:04 prox lxc-start[25160]: lxc-start: 101: lxccontainer.c: wait_on_daemonized_start: 751 No such file or directory - Failed to receive the container state
Feb 11 22:37:04 prox lxc-start[25160]: lxc-start: 101: tools/lxc_start.c: main: 371 The container failed to start.
Feb 11 22:37:04 prox lxc-start[25160]: lxc-start: 101: tools/lxc_start.c: main: 373 To get more details, run the container in foreground mode.
Feb 11 22:37:04 prox lxc-start[25160]: lxc-start: 101: tools/lxc_start.c: main: 375 Additional information can be obtained by setting the --logfile and --logpriority
Feb 11 22:37:04 prox systemd[1]: pve-container@101.service: Control process exited, code=exited status=1
Feb 11 22:37:04 prox systemd[1]: Failed to start PVE LXC Container: 101.
Feb 11 22:37:04 prox systemd[1]: pve-container@101.service: Unit entered failed state.
Feb 11 22:37:04 prox systemd[1]: pve-container@101.service: Failed with result 'exit-code'.



root@prox:~# pveversion -v
proxmox-ve: 5.1-38 (running kernel: 4.13.13-5-pve)
pve-manager: 5.1-43 (running version: 5.1-43/bdb08029)
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.13.8-2-pve: 4.13.8-28
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.13.8-3-pve: 4.13.8-30
pve-kernel-4.13.8-1-pve: 4.13.8-27
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-3-pve: 4.13.13-34
pve-kernel-4.13.13-1-pve: 4.13.13-31
pve-kernel-4.10.17-3-pve: 4.10.17-23
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-20
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-16
pve-qemu-kvm: 2.9.1-6
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.4-pve2~bpo9
openvswitch-switch: 2.7.0-2

root@prox:~# pct config 101
arch: amd64
cores: 4
hostname: .....
memory: 8192
net0: name=eth0,bridge=vmbr0,hwaddr=82:32:03:75:85:17,ip=dhcp,ip6=dhcp,type=veth
onboot: 1
ostype: centos
rootfs: local-lvm:vm-101-disk-1,size=950G
swap: 4096


Code:
root@prox:~# systemctl status lxc@101.service
● lxc@101.service - LXC Container: 101
   Loaded: loaded (/lib/systemd/system/lxc@.service; disabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/lxc@.service.d
           └─pve-reboot.conf
   Active: inactive (dead)
     Docs: man:lxc-start
           man:lxc

-- The start-up result is done.
Feb 11 22:59:26 prox pvedaemon[1848]: <root@pam> starting task UPID:prox:00001FD8:000112D5:5A80AEAE:vzstart:101:root@pam:
Feb 11 22:59:26 prox pvedaemon[8152]: starting CT 101: UPID:prox:00001FD8:000112D5:5A80AEAE:vzstart:101:root@pam:
Feb 11 22:59:26 prox systemd[1]: Starting PVE LXC Container: 101...
-- Subject: Unit pve-container@101.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pve-container@101.service has begun starting up.
Feb 11 22:59:27 prox kernel: JBD2: Invalid checksum recovering block 4 in log
Feb 11 22:59:27 prox kernel: JBD2: recovery failed
Feb 11 22:59:27 prox kernel: EXT4-fs (dm-8): error loading journal
Feb 11 22:59:27 prox lxc-start[8154]: lxc-start: 101: lxccontainer.c: wait_on_daemonized_start: 751 No such file or directory - Failed to receive the container state
Feb 11 22:59:27 prox lxc-start[8154]: lxc-start: 101: tools/lxc_start.c: main: 371 The container failed to start.
Feb 11 22:59:27 prox lxc-start[8154]: lxc-start: 101: tools/lxc_start.c: main: 373 To get more details, run the container in foreground mode.
Feb 11 22:59:27 prox lxc-start[8154]: lxc-start: 101: tools/lxc_start.c: main: 375 Additional information can be obtained by setting the --logfile and --logpriority options.
Feb 11 22:59:27 prox systemd[1]: pve-container@101.service: Control process exited, code=exited status=1
Feb 11 22:59:27 prox systemd[1]: Failed to start PVE LXC Container: 101.
-- Subject: Unit pve-container@101.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pve-container@101.service has failed.
--
-- The result is failed.
Feb 11 22:59:27 prox pvedaemon[1849]: unable to get PID for CT 101 (not running?)
Feb 11 22:59:27 prox systemd[1]: pve-container@101.service: Unit entered failed state.
Feb 11 22:59:27 prox systemd[1]: pve-container@101.service: Failed with result 'exit-code'.
Feb 11 22:59:27 prox pvedaemon[8152]: command 'systemctl start pve-container@101' failed: exit code 1
Feb 11 22:59:27 prox pvedaemon[1848]: <root@pam> end task UPID:prox:00001FD8:000112D5:5A80AEAE:vzstart:101:root@pam: command 'systemctl start pve-container@101' failed: exit code 1
 
What error message do you get when you start the container in the foreground:

# lxc-start -n 101 -F
 
What error message do you get when you start the container in the foreground:

# lxc-start -n 101 -F

Good morning Dietmar,

Code:
root@prox:~# lxc-start -n 101 -F
lxc-start: 101: conf.c: run_buffer: 438 Script exited with status 32.
lxc-start: 101: start.c: lxc_init: 651 Failed to run lxc.hook.pre-start for container "101".
lxc-start: 101: start.c: __lxc_start: 1444 Failed to initialize container "101".
lxc-start: 101: tools/lxc_start.c: main: 371 The container failed to start.
lxc-start: 101: tools/lxc_start.c: main: 375 Additional information can be obtained by setting the --logfile and --logpriority options.
root@prox:~#
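For reference, the --logfile and --logpriority options mentioned in that output can be used to capture a full debug log of the failed start (the log path below is only an example):

Code:
# write a verbose log of the failed start attempt (log path is just an example)
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log
# then look for the pre-start hook output and errors
grep -Ei 'pre-start|error' /tmp/lxc-101.log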
 
Is it possible to mount the container? Try:

# pct mount 101
# pct unmount 101

Any errors? Also check the syslog, maybe there are some errors there?
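For example (just one way of looking for storage-related errors around the start attempt):

Code:
# kernel messages from the current boot, filtered for filesystem errors
journalctl -k -b | grep -iE 'ext4|jbd2|i/o error'
# or the plain syslog file
grep -iE 'ext4|jbd2' /var/log/syslog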
 
Is it possible to mount the container? Try:

# pct mount 101
# pct unmount 101

Any errors? Also check the syslog, maybe there are some errors there?

I get this error:

Code:
root@prox:~# pct unmount 101
root@prox:~# pct mount 101
mount: /dev/mapper/pve-vm--101--disk--1: can't read superblock
mounting container failed
command 'mount /dev/dm-8 /var/lib/lxc/101/rootfs//' failed: exit code 32
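Given the JBD2 / "can't read superblock" errors above, a filesystem check of the container volume is probably the next step (with the CT stopped and unmounted, and ideally after backing up the volume; the device path is the one from the error message):

Code:
# let Proxmox run fsck on the container's rootfs volume
pct fsck 101
# or run e2fsck on the LV directly (same device as in the error above)
fsck.ext4 -fy /dev/mapper/pve-vm--101--disk--1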
 
@dietmar I also faced the exact same problem. The only difference is that I could mount the drive.

root@local:~# pveversion -v
proxmox-ve: 5.1-42 (running kernel: 4.4.114-1-pve)
pve-manager: 5.1-46 (running version: 5.1-46/ae8241d4)
pve-kernel-4.13: 5.1-43
pve-kernel-4.13.16-1-pve: 4.13.16-45
pve-kernel-4.4.114-1-pve: 4.4.114-108
pve-kernel-4.2.8-1-pve: 4.2.8-41
corosync: 2.4.2-pve3
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-common-perl: 5.0-28
libpve-guest-common-perl: 2.0-14
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-17
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 2.1.1-3
lxcfs: 2.0.8-2
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-11
pve-cluster: 5.0-20
pve-container: 2.0-19
pve-docs: 5.1-16
pve-firewall: 3.0-5
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.9.1-9
pve-xtermjs: 1.0-2
qemu-server: 5.0-22
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3


root@local:~# pct config 103
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: localhost
lock: mounted
memory: 512
nameserver: 8.8.8.8 8.8.4.4
net0: bridge=vmbr1,gw=192.168.100.1,hwaddr=32:62:61:38:63:37,ip=192.168.100.104/24,name=eth0,type=veth
onboot: 1
ostype: debian
rootfs: local:103/vm-103-disk-1.raw,size=150G
searchdomain: localhost
swap: 512
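Note the "lock: mounted" line in the config above; if that lock is only a leftover from an interrupted mount or backup task, it probably needs to be cleared before another start attempt (only do this if no other task is still running on the CT):

Code:
# clear a stale lock on CT 103
pct unlock 103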

root@local:~# lxc-start -n 103 -F
lxc-start: 103: conf.c: run_buffer: 438 Script exited with status 32.
lxc-start: 103: start.c: lxc_init: 651 Failed to run lxc.hook.pre-start for container "103".
lxc-start: 103: start.c: __lxc_start: 1444 Failed to initialize container "103".
lxc-start: 103: tools/lxc_start.c: main: 371 The container failed to start.
lxc-start: 103: tools/lxc_start.c: main: 375 Additional information can be obtained by setting the --logfile and --logpriority options.

root@local:~# systemctl status pve-container@103.service
pve-container@103.service - PVE LXC Container: 103
Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-04-08 04:14:30 PDT; 10s ago
Docs: man:lxc-start
man:lxc
man:pct
Process: 27811 ExecStart=/usr/bin/lxc-start -n 103 (code=exited, status=1/FAILURE)

journalctl -xe shows the error below:

Apr 08 04:14:29 ssd systemd[1]: Starting PVE LXC Container: 103...
Apr 08 04:14:30 ssd lxc-start[27811]: lxc-start: 103: lxccontainer.c: wait_on_daemonized_start: 751 No such file or directory - Failed to receive the container state
Apr 08 04:14:30 ssd lxc-start[27811]: lxc-start: 103: tools/lxc_start.c: main: 371 The container failed to start.
Apr 08 04:14:30 ssd lxc-start[27811]: lxc-start: 103: tools/lxc_start.c: main: 373 To get more details, run the container in foreground mode.
Apr 08 04:14:30 ssd lxc-start[27811]: lxc-start: 103: tools/lxc_start.c: main: 375 Additional information can be obtained by setting the --logfile and --logpriority options.
Apr 08 04:14:30 ssd systemd[1]: pve-container@103.service: Control process exited, code=exited status=1
Apr 08 04:14:30 ssd systemd[1]: Failed to start PVE LXC Container: 103.
Apr 08 04:14:30 ssd systemd[1]: pve-container@103.service: Unit entered failed state.
Apr 08 04:14:30 ssd systemd[1]: pve-container@103.service: Failed with result 'exit-code'.

-- Unit pve-container@103.service has begun starting up.
Apr 08 04:14:30 ssd lxc-start[27811]: lxc-start: 103: lxccontainer.c: wait_on_daemonized_start: 751 No such file or directory - Failed to receive the container state
Apr 08 04:14:30 ssd lxc-start[27811]: lxc-start: 103: tools/lxc_start.c: main: 371 The container failed to start.
Apr 08 04:14:30 ssd lxc-start[27811]: lxc-start: 103: tools/lxc_start.c: main: 373 To get more details, run the container in foreground mode.
Apr 08 04:14:30 ssd lxc-start[27811]: lxc-start: 103: tools/lxc_start.c: main: 375 Additional information can be obtained by setting the --logfile and --logpriority options.
Apr 08 04:14:30 ssd systemd[1]: pve-container@103.service: Control process exited, code=exited status=1
Apr 08 04:14:30 ssd pct[27809]: command 'systemctl start pve-container@103' failed: exit code 1
Apr 08 04:14:30 ssd systemd[1]: Failed to start PVE LXC Container: 103.

Apr 08 04:14:30 ssd systemd[1]: pve-container@103.service: Unit entered failed state.
Apr 08 04:14:30 ssd systemd[1]: pve-container@103.service: Failed with result 'exit-code'.
Apr 08 04:14:30 ssd pct[27803]: <root@pam> end task UPID:ssd:00006CA1:00055B4E:5AC9F995:vzstart:103:root@pam: command 'systemctl start pve-container@103' failed: exit code 1
Apr 08 04:15:00 ssd systemd[1]: Starting Proxmox VE replication runner...

-- Unit pvesr.service has finished starting up.
--
-- The start-up result is done.
Apr 08 04:15:01 ssd CRON[27879]: pam_unix(cron:session): session opened for user root by (uid=0)
Apr 08 04:15:01 ssd CRON[27880]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Apr 08 04:15:01 ssd CRON[27879]: pam_unix(cron:session): session closed for user root

Any help would be appreciated!
 
I have the same problem as @claylua: I can mount the drive, but I cannot start the container. The messages in the logs are nearly identical.

Did you manage to solve this?

Jernej
 
I had the same problem, with the same error messages, and I could mount, but not start, the container.

Running lxc-start with a debug log led me to these lines:

Code:
lxc-start 111 20190619122541.747 INFO     conf - conf.c:run_script_argv:356 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "111", config section "lxc"
lxc-start 111 20190619122542.354 DEBUG    conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 111 lxc pre-start with output: unable to open file '/fastboot.tmp.309' - Disk quota exceeded

lxc-start 111 20190619122542.356 DEBUG    conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 111 lxc pre-start with output: error in setup task PVE::LXC::Setup::pre_start_hook

lxc-start 111 20190619122542.367 ERROR    conf - conf.c:run_buffer:335 - Script exited with status 1
lxc-start 111 20190619122542.368 ERROR    start - start.c:lxc_init:861 - Failed to run lxc.hook.pre-start for container "111"

Sure enough, the container's disk was full (I use ZFS datasets for my containers):

Code:
rpool/data/subvol-111-disk-0   60G   60G     0 100% /rpool/data/subvol-111-disk-0

Might not be the problem in your case, but I was able to start the container once I'd increased the quota.
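For reference, checking and raising the quota on such a subvol looks roughly like this (the dataset name is the one from my output above; the new size is only an example):

Code:
# show usage and the current refquota of the subvol
zfs list -o name,used,avail,refquota rpool/data/subvol-111-disk-0
# raise the quota directly ...
zfs set refquota=70G rpool/data/subvol-111-disk-0
# ... or, better, grow the disk through Proxmox so the config stays in sync
pct resize 111 rootfs +10G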
 
I have an even stranger problem starting an LXC container after rebooting the Proxmox server.
Manually mounting the container works:
pct mount 108
mounted CT 108 in '/var/lib/lxc/108/rootfs'
but:
ls -l /var/lib/lxc/108/rootfs/
total 1
drwxr-xr-x 2 root root 2 Jun 29 17:52 dev
shows only an empty "dev" folder. Everything else has disappeared.
The ZFS filesystem still shows the correct amount of used data:
zfs list tank/vmdata/subvol-108-disk-0
NAME                            USED   AVAIL  REFER  MOUNTPOINT
tank/vmdata/subvol-108-disk-0   4.28T  2.11T  4.28T  /tank/vmdata/subvol-108-disk-0
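One thing that might be worth checking (just a guess at this point) is whether the dataset is actually mounted at its mountpoint, or whether only an empty directory is left sitting there:

Code:
# is the dataset really mounted where ZFS thinks it should be?
zfs get mounted,mountpoint tank/vmdata/subvol-108-disk-0
# what is actually in that directory?
ls -la /tank/vmdata/subvol-108-disk-0
# if it is not mounted, try mounting it explicitly
zfs mount tank/vmdata/subvol-108-disk-0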
 
I have an even stranger problem starting an LXC container after rebooting the Proxmox server.
Manually mounting the container works:
pct mount 108
mounted CT 108 in '/var/lib/lxc/108/rootfs'
but:
ls -l /var/lib/lxc/108/rootfs/
total 1
drwxr-xr-x 2 root root 2 Jun 29 17:52 dev
shows only an empty "dev" folder. Everything else has disappeared.
The ZFS filesystem still shows the correct amount of used data:
zfs list tank/vmdata/subvol-108-disk-0
NAME                            USED   AVAIL  REFER  MOUNTPOINT
tank/vmdata/subvol-108-disk-0   4.28T  2.11T  4.28T  /tank/vmdata/subvol-108-disk-0

Hi there,
I have the same problem. Yesterday I migrated some CTs and VMs to my new node in the cluster.
I've read a bit about AppArmor in this context, but ... found no solution.
All of the migrated CTs have this problem, while newly created ones start without any problems.

Code:
root@pve003:~# cat 105.log
lxc-start 105 20190731055044.484 INFO     confile - confile.c:set_config_idmaps:1673 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 105 20190731055044.484 INFO     confile - confile.c:set_config_idmaps:1673 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 105 20190731055044.485 INFO     lxccontainer - lxccontainer.c:do_lxcapi_start:984 - Set process title to [lxc monitor] /var/lib/lxc 105
lxc-start 105 20190731055044.486 INFO     lsm - lsm/lsm.c:lsm_init:50 - LSM security driver AppArmor
lxc-start 105 20190731055044.486 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for reject_force_umount action 0(kill)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for reject_force_umount action 0(kill)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for reject_force_umount action 0(kill)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for reject_force_umount action 0(kill)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "[all]"
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "kexec_load errno 1"
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for kexec_load action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for kexec_load action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for kexec_load action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for kexec_load action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "open_by_handle_at errno 1"
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for open_by_handle_at action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for open_by_handle_at action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for open_by_handle_at action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for open_by_handle_at action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "init_module errno 1"
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for init_module action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for init_module action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for init_module action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for init_module action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "finit_module errno 1"
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for finit_module action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for finit_module action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for finit_module action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for finit_module action 327681(errno)
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "delete_module errno 1"
lxc-start 105 20190731055044.487 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for delete_module action 327681(errno)
lxc-start 105 20190731055044.488 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for delete_module action 327681(errno)
lxc-start 105 20190731055044.488 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for delete_module action 327681(errno)
lxc-start 105 20190731055044.488 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for delete_module action 327681(errno)
lxc-start 105 20190731055044.488 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "keyctl errno 38"
lxc-start 105 20190731055044.488 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for keyctl action 327718(errno)
lxc-start 105 20190731055044.488 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for keyctl action 327718(errno)
lxc-start 105 20190731055044.488 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for keyctl action 327718(errno)
lxc-start 105 20190731055044.488 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for keyctl action 327718(errno)
lxc-start 105 20190731055044.488 INFO     seccomp - seccomp.c:parse_config_v2:970 - Merging compat seccomp contexts into main context
lxc-start 105 20190731055044.488 INFO     conf - conf.c:run_script_argv:356 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "105", config section "lxc"
lxc-start 105 20190731055045.590 DEBUG    conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 105 lxc pre-start with output: unable to detect OS distribution

lxc-start 105 20190731055045.599 ERROR    conf - conf.c:run_buffer:335 - Script exited with status 2
lxc-start 105 20190731055045.599 ERROR    start - start.c:lxc_init:861 - Failed to run lxc.hook.pre-start for container "105"
lxc-start 105 20190731055045.599 ERROR    start - start.c:__lxc_start:1944 - Failed to initialize container "105"
lxc-start 105 20190731055045.637 DEBUG    lxccontainer - lxccontainer.c:wait_on_daemonized_start:853 - First child 32193 exited
lxc-start 105 20190731055045.637 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:856 - No such file or directory - Failed to receive the container state
lxc-start 105 20190731055045.637 ERROR    lxc_start - tools/lxc_start.c:main:330 - The container failed to start
lxc-start 105 20190731055045.637 ERROR    lxc_start - tools/lxc_start.c:main:333 - To get more details, run the container in foreground mode
lxc-start 105 20190731055045.637 ERROR    lxc_start - tools/lxc_start.c:main:336 - Additional information can be obtained by setting the --logfile and --logpriority options
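As far as I understand it, the "unable to detect OS distribution" message comes from the lxc-pve-prestart-hook inspecting the container's root filesystem; an empty rootfs (for example because the ZFS subvol is not actually mounted) produces exactly this error. A quick check, using CT 105 as in the log above:

Code:
pct mount 105
ls /var/lib/lxc/105/rootfs/            # should show a full root filesystem, not just an empty dev/
cat /var/lib/lxc/105/rootfs/etc/os-release
pct unmount 105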
 
I have an even stranger problem starting an LXC container after rebooting the Proxmox server.
Manually mounting the container works:
pct mount 108
mounted CT 108 in '/var/lib/lxc/108/rootfs'
but:
ls -l /var/lib/lxc/108/rootfs/
total 1
drwxr-xr-x 2 root root 2 Jun 29 17:52 dev
shows only an empty "dev" folder. Everything else has disappeared.
The ZFS filesystem still shows the correct amount of used data:
zfs list tank/vmdata/subvol-108-disk-0
NAME                            USED   AVAIL  REFER  MOUNTPOINT
tank/vmdata/subvol-108-disk-0   4.28T  2.11T  4.28T  /tank/vmdata/subvol-108-disk-0

Hi there,
I have the same problem. Yesterday I migrated some CTs and VMs to my new node in the cluster.
I've read a bit about AppArmor in this context, but ... found no solution.
All of the migrated CTs have this problem, while newly created ones start without any problems.

Did you find a solution? I have the exact same problem after upgrading to 6.0.
 
Did you find a solution? I have the exact same problem after upgrading to 6.0.
Hi!
The only thing that has helped so far:
#1
Code:
rm -rfv /$POOLNAME/sub...105/*
for each LXC, and then
Code:
zfs mount -a
to mount them again.

When the system starts, I also see a lot of AppArmor DENIED messages.

After a reboot it's the same problem again. No more ideas yet :/
 
Hi gents,
I ran into the same problem just today. VMs start, but containers don't. I've tried everything I can think of or have read online, but nothing seems to be working. Any solution yet?

==============
Update:
Rebooting still doesn't help, but if I mount the dataset with the subvols by hand each time, the containers start and work. I'll try to avoid reboots... but this is still more of a workaround than a proper fix.

I needed to delete some folders for the datasets to mount, as ZFS saw them as not empty.
So pretty much the same as @admoin.
 
I have the exact same issue after upgrading to 6.
Any ideas on how to solve it?
 
I have the exact same issue after upgrading to 6.
Any ideas on how to solve it?

What I did was unmount every ZFS dataset except rpool, clean out the folders they mount to, and remount everything. After that the containers boot again, and rebooting seems to work again too. My guess is that the filesystems/mount points didn't "like" the upgrade.
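In commands, something like this per affected subvol (the dataset name is only a placeholder):

Code:
# dataset name is only a placeholder - repeat for every affected subvol
zfs unmount rpool/data/subvol-105-disk-0
# remove the leftover files/dirs that keep ZFS from mounting onto the directory
# (only while the dataset is unmounted, and only if the contents are clearly stale leftovers!)
rm -rf /rpool/data/subvol-105-disk-0/*
zfs mount rpool/data/subvol-105-disk-0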
 
What I did was unmount every ZFS dataset except rpool, clean out the folders they mount to, and remount everything. After that the containers boot again, and rebooting seems to work again too. My guess is that the filesystems/mount points didn't "like" the upgrade.
Thanks for your reply.

I am not sure how to do that.

My root fs runs on ext4.

Mount shows me:
Code:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=32911264k,nr_inodes=8227816,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=6587392k,mode=755)
/dev/md0 on / type ext4 (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14893)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=6587392k,mode=700)

How can I unmount and remount my ZFS volumes?
I tried pct mount, but it did not make any difference.
 
I got it.

I removed the "dev" dirs in the subvolumes, then:
zfs unmount nvme1/subvol....
zfs mount nvme1/subvol....
lxc-start works fine again.

No idea what is going on there, but it worked.

Thanks again.
 
