[SOLVED] Reboot of PVE Host breaks LXC Container startup

clemens

New Member
Jun 27, 2019
Hi Proxmox-Community,

currently I am struggling to properly set up some LXC containers that I created from the available LXC templates in Proxmox.

I am on a freshly installed non-subscription Proxmox, which I just updated via dist-upgrade to 5.4-7.

For now I want to set up a MongoDB and a Gitea LXC container.

Ticking "Unprivileged container" in the "Create CT" dialog results in an error during creation:

Code:
extracting archive '/var/lib/vz/template/cache/debian-9-turnkey-mongodb_15.0-1_amd64.tar.gz'
tar: ./var/spool/postfix/dev/urandom: Cannot mknod: Operation not permitted
tar: ./var/spool/postfix/dev/random: Cannot mknod: Operation not permitted
Total bytes read: 1045647360 (998MiB, 168MiB/s)
tar: Exiting with failure status due to previous errors
TASK ERROR: unable to create CT 104 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - -z --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/104/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2

Unticking "Unprivileged container" in the "Create CT" dialog results in a startable container that can be set up.

BUT after rebooting the PVE host I am unable to start the containers again.
I get these errors:

Code:
Job for pve-container@100.service failed because the control process exited with error code.
See "systemctl status pve-container@100.service" and "journalctl -xe" for details.
TASK ERROR: command 'systemctl start pve-container@100' failed: exit code 1

Code:
root@pve:~# systemctl status pve-container@100.service
  pve-container@100.service - PVE LXC Container: 100
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2019-06-27 16:20:04 CEST; 21min ago
     Docs: man:lxc-start
           man:lxc
           man:pct
  Process: 2405 ExecStart=/usr/bin/lxc-start -n 100 (code=exited, status=1/FAILURE)

Jun 27 16:20:04 pve systemd[1]: Starting PVE LXC Container: 100...
Jun 27 16:20:04 pve lxc-start[2405]: lxc-start: 100: lxccontainer.c: wait_on_daemonized_start: 856 No such file or directory - Fa
Jun 27 16:20:04 pve lxc-start[2405]: lxc-start: 100: tools/lxc_start.c: main: 330 The container failed to start
Jun 27 16:20:04 pve lxc-start[2405]: lxc-start: 100: tools/lxc_start.c: main: 333 To get more details, run the container in foreg
Jun 27 16:20:04 pve lxc-start[2405]: lxc-start: 100: tools/lxc_start.c: main: 336 Additional information can be obtained by setti
Jun 27 16:20:04 pve systemd[1]: pve-container@100.service: Control process exited, code=exited status=1
Jun 27 16:20:04 pve systemd[1]: Failed to start PVE LXC Container: 100.
Jun 27 16:20:04 pve systemd[1]: pve-container@100.service: Unit entered failed state.
Jun 27 16:20:04 pve systemd[1]: pve-container@100.service: Failed with result 'exit-code'.

Code:
root@pve:~# journalctl -xe
-- Unit pve-container@100.service has begun starting up.
Jun 27 16:43:32 pve lxc-start[27779]: lxc-start: 100: lxccontainer.c: wait_on_daemonized_start: 856 No such file or directory - Failed to receive the container
Jun 27 16:43:32 pve lxc-start[27779]: lxc-start: 100: tools/lxc_start.c: main: 330 The container failed to start
Jun 27 16:43:32 pve lxc-start[27779]: lxc-start: 100: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
Jun 27 16:43:32 pve lxc-start[27779]: lxc-start: 100: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpri
Jun 27 16:43:32 pve systemd[1]: pve-container@100.service: Control process exited, code=exited status=1
Jun 27 16:43:32 pve systemd[1]: Failed to start PVE LXC Container: 100.
-- Subject: Unit pve-container@100.service has failed
-- Defined-By: systemd
-- Support:
--
-- Unit pve-container@100.service has failed.
--
-- The result is failed.
Jun 27 16:43:32 pve pvedaemon[2319]: unable to get PID for CT 100 (not running?)
Jun 27 16:43:32 pve systemd[1]: pve-container@100.service: Unit entered failed state.
Jun 27 16:43:32 pve systemd[1]: pve-container@100.service: Failed with result 'exit-code'.
Jun 27 16:43:32 pve pvedaemon[27777]: command 'systemctl start pve-container@100' failed: exit code 1
Jun 27 16:43:32 pve pvedaemon[2320]: <root@pam> end task UPID:pve:00006C81:00022946:5D14D613:vzstart:100:root@pam: command 'systemctl start pve-container@100'
Jun 27 16:44:00 pve systemd[1]: Starting Proxmox VE replication runner...
-- Subject: Unit pvesr.service has begun start-up
-- Defined-By: systemd
-- Support:
-- Unit pvesr.service has begun starting up.
Jun 27 16:44:00 pve systemd[1]: Started Proxmox VE replication runner.
-- Subject: Unit pvesr.service has finished start-up
-- Defined-By: systemd
-- Support:
--
-- Unit pvesr.service has finished starting up.
--
-- The start-up result is done.
Jun 27 16:44:36 pve pvedaemon[2320]: <root@pam> starting task UPID:pve:00006F85:000242A1:5D14D654:vncproxy:102:root@pam:
Jun 27 16:44:36 pve pvedaemon[28549]: starting lxc termproxy UPID:pve:00006F85:000242A1:5D14D654:vncproxy:102:root@pam:
Jun 27 16:44:37 pve pvedaemon[28549]: command '/usr/bin/termproxy 5901 --path /vms/102 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole102 -r
Jun 27 16:44:37 pve pvedaemon[2320]: <root@pam> end task UPID:pve:00006F85:000242A1:5D14D654:vncproxy:102:root@pam: command '/usr/bin/termproxy 5901 --path /vm
Jun 27 16:44:37 pve pvedaemon[2318]: <root@pam> starting task UPID:pve:00006FCC:000242E2:5D14D655:vncproxy:100:root@pam:
Jun 27 16:44:37 pve pvedaemon[28620]: starting lxc termproxy UPID:pve:00006FCC:000242E2:5D14D655:vncproxy:100:root@pam:
Jun 27 16:44:37 pve pvedaemon[2320]: <root@pam> successful auth for user 'root@pam'
Jun 27 16:44:38 pve pvedaemon[2318]: <root@pam> end task UPID:pve:00006FCC:000242E2:5D14D655:vncproxy:100:root@pam: OK
Jun 27 16:44:38 pve pvedaemon[2320]: <root@pam> starting task UPID:pve:00006FD8:0002436B:5D14D656:vzstart:100:root@pam:
Jun 27 16:44:38 pve pvedaemon[28632]: starting CT 100: UPID:pve:00006FD8:0002436B:5D14D656:vzstart:100:root@pam:
Jun 27 16:44:38 pve systemd[1]: Starting PVE LXC Container: 100...
-- Subject: Unit pve-container@100.service has begun start-up
-- Defined-By: systemd
-- Support:
--
-- Unit pve-container@100.service has begun starting up.
Jun 27 16:44:39 pve lxc-start[28634]: lxc-start: 100: lxccontainer.c: wait_on_daemonized_start: 856 No such file or directory - Failed to receive the container
Jun 27 16:44:39 pve lxc-start[28634]: lxc-start: 100: tools/lxc_start.c: main: 330 The container failed to start
Jun 27 16:44:39 pve lxc-start[28634]: lxc-start: 100: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
Jun 27 16:44:39 pve lxc-start[28634]: lxc-start: 100: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpri
Jun 27 16:44:39 pve systemd[1]: pve-container@100.service: Control process exited, code=exited status=1
Jun 27 16:44:39 pve systemd[1]: Failed to start PVE LXC Container: 100.
-- Subject: Unit pve-container@100.service has failed
-- Defined-By: systemd
-- Support:
--
-- Unit pve-container@100.service has failed.
--
-- The result is failed.
Jun 27 16:44:39 pve systemd[1]: pve-container@100.service: Unit entered failed state.
Jun 27 16:44:39 pve systemd[1]: pve-container@100.service: Failed with result 'exit-code'.
Jun 27 16:44:39 pve pvedaemon[28632]: command 'systemctl start pve-container@100' failed: exit code 1
Jun 27 16:44:39 pve pvedaemon[2320]: <root@pam> end task UPID:pve:00006FD8:0002436B:5D14D656:vzstart:100:root@pam: command 'systemctl start pve-container@100'

Am I missing something here, or is LXC actually broken at the moment?
Is this feature working in the subscription repo?

We are considering getting a subscription, but I first wanted to test our workflow...

Cheers,
Clemens

PS: the output of `pveversion -v`:
Code:
root@pve:~# pveversion -v
proxmox-ve: 5.4-1 (running kernel: 4.15.18-16-pve)
pve-manager: 5.4-7 (running version: 5.4-7/fc10404a)
pve-kernel-4.15: 5.4-4
pve-kernel-4.15.18-16-pve: 4.15.18-41
pve-kernel-4.15.18-12-pve: 4.15.18-36
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-10
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-52
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-43
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-37
pve-container: 2.0-39
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-53
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 

oguz

Proxmox Staff Member
Nov 19, 2018
Hi,

Ticking "Unpriviledged container" in the "Create CT" dialog results in an error during creation:
this is normal behaviour, since device nodes cannot be created in unprivileged containers. You will have to create this one as a privileged container.
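For reference, a minimal sketch of the equivalent privileged creation on the CLI (the VMID, template path and storage name are taken from this thread or assumed; adjust them to your setup):

```shell
# Create the CT as privileged (--unprivileged 0), which avoids the
# "Cannot mknod" errors during template extraction.
# "local-zfs" is an assumed storage name -- use your own.
pct create 104 /var/lib/vz/template/cache/debian-9-turnkey-mongodb_15.0-1_amd64.tar.gz \
    --hostname mongodb \
    --storage local-zfs \
    --unprivileged 0
```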

For the other problem,

try running your container in the foreground like this:
Code:
lxc-start -n CTID -F -l DEBUG -o /tmp/lxc-CTID.log

This will create a debug log in /tmp/lxc-CTID.log. Please paste that log here so we can try to figure out what's going wrong.

Also send the output of `pct config CTID`.
 

clemens

New Member
Jun 27, 2019
Thanks for your reply, here are the logs:

Code:
root@pve:/# pct config 100
arch: amd64
cores: 2
hostname: mongoDB
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.133,hwaddr=AA:1F:A5:F8:E9:A6,ip=192.168.1.210/24,type=veth
onboot: 1
ostype: debian
rootfs: lxc:subvol-100-disk-0,size=10G
swap: 4096
root@pve:/# pct config 101
arch: amd64
cores: 2
hostname: gitea
memory: 4096
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.133,hwaddr=9A:61:AB:65:87:9C,ip=192.168.1.211/24,type=veth
onboot: 1
ostype: debian
rootfs: lxc:subvol-101-disk-0,size=10G
swap: 4096

Code:
root@pve:/# cat /tmp/lxc-100.log
lxc-start 100 20190628115816.651 INFO     lsm - lsm/lsm.c:lsm_init:50 - LSM security driver AppArmor
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for reject_force_umount action 0(kill)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for reject_force_umount action 0(kill)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for reject_force_umount action 0(kill)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for reject_force_umount action 0(kill)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "[all]"
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "kexec_load errno 1"
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for kexec_load action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for kexec_load action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for kexec_load action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for kexec_load action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "open_by_handle_at errno 1"
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for open_by_handle_at action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for open_by_handle_at action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for open_by_handle_at action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for open_by_handle_at action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "init_module errno 1"
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for init_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for init_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for init_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for init_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "finit_module errno 1"
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for finit_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for finit_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for finit_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for finit_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "delete_module errno 1"
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for delete_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for delete_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for delete_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for delete_module action 327681(errno)
lxc-start 100 20190628115816.651 INFO     seccomp - seccomp.c:parse_config_v2:970 - Merging compat seccomp contexts into main context
lxc-start 100 20190628115816.651 INFO     conf - conf.c:run_script_argv:356 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "100", config section "lxc"
lxc-start 100 20190628115816.923 DEBUG    conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100 lxc pre-start with output: unable to detect OS distribution

lxc-start 100 20190628115816.927 ERROR    conf - conf.c:run_buffer:335 - Script exited with status 2
lxc-start 100 20190628115816.927 ERROR    start - start.c:lxc_init:861 - Failed to run lxc.hook.pre-start for container "100"
lxc-start 100 20190628115816.927 ERROR    start - start.c:__lxc_start:1944 - Failed to initialize container "100"
lxc-start 100 20190628115816.927 ERROR    lxc_start - tools/lxc_start.c:main:330 - The container failed to start
lxc-start 100 20190628115816.928 ERROR    lxc_start - tools/lxc_start.c:main:336 - Additional information can be obtained by setting the --logfile and --logpriority options

Code:
root@pve:/# cat /tmp/lxc-101.log
lxc-start 101 20190628120022.381 INFO     lsm - lsm/lsm.c:lsm_init:50 - LSM security driver AppArmor
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for reject_force_umount action 0(kill)
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for reject_force_umount action 0(kill)
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for reject_force_umount action 0(kill)
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for reject_force_umount action 0(kill)
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "[all]"
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "kexec_load errno 1"
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for kexec_load action 327681(errno)
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for kexec_load action 327681(errno)
lxc-start 101 20190628120022.382 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for kexec_load action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for kexec_load action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "open_by_handle_at errno 1"
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for open_by_handle_at action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for open_by_handle_at action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for open_by_handle_at action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for open_by_handle_at action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "init_module errno 1"
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for init_module action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for init_module action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for init_module action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for init_module action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "finit_module errno 1"
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for finit_module action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for finit_module action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for finit_module action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for finit_module action 327681(errno)
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:759 - Processing "delete_module errno 1"
lxc-start 101 20190628120022.383 INFO     seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for delete_module action 327681(errno)
lxc-start 101 20190628120022.384 INFO     seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for delete_module action 327681(errno)
lxc-start 101 20190628120022.384 INFO     seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for delete_module action 327681(errno)
lxc-start 101 20190628120022.384 INFO     seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for delete_module action 327681(errno)
lxc-start 101 20190628120022.384 INFO     seccomp - seccomp.c:parse_config_v2:970 - Merging compat seccomp contexts into main context
lxc-start 101 20190628120022.384 INFO     conf - conf.c:run_script_argv:356 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "101", config section "lxc"
lxc-start 101 20190628120022.306 DEBUG    conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start with output: unable to detect OS distribution

lxc-start 101 20190628120022.311 ERROR    conf - conf.c:run_buffer:335 - Script exited with status 2
lxc-start 101 20190628120022.311 ERROR    start - start.c:lxc_init:861 - Failed to run lxc.hook.pre-start for container "101"
lxc-start 101 20190628120022.311 ERROR    start - start.c:__lxc_start:1944 - Failed to initialize container "101"
lxc-start 101 20190628120022.311 ERROR    lxc_start - tools/lxc_start.c:main:330 - The container failed to start
lxc-start 101 20190628120022.311 ERROR    lxc_start - tools/lxc_start.c:main:336 - Additional information can be obtained by setting the --logfile and --logpriority options
 

oguz

Proxmox Staff Member
Nov 19, 2018
In both cases I see:

Code:
conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start with output: unable to detect OS distribution

which means there's probably something wrong with the container.

We detect the OS distribution by checking certain files in /etc/ inside the container, depending on the `ostype` set in the config. Since both of yours are Debian, it should be checking /etc/debian_version.

Can you check whether this file exists? It needs to be there for the container to start.

If it's not there, please check whether /etc/ is populated at all; an empty /etc/ might be an indicator of bigger problems.
 

clemens

New Member
Jun 27, 2019
There is no lxc-pve-prestart-hook present in /usr/share/lxc:

Code:
root@pve:/usr/share/lxc# ls
config  lxc.functions  lxc-patch.py            pve-container-stop-wrapper  templates
hooks   lxcnetaddbr    lxc-pve-reboot-trigger  selinux

/etc/ is populated:

Code:
root@pve:/etc# ls
adduser.conf            deluser.conf      insserv.conf.d  manpath.config  python2.7         smartd.conf
aliases                 dhcp              iproute2        mediaprm        python3           smartmontools
aliases.db              dpkg              iscsi           mime.types      python3.5         ssh
alternatives            environment       issue           mke2fs.conf     rc0.d             ssl
apm                     ethertypes        issue.net       modprobe.d      rc1.d             staff-group-for-usr-local
apparmor                fdmount.conf      kernel          modules         rc2.d             subgid
apparmor.d              fonts             ksmtuned.conf   modules-load.d  rc3.d             subgid-
apt                     fstab             kvm             motd            rc4.d             subuid
bash.bashrc             fuse.conf         ldap            mtab            rc5.d             subuid-
bash_completion         gai.conf          ld.so.cache     nanorc          rc6.d             sudoers.d
bash_completion.d       groff             ld.so.conf      netconfig       rc.d              sysctl.conf
bindresvport.blacklist  group             ld.so.conf.d    network         rcS.d             sysctl.d
binfmt.d                group-            libaudit.conf   networks        reportbug.conf    systemd
ca-certificates         grub.d            libnl-3         newt            request-key.conf  terminfo
ca-certificates.conf    gshadow           locale.alias    nsswitch.conf   request-key.d     timezone
calendar                gshadow-          locale.gen      opt             resolvconf        tmpfiles.d
ceph                    gss               localtime       os-release      resolv.conf       ucf.conf
cifs-utils              gssapi_mech.conf  logcheck        pam.conf        rmt               udev
console-setup           hdparm.conf       login.defs      pam.d           rpc               ufw
corosync                host.conf         logrotate.conf  passwd          rsyslog.conf      update-motd.d
cron.d                  hostid            logrotate.d     passwd-         rsyslog.d         vim
cron.daily              hostname          lvm             perl            samba             vzdump.conf
cron.hourly             hosts             lxc             postfix         securetty         wgetrc
cron.monthly            hosts.allow       lynx            ppp             security          X11
crontab                 hosts.deny        machine-id      profile         selinux           xdg
cron.weekly             idmapd.conf       magic           profile.d       services          zfs
dbus-1                  init              magic.mime      protocols       shadow
debconf.conf            init.d            mailcap         pulse           shadow-
debian_version          initramfs-tools   mailcap.order   pve             shells
default                 inputrc           mail.rc         python          skel


Code:
root@pve:/usr/share/lxc# cat /etc/debian_version
9.9

Also, I can run the freshly created privileged containers. I just cannot start them again after a host restart...
Do you think a fresh install of Proxmox could solve the issue?
 

oguz

Proxmox Staff Member
Nov 19, 2018
/usr/share/lxc:
check the hooks directory

Sorry, I meant the /etc/debian_version in the container rootfs.

If you can't start the CT, you can check it like this:

Code:
$ pct mount 101
mounted CT 101 in '/var/lib/lxc/101/rootfs'
$ cat /var/lib/lxc/101/rootfs/etc/debian_version
9.9

edit:

/etc/ is populated:
also check this in the container rootfs
 

clemens

New Member
Jun 27, 2019
Okay, sorry, I overlooked the hook.

Code:
root@pve:/usr/share/lxc/hooks# ls
clonehostname  dhclient-script       lxc-pve-poststop-hook  mountecryptfsroot  squid-deb-proxy-client
dhclient       lxc-pve-autodev-hook  lxc-pve-prestart-hook  nvidia             ubuntu-cloud-prep

Code:
root@pve:/usr/share/lxc/hooks# cat lxc-pve-prestart-hook
#!/usr/bin/perl

package lxc_pve_prestart_hook;

use strict;
use warnings;

exit 0 if $ENV{LXC_NAME} && $ENV{LXC_NAME} !~ /^\d+$/;

use POSIX;
use File::Path;
use Fcntl ':mode';

use PVE::SafeSyslog;
use PVE::Tools;
use PVE::Cluster;
use PVE::INotify;
use PVE::RPCEnvironment;
use PVE::JSONSchema qw(get_standard_option);
use PVE::CLIHandler;
use PVE::Storage;
use PVE::LXC;
use PVE::LXC::Setup;

use base qw(PVE::CLIHandler);

__PACKAGE__->register_method ({
    name => 'lxc-pve-prestart-hook',
    path => 'lxc-pve-prestart-hook',
    method => 'GET',
    description => "Create a new container root directory.",
    parameters => {
        additionalProperties => 0,
        properties => {
            name => {
                description => "The container name. This hook is only active for containers using numeric IDs, where configuration is stored on /etc/pve/lxc/<name>.conf (else it is just a NOP).",
                type => 'string',
                pattern => '\S+',
                maxLength => 64,
            },
            path => {
                description => "The path to the container configuration directory (LXC internal argument - do not pass manually!).",
                type => 'string',
            },
            rootfs => {
                description => "The path to the container's rootfs (LXC internal argument - do not pass manually!)",
                type => 'string',
            },
        },
    },
    returns => { type => 'null' },

    code => sub {
        my ($param) = @_;

        return undef if $param->{name} !~ m/^\d+$/;

        my $vmid = $param->{name};
        my $skiplock_flag_fn = "/run/lxc/skiplock-$vmid";
        my $skiplock = 1 if -e $skiplock_flag_fn;
        unlink $skiplock_flag_fn if $skiplock;

        PVE::Cluster::check_cfs_quorum(); # only start if we have quorum

        return undef if ! -f PVE::LXC::Config->config_file($vmid);

        my $conf = PVE::LXC::Config->load_config($vmid);
        if (!$skiplock && !PVE::LXC::Config->has_lock($conf, 'mounted')) {
            PVE::LXC::Config->check_lock($conf);
        }

        my $storage_cfg = PVE::Storage::config();

        my $vollist = PVE::LXC::Config->get_vm_volumes($conf);
        my $loopdevlist = PVE::LXC::Config->get_vm_volumes($conf, 'rootfs');

        PVE::Storage::activate_volumes($storage_cfg, $vollist);

        my $rootdir = $param->{rootfs};

        # Delete any leftover reboot-trigger file
        unlink("/var/lib/lxc/$vmid/reboot");

        my $devlist_file = "/var/lib/lxc/$vmid/devices";
        unlink $devlist_file;
        my $devices = [];

        my $setup_mountpoint = sub {
            my ($ms, $mountpoint) = @_;

            #return if $ms eq 'rootfs';
            my (undef, undef, $dev) = PVE::LXC::mountpoint_mount($mountpoint, $rootdir, $storage_cfg);
            push @$devices, $dev if $dev && $mountpoint->{quota};
        };

        # Unmount first when the user mounted the container with "pct mount".
        eval {
            PVE::Tools::run_command(['umount', '--recursive', $rootdir], outfunc => sub {}, errfunc => sub {});
        };

        PVE::LXC::Config->foreach_mountpoint($conf, $setup_mountpoint);

        my $lxc_setup = PVE::LXC::Setup->new($conf, $rootdir);
        $lxc_setup->pre_start_hook();

        if (@$devices) {
            my $devlist = '';
            foreach my $dev (@$devices) {
                my ($mode, $rdev) = (stat($dev))[2,6];
                next if !$mode || !S_ISBLK($mode) || !$rdev;
                my $major = PVE::Tools::dev_t_major($rdev);
                my $minor = PVE::Tools::dev_t_minor($rdev);
                $devlist .= "b:$major:$minor:$dev\n";
            }
            PVE::Tools::file_set_contents($devlist_file, $devlist);
        }
        return undef;
    }});


push @ARGV, 'help' if !scalar(@ARGV);

my $param = {};

if ((scalar(@ARGV) == 3) && ($ARGV[1] eq 'lxc') && ($ARGV[2] eq 'pre-start')) {
    $param->{name} = $ENV{'LXC_NAME'};
    die "got wrong name" if $param->{name} ne $ARGV[0];

    $param->{path} = $ENV{'LXC_CONFIG_FILE'};
    $param->{rootfs} = $ENV{'LXC_ROOTFS_PATH'};
    @ARGV = ();
} else {
    @ARGV = ('help');
}

our $cmddef = [ __PACKAGE__, 'lxc-pve-prestart-hook', [], $param];

__PACKAGE__->run_cli_handler();

Here is the output from mounting the CT - it seems the image is empty?!

Code:
root@pve:/var/lib/lxc# pct mount 101
mounted CT 101 in '/var/lib/lxc/101/rootfs'
root@pve:/var/lib/lxc# cd /var/lib/lxc/101/rootfs/
root@pve:/var/lib/lxc/101/rootfs# ls
dev
root@pve:/var/lib/lxc/101/rootfs# cd dev
root@pve:/var/lib/lxc/101/rootfs/dev# ls
root@pve:/var/lib/lxc/101/rootfs/dev#
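For anyone hitting the same thing: a small sketch of how to tell apart "dataset is mounted" from "I am just looking at the bare mountpoint directory". The `mounted_ok` helper is made up for illustration; in real use its input would come from `zfs get`.

```shell
#!/bin/sh
# Sketch: decide whether a ZFS dataset is really mounted at its mountpoint.
# mounted_ok is a hypothetical helper; it just interprets the "yes"/"no"
# value of the ZFS "mounted" property.
mounted_ok() {
    [ "$1" = "yes" ]
}

# In real use the value would come from ZFS, e.g.:
#   val=$(zfs get -H -o value mounted tank/lxc/subvol-101-disk-0)
val="no"  # stub value for illustration

if mounted_ok "$val"; then
    echo "dataset is mounted - the rootfs contents are real"
else
    echo "dataset is NOT mounted - this is just the empty mountpoint directory"
fi
```

An almost empty rootfs containing only a `dev` directory after `pct mount` is a typical sign that the subvol dataset itself never got mounted.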
 

oguz

Proxmox Staff Member
Staff member
Nov 19, 2018
4,435
532
118
Is the filesystem that you use for the container storage actually mounted?
 

clemens

New Member
Ahaa, indeed it was not mounted, since the mountpoint directory was not empty...

After deleting the leftover files there and mounting the dataset again, I was able to start the CT.

Code:
root@pve:/tank# zfs get mounted
NAME                              PROPERTY  VALUE    SOURCE
rpool                             mounted   yes      -
rpool/ROOT                        mounted   yes      -
rpool/ROOT/pve-1                  mounted   yes      -
rpool/data                        mounted   yes      -
tank                              mounted   no       -
tank/backup                       mounted   no       -
tank/backup/dump                  mounted   yes      -
tank/lxc                          mounted   no       -
tank/lxc/subvol-100-disk-0        mounted   no       -
tank/lxc/subvol-101-disk-0        mounted   no       -
tank/lxc/subvol-102-disk-0        mounted   yes      -
tank/lxc/subvol-102-disk-1        mounted   no       -
tank/vm                           mounted   yes      -
tank/vm/base-230-disk-0           mounted   -        -
tank/vm/base-230-disk-0@__base__  mounted   -        -
tank/vm/vm-202-disk-0             mounted   -        -
tank/vm/vm-231-disk-0             mounted   -        -
tank/vm/vm-999-disk-0             mounted   -        -
root@pve:/tank# zfs mount
rpool                       tank/backup                 tank/lxc/subvol-102-disk-0  tank/vm/vm-231-disk-0
rpool/data                  tank/backup/dump            tank/lxc/subvol-102-disk-1  tank/vm/vm-999-disk-0
rpool/ROOT                  tank/lxc                    tank/vm
rpool/ROOT/pve-1            tank/lxc/subvol-100-disk-0  tank/vm/base-230-disk-0
tank                        tank/lxc/subvol-101-disk-0  tank/vm/vm-202-disk-0
root@pve:/tank# zfs mount tank
cannot mount '/tank': directory is not empty
root@pve:/tank# zfs mount tank/lxc/subvol-100-disk-0
cannot mount '/tank/lxc/subvol-100-disk-0': directory is not empty
root@pve:/tank# rm -rf /tank/lxc/subvol-100-disk-0/
root@pve:/tank# zfs mount tank/lxc/subvol-100-disk-0
root@pve:/tank# zfs get mounted
NAME                              PROPERTY  VALUE    SOURCE
rpool                             mounted   yes      -
rpool/ROOT                        mounted   yes      -
rpool/ROOT/pve-1                  mounted   yes      -
rpool/data                        mounted   yes      -
tank                              mounted   no       -
tank/backup                       mounted   no       -
tank/backup/dump                  mounted   yes      -
tank/lxc                          mounted   no       -
tank/lxc/subvol-100-disk-0        mounted   yes      -
tank/lxc/subvol-101-disk-0        mounted   no       -
tank/lxc/subvol-102-disk-0        mounted   yes      -
tank/lxc/subvol-102-disk-1        mounted   no       -
tank/vm                           mounted   yes      -
tank/vm/base-230-disk-0           mounted   -        -
tank/vm/base-230-disk-0@__base__  mounted   -        -
tank/vm/vm-202-disk-0             mounted   -        -
tank/vm/vm-231-disk-0             mounted   -        -
tank/vm/vm-999-disk-0             mounted   -        -

Although after restarting the host, I have the same problem again...

edit: How can I ensure that the mountpoints remain empty before being mounted on startup?

edit2: After a restart I always end up with an empty /dev/ folder at /tank/lxc/subvol-CTID-disk-#/
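To partially answer my own edit question: a guard like the following sketch (dataset name from my setup; `safe_to_clean` is a made-up helper) would only wipe stray files while the dataset is unmounted, so real container data can never be deleted by accident.

```shell
#!/bin/sh
# Sketch: only clear stray files from a ZFS mountpoint when the dataset is
# NOT mounted; otherwise rm -rf would hit the actual container rootfs.
# safe_to_clean is a hypothetical helper interpreting the "mounted" property.
safe_to_clean() {
    [ "$1" = "no" ]
}

ds="tank/lxc/subvol-100-disk-0"
dir="/tank/lxc/subvol-100-disk-0"

# In real use: state=$(zfs get -H -o value mounted "$ds")
state="no"  # stub value for illustration

if safe_to_clean "$state"; then
    echo "would run: rm -rf $dir/* && zfs mount $ds"
else
    echo "refusing: $ds is mounted, cleaning would delete the rootfs" >&2
fi
```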
 

Stoiko Ivanov

Proxmox Staff Member
Staff member
May 2, 2018
6,197
864
148
hmm - seems there might be an ordering problem with the units - could you please attach the journal of the host since the last boot:
* `journalctl -b` (please make sure to remove sensitive information!)

please also paste the content of '/etc/pve/storage.cfg'

Thanks!
 

clemens

New Member
The journal output was too long, so I attached it; I could not identify any sensitive information in it...

here is the storage information:

Code:
root@pve:/tmp# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso
        maxfiles 1
        shared 0

zfspool: local-zfs
        disable
        pool rpool/data
        content images,rootdir
        sparse 1

cifs: b_main_transfer
        path /mnt/pve/b_main_transfer
        server 192.168.1.120
        share b_transfer/
        content snippets
        maxfiles 1
        username clemens

dir: transfer_proxmox_iso
        path /mnt/pve/b_main_transfer/_proxmox/ISO
        content iso
        maxfiles 1
        shared 0

zfspool: lxc
        pool tank/lxc
        content rootdir
        sparse 1

zfspool: vm
        pool tank/vm
        content images
        sparse 0

dir: transfer_proxmox_backup
        path /mnt/pve/b_main_transfer/_proxmox/backup
        content backup
        maxfiles 7
        shared 0

dir: transfer_proxmox_VM
        path /mnt/pve/b_main_transfer/_proxmox/VM
        content images
        shared 0

dir: backup
        path /tank/backup
        content backup
        maxfiles 7
        shared 0

root@pve:/tmp# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   222G  1.95G   220G         -     0%     0%  1.00x  ONLINE  -
tank    888G  6.20G   882G         -     0%     0%  1.00x  ONLINE  -
root@pve:/tmp# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       1.95G   213G   104K  /rpool
rpool/ROOT                  1.94G   213G    96K  /rpool/ROOT
rpool/ROOT/pve-1            1.94G   213G  1.94G  /
rpool/data                    96K   213G    96K  /rpool/data
tank                         135G   725G   112K  /tank
tank/backup                  676M   725G    96K  /tank/backup
tank/backup/dump             676M   725G   676M  /tank/backup/dump
tank/lxc                    1.15G   725G   120K  /tank/lxc
tank/lxc/subvol-100-disk-0   597M  9.42G   597M  /tank/lxc/subvol-100-disk-0
tank/lxc/subvol-101-disk-0   577M  9.44G   577M  /tank/lxc/subvol-101-disk-0
tank/vm                      133G   725G    96K  /tank/vm
tank/vm/base-230-disk-0     34.1G   758G  1.11G  -
tank/vm/vm-202-disk-0       33.0G   757G  1.08G  -
tank/vm/vm-231-disk-0       33.0G   757G  1.10G  -
tank/vm/vm-999-disk-0       33.0G   757G  1.10G  -
root@pve:/tmp#

I am still experimenting with our CIFS share. Currently I am not able to create directories there, which is why the storage configuration is a bit redundant...

At one point I unmounted my container template storage while the CTs were running, in case this information is relevant.
Although it looks like AppArmor is somehow the culprit?

edit: removed attachment
 

Stoiko Ivanov

Proxmox Staff Member
Staff member
hmm - the following lines might be an indication:
Code:
Jun 28 16:44:30 pve systemd[1]: Reached target Encrypted Volumes.
Jun 28 16:44:30 pve systemd[1]: Starting Import ZFS pools by cache file...
Jun 28 16:44:30 pve systemd[1]: Starting Activation of LVM2 logical volumes...
Jun 28 16:44:30 pve systemd[1]: Started Activation of LVM2 logical volumes.
Jun 28 16:44:30 pve zpool[1111]: no pools available to import
Jun 28 16:44:30 pve systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jun 28 16:44:30 pve systemd[1]: Started Import ZFS pools by cache file.
Jun 28 16:44:30 pve systemd[1]: Reached target ZFS pool import target.
Jun 28 16:44:30 pve systemd[1]: Starting Mount ZFS filesystems...

* do you have an '/etc/zfs/zpool.cache' file on your live-system?
* do you have one in the initramfs for your kernel `lsinitramfs $initrd` ?
* are they equal?
* try regenerating it (see https://github.com/zfsonlinux/zfs/wiki/FAQ)
* make sure to include it in the initrd (`update-initramfs -k all -u`)

this hopefully makes sure that your zpools are available before the containers get started
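Put together, the checks above could look like this sketch (the `initrd_path` helper is made up; the pool name `tank` is from this thread, adjust to your setup). The two state-changing commands are shown as comments so nobody runs them blindly:

```shell
#!/bin/sh
# Hypothetical helper: path of the initramfs image for a kernel release.
initrd_path() {
    printf '/boot/initrd.img-%s' "$1"
}

if command -v lsinitramfs >/dev/null 2>&1; then
    # 1) cachefile present on the live system?
    ls -l /etc/zfs/zpool.cache
    # 2) cachefile present inside the initramfs of the running kernel?
    lsinitramfs "$(initrd_path "$(uname -r)")" | grep zpool.cache
fi

# 3) if they differ or are missing, regenerate and rebuild (run manually):
#      zpool set cachefile=/etc/zfs/zpool.cache tank
#      update-initramfs -k all -u
echo "expected initramfs: $(initrd_path "$(uname -r)")"
```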
 

clemens

New Member
indeed that fixed my problem:

Code:
Jun 28 18:10:31 pve systemd[1]: Started udev Wait for Complete Device Initialization.
Jun 28 18:10:31 pve systemd[1]: Starting Activation of LVM2 logical volumes...
Jun 28 18:10:31 pve systemd[1]: Started Activation of LVM2 logical volumes.
Jun 28 18:10:31 pve systemd[1]: Reached target Encrypted Volumes.
Jun 28 18:10:31 pve systemd[1]: Starting Import ZFS pools by cache file...
Jun 28 18:10:31 pve systemd[1]: Starting Activation of LVM2 logical volumes...
Jun 28 18:10:31 pve systemd[1]: Started Activation of LVM2 logical volumes.
Jun 28 18:10:31 pve systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jun 28 18:10:31 pve systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jun 28 18:10:31 pve systemd[1]: Reached target Local File Systems (Pre).
Jun 28 18:10:31 pve systemd[1]: Started Import ZFS pools by cache file.
Jun 28 18:10:31 pve systemd[1]: Reached target ZFS pool import target.
Jun 28 18:10:31 pve systemd[1]: Starting Mount ZFS filesystems...

I first made a copy of my zpool.cache, then executed this:

Code:
zpool set cachefile=/etc/zfs/zpool.cache tank

Code:
update-initramfs -k all -u

How can I prevent this issue in the future, after upgrading or setting up a fresh host?
 

Stoiko Ivanov

Proxmox Staff Member
Staff member
How can I prevent this issue in the future, after upgrading or setting up a fresh host?
* make sure to set the cachefile-option when creating a new zpool
* since you usually reboot when there's a new kernel, and the kernel's postinst scripts update the initramfs, this should work out automatically
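As a sketch of what that could look like (device names are examples only, do not copy them blindly; `cachefile_opt` is a made-up helper):

```shell
#!/bin/sh
# Hypothetical helper: build the cachefile property setting for zpool.
cachefile_opt() {
    printf 'cachefile=%s' "${1:-/etc/zfs/zpool.cache}"
}

# New pool - set the cachefile right at creation time (run manually):
#   zpool create -o "$(cachefile_opt)" tank mirror /dev/nvme0n1 /dev/nvme1n1
# Existing pool - set it afterwards, then rebuild the initramfs images:
#   zpool set "$(cachefile_opt)" tank
#   update-initramfs -k all -u
echo "option used: $(cachefile_opt)"
```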
 

clemens

New Member
During the install I left my two NVMe drives untouched and only configured a ZFS mirror on my two SATA SSDs for root.
In Proxmox I then created a ZFS pool on my NVMe drives.
I wanted to change it, so I destroyed the pool via the command line, since I could not figure out how to do it in the GUI.
After that I created a new zpool on the command line, not knowing about all the implications regarding the cachefile.

Lessons learned. Thanks for your patience and excellent support.
 
