Container only starts during system startup, cannot be started manually

Bob22

New Member
Sep 11, 2019
With Proxmox VE 6.0-4 I have the problem that my container 106 only starts during a reboot (where it starts automatically). After a shutdown I can no longer start it. The container lives on LVM-Thin storage, the template is the Nextcloud TurnKey template, and I haven't changed much:

Job for pve-container@106.service failed because the control process exited with error code.
See "systemctl status pve-container@106.service" and "journalctl -xe" for details.

TASK ERROR: command 'systemctl start pve-container@106' failed: exit code 1

I tried starting the container in foreground mode:

lxc-start: 106: conf.c: run_buffer: 335 Script exited with status 32
lxc-start: 106: start.c: lxc_init: 861 Failed to run lxc.hook.pre-start for container "106"
lxc-start: 106: start.c: __lxc_start: 1944 Failed to initialize container "106"
lxc-start: 106: tools/lxc_start.c: main: 330 The container failed to start

lxc-start: 106: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
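For reference, the extra verbosity the message above hints at can be requested with the standard lxc-start flags (-F foreground, -l log priority, -o logfile, per lxc-start(1)); the log path here is just an example. A sketch that only prints the command one would run on the host:

```shell
#!/bin/sh
# build the debug invocation for container 106 (run the printed command
# on the Proxmox host; /tmp/lxc-106.log is an arbitrary example path)
CTID=106
debug_cmd="lxc-start -n ${CTID} -F -l DEBUG -o /tmp/lxc-${CTID}.log"
echo "$debug_cmd"
```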

A start job for unit pve-container@106.service has begun execution.
--
-- The job identifier is 1085051.
Apr 06 01:30:20 server1 lxc-start[3771]: lxc-start: 106: lxccontainer.c: wait_on_daemonized_start: 856 No such file or directory - Failed to receive the container state
Apr 06 01:30:20 server1 lxc-start[3771]: lxc-start: 106: tools/lxc_start.c: main: 330 The container failed to start
Apr 06 01:30:20 server1 lxc-start[3771]: lxc-start: 106: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
Apr 06 01:30:20 server1 lxc-start[3771]: lxc-start: 106: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
Apr 06 01:30:20 server1 systemd[1]: pve-container@106.service: Control process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- An ExecStart= process belonging to unit pve-container@106.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Apr 06 01:30:20 server1 systemd[1]: pve-container@106.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit pve-container@106.service has entered the 'failed' state with result 'exit-code'.
Apr 06 01:30:20 server1 systemd[1]: Failed to start PVE LXC Container: 106.
-- Subject: A start job for unit pve-container@106.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pve-container@106.service has finished with a failure.
--
-- The job identifier is 1085051 and the job result is failed.
Apr 06 01:30:20 server1 pvedaemon[3769]: command 'systemctl start pve-container@106' failed: exit code 1
Apr 06 01:30:20 server1 pvedaemon[3250]: <root@pam> end task UPID:server1:00000EB9:0008D4DB:5E8A6A0A:vzstart:106:root@pam: command 'systemctl start pve-container@106' failed: exit code 1
Apr 06 01:30:20 server1 pvestatd[3169]: unable to get PID for CT 106 (not running?)
Apr 06 01:31:00 server1 systemd[1]: Starting Proxmox VE replication runner...
-- Subject: A start job for unit pvesr.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support

I have a feeling there is a problem with my storage, but I can't tell where exactly it lies.

# vgs:
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   4   0 wz--n- <223.07g <16.00g

# lvs:
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <141.43g             3.67   1.30
  root          pve -wi-ao----   55.75g
  swap          pve -wi-ao----    7.00g
  vm-106-disk-0 pve Vwi-a-tz-- 100.00g  data        5.20

# /etc/pve/storage.cfg:

dir: local
        path /var/lib/vz
        content vztmpl,iso,snippets,images,rootdir
        maxfiles 0
        shared 0

dir: BigData
        path /mnt/data
        content images,rootdir,vztmpl,iso,backup
        maxfiles 3
        shared 0

lvmthin: Thinpool
        thinpool data
        vgname pve
        content images,rootdir
 
Here is the logfile from the attempt to start the container in foreground mode:

lxc-start 106 20200405234201.957 INFO lsm - lsm/lsm.c:lsm_init:50 - LSM security driver AppArmor
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "reject_force_umount # comment this to allow umount -f; not recommended"
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for reject_force_umount action 0(kill)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for reject_force_umount action 0(kill)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for reject_force_umount action 0(kill)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:do_resolve_add_rule:505 - Set seccomp rule to reject force umounts
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for reject_force_umount action 0(kill)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "[all]"
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "kexec_load errno 1"
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for kexec_load action 327681(errno)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for kexec_load action 327681(errno)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for kexec_load action 327681(errno)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for kexec_load action 327681(errno)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "open_by_handle_at errno 1"
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for open_by_handle_at action 327681(errno)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for open_by_handle_at action 327681(errno)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for open_by_handle_at action 327681(errno)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for open_by_handle_at action 327681(errno)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "init_module errno 1"
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for init_module action 327681(errno)
lxc-start 106 20200405234201.957 INFO seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for init_module action 327681(errno)
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for init_module action 327681(errno)
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for init_module action 327681(errno)
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "finit_module errno 1"
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for finit_module action 327681(errno)
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for finit_module action 327681(errno)
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for finit_module action 327681(errno)
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for finit_module action 327681(errno)
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:759 - Processing "delete_module errno 1"
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:937 - Added native rule for arch 0 for delete_module action 327681(errno)
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:946 - Added compat rule for arch 1073741827 for delete_module action 327681(errno)
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:956 - Added compat rule for arch 1073741886 for delete_module action 327681(errno)
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:966 - Added native rule for arch -1073741762 for delete_module action 327681(errno)
lxc-start 106 20200405234201.958 INFO seccomp - seccomp.c:parse_config_v2:970 - Merging compat seccomp contexts into main context
lxc-start 106 20200405234201.958 INFO conf - conf.c:run_script_argv:356 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "106", config section "lxc"
lxc-start 106 20200405234202.925 DEBUG conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 106 lxc pre-start with output: mount: /var/lib/lxc/106/rootfs: special device /dev/pve/vm-106-disk-0 does not exist.

lxc-start 106 20200405234202.925 DEBUG conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 106 lxc pre-start with output: command 'mount /dev/pve/vm-106-disk-0 /var/lib/lxc/106/rootfs//' failed: exit code 32

lxc-start 106 20200405234202.948 ERROR conf - conf.c:run_buffer:335 - Script exited with status 32
lxc-start 106 20200405234202.948 ERROR start - start.c:lxc_init:861 - Failed to run lxc.hook.pre-start for container "106"
lxc-start 106 20200405234202.949 ERROR start - start.c:__lxc_start:1944 - Failed to initialize container "106"
lxc-start 106 20200405234202.949 ERROR lxc_start - tools/lxc_start.c:main:330 - The container failed to start
lxc-start 106 20200405234202.949 ERROR lxc_start - tools/lxc_start.c:main:336 - Additional information can be obtained by setting the --logfile and --logpriority options
 
can the container root be mounted?
(pct mount 106)
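For reference, `pct mount` attaches the container's root filesystem on the host (under the default PVE layout, at the path shown below); `pct unmount` releases it again. A small sketch of the mountpoint path only:

```shell
#!/bin/sh
# host-side mountpoint used by `pct mount <ctid>` (default PVE layout)
ct_rootfs() {
    echo "/var/lib/lxc/$1/rootfs"
}

# on the host one would run:
#   pct mount 106     -> mounts the CT root at the path below
#   pct unmount 106   -> releases it again afterwards
ct_rootfs 106
```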
 
Sorry for the duplicate thread and the late reply; at first my posts apparently got deleted by the board and I assumed they had been flagged as spam because of their length. So I later figured I simply couldn't post here at all and didn't check back until now.

Here is the output of pct mount 106:
Code:
mount: /var/lib/lxc/106/rootfs: special device /dev/pve/vm-106-disk-0 does not exist.
mounting container failed
command 'mount /dev/pve/vm-106-disk-0 /var/lib/lxc/106/rootfs//' failed: exit code 32
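For reference, "special device ... does not exist" typically means the thin LV has no active device node yet. Derived from the storage.cfg above (VG "pve", LV "vm-106-disk-0"), a sketch that builds the device path the pre-start hook tries to mount and the LVM activation command one would run on the host:

```shell
#!/bin/sh
# device path and activation command for the container's thin LV
# (VG and LV names taken from the vgs/lvs output earlier in the thread)
VG=pve
LV=vm-106-disk-0
dev="/dev/${VG}/${LV}"
activate="lvchange -ay ${VG}/${LV}"

echo "$dev"        # path the mount in the pre-start hook expects
echo "$activate"   # run this on the host to activate the LV
```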
 
hmm - please post the `lvs -a` and `vgs -a` output again - and also check `dmesg` to see whether a problem is visible
 
As if by magic, the container started this time - and I didn't change anything! I'm still posting the requested logs here; maybe something conspicuous can be found after all!

# lvs -a
(output attached as screenshot: lvs -a.PNG)
# vgs -a
(output attached as screenshot: vgs -a.PNG)

And here are the entries I found in dmesg relating to 106; a lot of AppArmor entries, plus a large number about "Port 6" and "Port 4" - which ports are meant there? The corresponding ports on my network switch are disabled, in any case.
Code:
[308572.590635] audit: type=1400 audit(1586932751.835:850): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=9713 comm="(ionclean)" flags="rw, rslave"
[308572.592128] audit: type=1400 audit(1586932751.835:851): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-100_</var/lib/lxc>" name="/" pid=9712 comm="(ionclean)" flags="rw, rslave"
[308787.238101] vmbr0: port 6(veth102i0) entered disabled state
[308831.390253] audit: type=1400 audit(1586933010.630:852): apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-102_</var/lib/lxc>" pid=15105 comm="apparmor_parser"
[308832.779951] vmbr0: port 6(veth102i0) entered disabled state
[308832.783788] device veth102i0 left promiscuous mode
[308832.783796] vmbr0: port 6(veth102i0) entered disabled state
[308834.102511] EXT4-fs (loop1): mounted filesystem with ordered data mode. Opts: (null)
[308834.334524] audit: type=1400 audit(1586933013.574:853): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-102_</var/lib/lxc>" pid=15169 comm="apparmor_parser"
[308835.235170] vmbr0: port 6(veth102i0) entered blocking state
[308835.235174] vmbr0: port 6(veth102i0) entered disabled state
[308835.235492] device veth102i0 entered promiscuous mode
[308835.496066] eth0: renamed from veth5LYE5O
[308836.071639] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[308836.071691] vmbr0: port 6(veth102i0) entered blocking state
[308836.071693] vmbr0: port 6(veth102i0) entered forwarding state
[309186.385395] audit: type=1400 audit(1586933365.624:854): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=19777 comm="(pachectl)" flags="rw, rslave"
[309186.445472] audit: type=1400 audit(1586933365.684:855): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=19778 comm="(un-parts)" flags="rw, rslave"
[309186.497774] audit: type=1400 audit(1586933365.736:856): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=19811 comm="(kill)" flags="rw, rslave"
[309186.679074] audit: type=1400 audit(1586933365.916:857): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=19845 comm="(un-parts)" flags="rw, rslave"
[309188.395128] vmbr0: port 4(veth106i0) entered disabled state
[309188.617547] audit: type=1400 audit(1586933367.856:858): apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-106_</var/lib/lxc>" pid=19910 comm="apparmor_parser"
[309189.994729] vmbr0: port 4(veth106i0) entered disabled state
[309189.998122] device veth106i0 left promiscuous mode
[309189.998128] vmbr0: port 4(veth106i0) entered disabled state
[309205.765480] EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null)
[309205.987189] audit: type=1400 audit(1586933385.228:859): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-106_</var/lib/lxc>" pid=20057 comm="apparmor_parser"
[309206.842860] vmbr0: port 4(veth106i0) entered blocking state
[309206.842863] vmbr0: port 4(veth106i0) entered disabled state
[309206.842991] device veth106i0 entered promiscuous mode
[309207.016930] eth0: renamed from vethVY5O9O
[309207.721031] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[309207.721078] vmbr0: port 4(veth106i0) entered blocking state
[309207.721080] vmbr0: port 4(veth106i0) entered forwarding state
[309207.970985] audit: type=1400 audit(1586933387.212:860): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=20432 comm="(un-parts)" flags="rw, rslave"
[309207.990172] audit: type=1400 audit(1586933387.232:861): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=20436 comm="(install)" flags="rw, rslave"
[309208.003899] audit: type=1400 audit(1586933387.244:862): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=20435 comm="(pachectl)" flags="rw, rslave"
[309208.009104] audit: type=1400 audit(1586933387.248:863): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=20444 comm="(s-server)" flags="rw, rslave"
[309208.047738] audit: type=1400 audit(1586933387.288:864): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=20450 comm="(sh)" flags="rw, rslave"
[309208.074652] audit: type=1400 audit(1586933387.316:865): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=20465 comm="(un-parts)" flags="rw, rslave"
[309208.103606] audit: type=1400 audit(1586933387.344:866): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=20476 comm="(sh)" flags="rw, rslave"
[309208.384444] audit: type=1400 audit(1586933387.624:867): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=20662 comm="(mysqld)" flags="rw, rslave"
[309209.034031] audit: type=1400 audit(1586933388.276:868): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-106_</var/lib/lxc>" name="/" pid=20788 comm="(an-start)" flags="rw, rslave"

Best regards
 
good that the problem is gone :) please mark the thread as 'SOLVED' (it potentially helps other users)

vmbr0: port 6(veth102i0)
these are the internal bridge ports the containers are attached to (virtual interfaces) - the naming scheme is veth<VMID>i<interfaceid>, so this is the first interface of container 102. Such messages appear e.g. whenever a container is started.
in other words, nothing out of the ordinary
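The naming scheme described above can be decoded mechanically; a small shell sketch (the scheme veth<VMID>i<interfaceid> is taken from the explanation above):

```shell
#!/bin/sh
# split a Proxmox veth device name into container ID and interface index
parse_veth() {
    name=$1                 # e.g. veth106i0
    vmid=${name#veth}       # strip the "veth" prefix -> 106i0
    vmid=${vmid%%i*}        # drop everything from the first "i" -> 106
    ifidx=${name##*i}       # everything after the last "i" -> 0
    echo "CT ${vmid}, interface ${ifidx}"
}

parse_veth veth102i0    # first interface of container 102
parse_veth veth106i0    # first interface of container 106
```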
 
