[SOLVED] Can't start LXC-Container

BeatOne

Member
Mar 4, 2020
Hello,

After manually stopping a container, it isn't possible to start it again.

Code:
root@Proxmox:~# systemctl status pve-container@100.service

● pve-container@100.service - PVE LXC Container: 100
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2020-03-13 06:45:31 CET; 4min 0s ago
     Docs: man:lxc-start
           man:lxc
           man:pct
  Process: 26512 ExecStart=/usr/bin/lxc-start -n 100 (code=exited, status=1/FAILURE)

Mar 13 06:45:31 Proxmox systemd[1]: Starting PVE LXC Container: 100...
Mar 13 06:45:31 Proxmox lxc-start[26512]: lxc-start: 100: lxccontainer.c: wait_on_daemonized_start: 865 No such file or directory - Failed to receive the container state
Mar 13 06:45:31 Proxmox lxc-start[26512]: lxc-start: 100: tools/lxc_start.c: main: 329 The container failed to start
Mar 13 06:45:31 Proxmox lxc-start[26512]: lxc-start: 100: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode
Mar 13 06:45:31 Proxmox lxc-start[26512]: lxc-start: 100: tools/lxc_start.c: main: 335 Additional information can be obtained by setting the --logfile and --logpriority options
Mar 13 06:45:31 Proxmox systemd[1]: pve-container@100.service: Control process exited, code=exited, status=1/FAILURE
Mar 13 06:45:31 Proxmox systemd[1]: pve-container@100.service: Failed with result 'exit-code'.
Mar 13 06:45:31 Proxmox systemd[1]: Failed to start PVE LXC Container: 100.
root@Proxmox:~#

Is there anything I can do to get this machine running?

Thanks for the help :)
 

Moayad

Proxmox Staff Member
Staff member
Jan 2, 2020
Hi,

Please post the complete logs so we can understand what is going on: lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log, plus the output of pveverion -v and pct config 100 as well.
 

BeatOne

Member
Mar 4, 2020
Hi,

Please post the complete logs so we can understand what is going on: lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log, plus the output of pveverion -v and pct config 100 as well.


Output of lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log :

Code:
root@Proxmox:~# lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log
lxc-start: 100: conf.c: run_buffer: 352 Script exited with status 2
lxc-start: 100: start.c: lxc_init: 897 Failed to run lxc.hook.pre-start for container "100"
lxc-start: 100: start.c: __lxc_start: 2032 Failed to initialize container "100"
Segmentation fault
root@Proxmox:~#

Output of pveverion -v :

Code:
root@Proxmox:~# pveverion -v
-bash: pveverion: command not found
root@Proxmox:~#

Output of pct config 100:

Code:
root@Proxmox:~# pct config 100
arch: amd64
cores: 1
hostname: TeamSpeak
memory: 512
nameserver: 192.168.178.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.178.1,hwaddr=0E:19:8D:52:CE:FF,ip=192.168.178.211/24,type=veth
ostype: centos
parent: ServerSnapshot
rootfs: local-zfs:subvol-100-disk-0,size=8G
swap: 512
unprivileged: 1
root@Proxmox:~#
 

oguz

Proxmox Retired Staff
Retired Staff
Nov 19, 2018
Output of lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log :
Please post the contents of the file /tmp/lxc-100.log (that is the debug log we need; the command you ran creates it. Sorry for the misunderstanding). You can attach it here as a .txt file.

Output of pveverion -v :
Seems like this was mistyped; it should be pveversion -v.
 

BeatOne

Member
Mar 4, 2020
Also, the server has been acting really weird.

When I log back on to the web interface after a day or more, I only see grey question marks everywhere, and the server does not respond to any of my input.
The command "htop" also no longer produces any output on the console via SSH. I have to restart the server manually, since it does not respond via the console or the web interface.

Can somebody help me? I'm desperate here.
 

BeatOne

Member
Mar 4, 2020
Please post the contents of the file /tmp/lxc-100.log (that is the debug log we need; the command you ran creates it. Sorry for the misunderstanding). You can attach it here as a .txt file.


Seems like this was mistyped; it should be pveversion -v.


Content of /tmp/lxc-100.log :
Code:
root@Proxmox:~# cat /tmp/lxc-100.log
lxc-start 100 20200313162200.178 INFO     confile - confile.c:set_config_idmaps:2003 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 100 20200313162200.178 INFO     confile - confile.c:set_config_idmaps:2003 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 100 20200313162200.179 INFO     lsm - lsm/lsm.c:lsm_init:50 - LSM security driver AppArmor
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:789 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:do_resolve_add_rule:535 - Set seccomp rule to reject force umounts
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:975 - Added native rule for arch 0 for reject_force_umount action 0(kill)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:do_resolve_add_rule:535 - Set seccomp rule to reject force umounts
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:984 - Added compat rule for arch 1073741827 for reject_force_umount action 0(kill)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:do_resolve_add_rule:535 - Set seccomp rule to reject force umounts
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:994 - Added compat rule for arch 1073741886 for reject_force_umount action 0(kill)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:do_resolve_add_rule:535 - Set seccomp rule to reject force umounts
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:1004 - Added native rule for arch -1073741762 for reject_force_umount action 0(kill)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:789 - Processing "[all]"
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:789 - Processing "kexec_load errno 1"
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:975 - Added native rule for arch 0 for kexec_load action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:984 - Added compat rule for arch 1073741827 for kexec_load action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:994 - Added compat rule for arch 1073741886 for kexec_load action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:1004 - Added native rule for arch -1073741762 for kexec_load action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:789 - Processing "open_by_handle_at errno 1"
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:975 - Added native rule for arch 0 for open_by_handle_at action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:984 - Added compat rule for arch 1073741827 for open_by_handle_at action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:994 - Added compat rule for arch 1073741886 for open_by_handle_at action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:1004 - Added native rule for arch -1073741762 for open_by_handle_at action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:789 - Processing "init_module errno 1"
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:975 - Added native rule for arch 0 for init_module action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:984 - Added compat rule for arch 1073741827 for init_module action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:994 - Added compat rule for arch 1073741886 for init_module action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:1004 - Added native rule for arch -1073741762 for init_module action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:789 - Processing "finit_module errno 1"
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:975 - Added native rule for arch 0 for finit_module action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:984 - Added compat rule for arch 1073741827 for finit_module action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:994 - Added compat rule for arch 1073741886 for finit_module action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:1004 - Added native rule for arch -1073741762 for finit_module action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:789 - Processing "delete_module errno 1"
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:975 - Added native rule for arch 0 for delete_module action 327681(errno)
lxc-start 100 20200313162200.179 INFO     seccomp - seccomp.c:parse_config_v2:984 - Added compat rule for arch 1073741827 for delete_module action 327681(errno)
lxc-start 100 20200313162200.180 INFO     seccomp - seccomp.c:parse_config_v2:994 - Added compat rule for arch 1073741886 for delete_module action 327681(errno)
lxc-start 100 20200313162200.180 INFO     seccomp - seccomp.c:parse_config_v2:1004 - Added native rule for arch -1073741762 for delete_module action 327681(errno)
lxc-start 100 20200313162200.180 INFO     seccomp - seccomp.c:parse_config_v2:789 - Processing "keyctl errno 38"
lxc-start 100 20200313162200.180 INFO     seccomp - seccomp.c:parse_config_v2:975 - Added native rule for arch 0 for keyctl action 327718(errno)
lxc-start 100 20200313162200.180 INFO     seccomp - seccomp.c:parse_config_v2:984 - Added compat rule for arch 1073741827 for keyctl action 327718(errno)
lxc-start 100 20200313162200.180 INFO     seccomp - seccomp.c:parse_config_v2:994 - Added compat rule for arch 1073741886 for keyctl action 327718(errno)
lxc-start 100 20200313162200.180 INFO     seccomp - seccomp.c:parse_config_v2:1004 - Added native rule for arch -1073741762 for keyctl action 327718(errno)
lxc-start 100 20200313162200.180 INFO     seccomp - seccomp.c:parse_config_v2:1008 - Merging compat seccomp contexts into main context
lxc-start 100 20200313162200.180 INFO     conf - conf.c:run_script_argv:372 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "100", config section "lxc"
lxc-start 100 20200313162200.531 DEBUG    conf - conf.c:run_buffer:340 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100 lxc pre-start produced output: unable to detect OS distribution

lxc-start 100 20200313162200.537 ERROR    conf - conf.c:run_buffer:352 - Script exited with status 2
lxc-start 100 20200313162200.537 ERROR    start - start.c:lxc_init:897 - Failed to run lxc.hook.pre-start for container "100"
lxc-start 100 20200313162200.537 ERROR    start - start.c:__lxc_start:2032 - Failed to initialize container "100"
root@Proxmox:~#

Output of pveversion -v :

Code:
root@Proxmox:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
root@Proxmox:~#
 

oguz

Proxmox Retired Staff
Retired Staff
Nov 19, 2018
Okay, I see this line: Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100 lxc pre-start produced output: unable to detect OS distribution

This usually means the zpool isn't being mounted (your container is on local-zfs).

The following commands should help us debug further:
Code:
zfs get all
zpool status

You can also try zpool import POOLNAME, where POOLNAME is the name of your pool.
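As a quick sanity check, you can also verify whether the container's dataset is actually mounted. This is only a sketch: it assumes the default Proxmox layout where local-zfs:subvol-100-disk-0 maps to the dataset rpool/data/subvol-100-disk-0, and the helper name ensure_mounted is just for illustration — adjust the dataset name to your setup.

```shell
#!/bin/sh
# Assumed dataset name for CT 100 on a default Proxmox local-zfs setup;
# substitute your own (see 'zfs list').
DATASET="rpool/data/subvol-100-disk-0"

# Mount everything if the given dataset is not currently mounted.
ensure_mounted() {
    # 'zfs get -H -o value mounted' prints just 'yes' or 'no'.
    state="$(zfs get -H -o value mounted "$1")" || return 1
    if [ "$state" != "yes" ]; then
        echo "$1 is not mounted; mounting all unmounted datasets"
        # Mounts every unmounted dataset of the imported pools.
        zfs mount -a
    fi
}

# Only attempt this on a host that actually has ZFS installed.
if command -v zfs >/dev/null 2>&1; then
    ensure_mounted "$DATASET"
fi
```

If the dataset shows mounted=no while the pool is online, zfs mount -a is usually enough to get the container starting again.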
 

BeatOne

Member
Mar 4, 2020
Okay, I see this line: Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100 lxc pre-start produced output: unable to detect OS distribution

This usually means the zpool isn't being mounted (your container is on local-zfs).

The following commands should help us debug further:
Code:
zfs get all
zpool status

You can also try zpool import POOLNAME, where POOLNAME is the name of your pool.




Output of zpool status:

Code:
root@Proxmox:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME                                                     STATE     READ WRITE CKSUM
        rpool                                                    ONLINE       0     0     0
          mirror-0                                               ONLINE       0     0     0
            ata-Samsung_SSD_750_EVO_500GB_S36SNWBH519880V-part3  ONLINE       0     0     0
            ata-Samsung_SSD_850_EVO_500GB_S2RBNX0JC88520H-part3  ONLINE       0     0     0

errors: No known data errors
root@Proxmox:~#

Output of zpool import rpool :
Code:
root@Proxmox:~# zpool import rpool
cannot import 'rpool': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name
root@Proxmox:~#
 

BeatOne

Member
Mar 4, 2020
Output of zfs get all:
Code:
root@Proxmox:~# zfs get all
NAME                                         PROPERTY              VALUE                          SOURCE
rpool                                        type                  filesystem                     -
rpool                                        creation              Thu Mar 12 20:49 2020          -
rpool                                        used                  7.20G                          -
rpool                                        available             442G                           -
rpool                                        referenced            104K                           -
rpool                                        compressratio         1.23x                          -
rpool                                        mounted               yes                            -
rpool                                        quota                 none                           default
rpool                                        reservation           none                           default
rpool                                        recordsize            128K                           default
rpool                                        mountpoint            /rpool                         default
rpool                                        sharenfs              off                            default
rpool                                        checksum              on                             default
rpool                                        compression           on                             local
rpool                                        atime                 off                            local
rpool                                        devices               on                             default
rpool                                        exec                  on                             default
rpool                                        setuid                on                             default
rpool                                        readonly              off                            default
rpool                                        zoned                 off                            default
rpool                                        snapdir               hidden                         default
rpool                                        aclinherit            restricted                     default
rpool                                        createtxg             1                              -
rpool                                        canmount              on                             default
rpool                                        xattr                 on                             default
rpool                                        copies                1                              default
rpool                                        version               5                              -
rpool                                        utf8only              off                            -
rpool                                        normalization         none                           -
rpool                                        casesensitivity       sensitive                      -
rpool                                        vscan                 off                            default
rpool                                        nbmand                off                            default
rpool                                        sharesmb              off                            default
rpool                                        refquota              none                           default
rpool                                        refreservation        none                           default
rpool                                        guid                  16021812935261962648           -
rpool                                        primarycache          all                            default
rpool                                        secondarycache        all                            default
rpool                                        usedbysnapshots       0B                             -
rpool                                        usedbydataset         104K                           -
rpool                                        usedbychildren        7.20G                          -
rpool                                        usedbyrefreservation  0B                             -
rpool                                        logbias               latency                        default
rpool                                        objsetid              54                             -
rpool                                        dedup                 off                            default
rpool                                        mlslabel              none                           default
rpool                                        sync                  standard                       local
rpool                                        dnodesize             legacy                         default
rpool                                        refcompressratio      1.00x                          -
rpool                                        written               104K                           -
rpool                                        logicalused           8.73G                          -
rpool                                        logicalreferenced     46K                            -
rpool                                        volmode               default                        default
rpool                                        filesystem_limit      none                           default
rpool                                        snapshot_limit        none                           default
rpool                                        filesystem_count      none                           default
rpool                                        snapshot_count        none                           default
rpool                                        snapdev               hidden                         default
rpool                                        acltype               off                            default
rpool                                        context               none                           default
rpool                                        fscontext             none                           default
rpool                                        defcontext            none                           default
rpool                                        rootcontext           none                           default






[... I deleted a part of this list ...]

root@Proxmox:~#
 

juju01

Member
May 16, 2020
What was the resolution to this? It is marked as solved. I am having the exact same problem: all containers are greyed out after a reboot and can't start. All on ZFS.
 

Elliott Partridge

Active Member
Oct 7, 2018
I just encountered this issue as well. What worked for me was the following:
  1. Delete the (empty) subvol mount point:
    rmdir /tank/vmdata/subvol-100-disk-0
  2. Mount the ZFS filesystem:
    zfs mount tank/vmdata/subvol-100-disk-0
This seems like a workaround rather than a proper fix, and I'm still not sure what the root cause is. But it worked in a pinch to get some containers back up and running.
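For later readers, those two steps can be sketched as a small script. The dataset name below is the one from this post (tank/vmdata/subvol-100-disk-0); on a default Proxmox install it would be something like rpool/data/subvol-100-disk-0, and the helper name remount_subvol is just for illustration. The rmdir step is safe because rmdir refuses to remove a non-empty directory, so real container data cannot be lost this way.

```shell
#!/bin/sh
# Sketch of the workaround above. DATASET is this post's dataset name;
# substitute your own (see 'pct config <vmid>' and 'zfs list').
DATASET="tank/vmdata/subvol-100-disk-0"
MNT="/$DATASET"

remount_subvol() {
    # Step 1: remove the stale, empty mount-point directory. rmdir
    # refuses to delete a non-empty directory, so no container data
    # can be lost by this step.
    if [ -d "$MNT" ]; then
        rmdir "$MNT" || return 1
    fi
    # Step 2: mount the ZFS filesystem; this recreates the mount point.
    zfs mount "$DATASET"
}

# Only run this on a host that actually has ZFS installed.
if command -v zfs >/dev/null 2>&1; then
    remount_subvol
fi
```

If rmdir refuses because the directory is not empty, the container data is most likely already there and the problem lies elsewhere.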
 
