newly installed containers won't start

jbedgood

New Member
Nov 10, 2020
I've just installed Proxmox and tried several container images. None of them will start on either of my two clustered servers.

Here is the error I get.

Code:
root@kvm2:~# lxc-start -n 103 -F -l DEBUG -o /tmp/lxc.log
lxc-start: 103: conf.c: run_buffer: 323 Script exited with status 255
lxc-start: 103: start.c: lxc_init: 804 Failed to run lxc.hook.pre-start for container "103"
lxc-start: 103: start.c: __lxc_start: 1903 Failed to initialize container "103"
lxc-start: 103: conf.c: run_buffer: 323 Script exited with status 1
lxc-start: 103: start.c: lxc_end: 971 Failed to run lxc.hook.post-stop for container "103"
lxc-start: 103: tools/lxc_start.c: main: 308 The container failed to start
lxc-start: 103: tools/lxc_start.c: main: 314 Additional information can be obtained by setting the --logfile and --logpriority options

Here is the output from the log file:

Code:
root@kvm2:~# tail /tmp/lxc.log
lxc-start 103 20201110171257.178 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "103", config section "lxc"
lxc-start 103 20201110171257.681 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "103", config section "lxc"
lxc-start 103 20201110171258.411 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 103 lxc post-stop produced output: umount: /var/lib/lxc/103/rootfs: not mounted

lxc-start 103 20201110171258.411 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 103 lxc post-stop produced output: command 'umount --recursive -- /var/lib/lxc/103/rootfs' failed: exit code 1

lxc-start 103 20201110171258.424 ERROR    conf - conf.c:run_buffer:323 - Script exited with status 1
lxc-start 103 20201110171258.424 ERROR    start - start.c:lxc_end:971 - Failed to run lxc.hook.post-stop for container "103"
lxc-start 103 20201110171258.424 ERROR    lxc_start - tools/lxc_start.c:main:308 - The container failed to start
lxc-start 103 20201110171258.424 ERROR    lxc_start - tools/lxc_start.c:main:314 - Additional information can be obtained by setting the --logfile and --logpriority options

I'm new to containers and Proxmox, and I'm confused as to why they aren't starting when I can install VMs and they work fine. Any help would be greatly appreciated.
 
hi,

can you please post the full log and your container configuration? (pct config CTID)
 
hi,

can you please post the full log and your container configuration? (pct config CTID)
That was the complete log from the lxc.log I created while trying to troubleshoot it. I've rebooted since then, so that log is gone, but here is the config.

Code:
root@kvm2:~# pct config 103
arch: amd64
cores: 1
hostname: CT-Test
memory: 512
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=CE:7A:21:6A:4F:9E,ip=dhcp,type=veth
ostype: ubuntu
rootfs: Syn1:103/vm-103-disk-0.raw,size=8G
swap: 512
unprivileged: 1
 
tail only outputs the last bits of the file. can you run lxc-start -n 103 -lDEBUG -o/tmp/lxc.log and attach the file here?

if i had to guess - your storage could be the culprit (NFS?)
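
a minimal sketch of capturing the whole log from scratch (assuming CT 103), for completeness:

Code:
rm -f /tmp/lxc.log                          # start from an empty log file
lxc-start -n 103 -lDEBUG -o /tmp/lxc.log    # attempt the start with full debug logging
cat /tmp/lxc.log                            # view (or attach) the entire file instead of only the tail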
 
tail only outputs the last bits of the file. can you run lxc-start -n 103 -lDEBUG -o/tmp/lxc.log and attach the file here?

if i had to guess - your storage could be the culprit (NFS?)
Here is the complete log. Not much in it, like before. I can mount the raw file system and unmount it.

Code:
root@kvm2:/tmp# cat lxc.log
lxc-start 103 20201111142751.743 INFO     confile - confile.c:set_config_idmaps:2051 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 103 20201111142751.743 INFO     confile - confile.c:set_config_idmaps:2051 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 103 20201111142751.744 INFO     lxccontainer - lxccontainer.c:do_lxcapi_start:972 - Set process title to [lxc monitor] /var/lib/lxc 103
lxc-start 103 20201111142751.744 DEBUG    lxccontainer - lxccontainer.c:wait_on_daemonized_start:830 - First child 135935 exited
lxc-start 103 20201111142751.745 INFO     lsm - lsm/lsm.c:lsm_init:29 - LSM security driver AppArmor
lxc-start 103 20201111142751.745 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "103", config section "lxc"
lxc-start 103 20201111142836.591 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 103 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: can't read superblock on /dev/loop0.

lxc-start 103 20201111142836.617 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 103 lxc pre-start produced output: command 'mount /dev/loop0 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

lxc-start 103 20201111142836.630 ERROR    conf - conf.c:run_buffer:323 - Script exited with status 255
lxc-start 103 20201111142836.630 ERROR    start - start.c:lxc_init:804 - Failed to run lxc.hook.pre-start for container "103"
lxc-start 103 20201111142836.630 ERROR    start - start.c:__lxc_start:1903 - Failed to initialize container "103"
lxc-start 103 20201111142836.630 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "103", config section "lxc"
lxc-start 103 20201111142837.133 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "103", config section "lxc"
lxc-start 103 20201111142837.859 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 103 lxc post-stop produced output: umount: /var/lib/lxc/103/rootfs: not mounted

lxc-start 103 20201111142837.859 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 103 lxc post-stop produced output: command 'umount --recursive -- /var/lib/lxc/103/rootfs' failed: exit code 1

lxc-start 103 20201111142837.871 ERROR    conf - conf.c:run_buffer:323 - Script exited with status 1
lxc-start 103 20201111142837.871 ERROR    start - start.c:lxc_end:971 - Failed to run lxc.hook.post-stop for container "103"
lxc-start 103 20201111142837.871 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:843 - No such file or directory - Failed to receive the container state
lxc-start 103 20201111142837.871 ERROR    lxc_start - tools/lxc_start.c:main:308 - The container failed to start
lxc-start 103 20201111142837.871 ERROR    lxc_start - tools/lxc_start.c:main:311 - To get more details, run the container in foreground mode
lxc-start 103 20201111142837.871 ERROR    lxc_start - tools/lxc_start.c:main:314 - Additional information can be obtained by setting the --logfile and --logpriority options
root@kvm2:/tmp#
 
could you try moving your container disk to a local storage and see if it works?

that way we know if the problem is storage related.
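
a minimal sketch of how that could be done on the CLI (assuming 'local-lvm' as the target, and that the move_volume subcommand is available in your PVE version - the GUI move does the same):

Code:
pct stop 103                           # the container must not be running
pct move_volume 103 rootfs local-lvm   # copy the rootfs volume to local-lvm and point the CT at it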
 
could you try moving your container disk to a local storage and see if it works?

that way we know if the problem is storage related.
When trying to move the disk, this is what I got.

Code:
Logical volume "vm-103-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks:    4096/2097152               done                           
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: ec07331f-2bda-40e5-951e-3e0eae884045
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables:  0/64     done                           
Writing inode tables:  0/64     done                           
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information:  0/64     done

mount: /var/lib/lxc/103/.copy-volume-2: cannot mount /dev/loop0 read-only.
Specified filename /var/lib/lxc/103/.copy-volume-1 does not exist.
  Logical volume "vm-103-disk-0" successfully removed
TASK ERROR: command 'mount -o ro /dev/loop0 /var/lib/lxc/103/.copy-volume-2//' failed: exit code 32
 
So, based on the thought that it might be related to my storage, I created another container, put it on local storage, and it works fine. How do I determine what needs to be done to get the containers to work on my remote storage?
 
I created another container and put it on local storage and it works fine.
okay - as i thought.

How do I determine what needs to be done to get the containers to work on my remote storage?
could you run fsck on your storage to check if the disk partitions aren't broken?
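
for reference, pct has a built-in check - a rough sketch (assuming the container is stopped):

Code:
pct fsck 103   # run fsck on the root volume of CT 103 (the container must be stopped)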
 
okay - as i thought.


could you run fsck on your storage to check if the disk partitions aren't broken?
I'm not sure that is the issue, since I have two different network storage devices and both of them exhibit the same issue: containers with disks on them won't start. I am still researching whether fsck can be run on the remote network storage, but if anyone can think of any other reason, I'd greatly appreciate it.
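
For reference, one way to check the raw image directly on the CIFS mount might look roughly like this (the path is assumed from the storage.cfg and rootfs entries above, so adjust as needed):

Code:
losetup -f --show /mnt/pve/Syn1/images/103/vm-103-disk-0.raw   # attach the image to a free loop device and print its name
fsck.ext4 -f /dev/loopX                                        # replace loopX with the device printed above
losetup -d /dev/loopX                                          # detach the loop device when done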
 
I'm not sure that is the issue, since I have two different network storage devices and both of them exhibit the same issue: containers with disks on them won't start
could you post your /etc/pve/storage.cfg file contents?
 
could you post your /etc/pve/storage.cfg file contents?
Code:
root@kvm2:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

cifs: Syn2
        path /mnt/pve/Syn2
        server 172.16.0.22
        share prox
        content backup,snippets,images,iso,rootdir,vztmpl
        domain workgroup
        maxfiles 1
        username *

cifs: Syn1
        path /mnt/pve/Syn1
        server 172.16.0.21
        share vm1
        content rootdir,iso,backup,images,snippets,vztmpl
        domain workgroup
        maxfiles 1
        username *

root@kvm2:~
 
cifs: Syn2
        path /mnt/pve/Syn2
        server 172.16.0.22
        share prox
        content backup,snippets,images,iso,rootdir,vztmpl
        domain workgroup
        maxfiles 1
        username *

cifs: Syn1
        path /mnt/pve/Syn1
        server 172.16.0.21
        share vm1
        content rootdir,iso,backup,images,snippets,vztmpl
        domain workgroup
        maxfiles 1
        username *
the cifs server IPs 172.16.0.21 and 22, where are they hosted?

could you also post some info about your network config? /etc/network/interfaces, ip a && ip r would be useful. (anonymize where necessary)
 
the cifs server IPs 172.16.0.21 and 22, where are they hosted?

could you also post some info about your network config? /etc/network/interfaces, ip a && ip r would be useful. (anonymize where necessary)
The CIFS servers are my local Synology NAS devices. They are on the same switch as my Proxmox servers.

/etc/network/interfaces
Code:
auto lo
iface lo inet loopback

iface enp1s0f0 inet manual

iface enp1s0f1 inet manual

iface ens6f0 inet manual

iface ens6f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 172.16.0.31/23
        gateway 172.16.0.1
        bridge-ports enp1s0f0 enp1s0f1
        bridge-stp off
        bridge-fd 0

root@kvm2:~#

ip a

Code:
root@kvm2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 30:f7:0d:99:c2:70 brd ff:ff:ff:ff:ff:ff
3: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 30:f7:0d:99:c2:72 brd ff:ff:ff:ff:ff:ff
4: ens6f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 70:ca:9b:03:ad:b2 brd ff:ff:ff:ff:ff:ff
5: ens6f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 70:ca:9b:03:ad:b4 brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 30:f7:0d:99:c2:70 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.31/23 brd 172.16.1.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::32f7:dff:fe99:c270/64 scope link
       valid_lft forever preferred_lft forever
root@kvm2:~#

ip r
Code:
root@kvm2:~# ip r
default via 172.16.0.1 dev vmbr0 onlink
172.16.0.0/23 dev vmbr0 proto kernel scope link src 172.16.0.31
root@kvm2:~#
 
Code:
auto lo
iface lo inet loopback

iface enp1s0f0 inet manual

iface enp1s0f1 inet manual

iface ens6f0 inet manual

iface ens6f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 172.16.0.31/23
        gateway 172.16.0.1
        bridge-ports enp1s0f0 enp1s0f1
        bridge-stp off
        bridge-fd 0

you have two bridge ports defined with the same IP - probably you want to have a bond setup instead of that. [0]

my guess is that the bridge config with 2 ports is causing connectivity issues.

[0]: https://pve.proxmox.com/wiki/Network_Configuration#_linux_bond
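
a bond under the bridge could look roughly like this (a sketch based on your interface names - keep the IP on the bridge so guests can still attach to vmbr0, and match the bond-mode to whatever your switch supports):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 172.16.0.31/23
        gateway 172.16.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0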
 
Code:
auto lo
iface lo inet loopback

iface enp1s0f0 inet manual

iface enp1s0f1 inet manual

iface ens6f0 inet manual

iface ens6f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 172.16.0.31/23
        gateway 172.16.0.1
        bridge-ports enp1s0f0 enp1s0f1
        bridge-stp off
        bridge-fd 0

you have two bridge ports defined with the same IP - probably you want to have a bond setup instead of that. [0]

my guess is that the bridge config with 2 ports is causing connectivity issues.

[0]: https://pve.proxmox.com/wiki/Network_Configuration#_linux_bond
Maybe I'm missing something, but I see one bridge configured with the 172.16.0.31/23 address, and that is also what is showing in the GUI. Here is a picture of it.

bridge.png
 
yes you have one bridge but multiple ports: bridge-ports enp1s0f0 enp1s0f1

this can cause network loops - usually better to use a bonding setup.
 
You are correct. I don't know why I was thinking the bridge was the same as a bond. I have now configured a bond and put my interfaces into it, but the container still fails to start. Not sure what to look at now.
 
could you post your new network configuration details and a fresh debug log from the failing container?
 
lxc.log
Code:
root@kvm2:~# cat /tmp/lxc.log
lxc-start 103 20201112160507.510 INFO     confile - confile.c:set_config_idmaps:2051 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 103 20201112160507.510 INFO     confile - confile.c:set_config_idmaps:2051 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 103 20201112160507.511 INFO     lxccontainer - lxccontainer.c:do_lxcapi_start:972 - Set process title to [lxc monitor] /var/lib/lxc 103
lxc-start 103 20201112160507.511 DEBUG    lxccontainer - lxccontainer.c:wait_on_daemonized_start:830 - First child 9475 exited
lxc-start 103 20201112160507.512 INFO     lsm - lsm/lsm.c:lsm_init:29 - LSM security driver AppArmor
lxc-start 103 20201112160507.512 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "103", config section "lxc"
lxc-start 103 20201112160552.349 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 103 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: can't read superblock on /dev/loop0.

lxc-start 103 20201112160552.370 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 103 lxc pre-start produced output: command 'mount /dev/loop0 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

lxc-start 103 20201112160552.383 ERROR    conf - conf.c:run_buffer:323 - Script exited with status 255
lxc-start 103 20201112160552.383 ERROR    start - start.c:lxc_init:804 - Failed to run lxc.hook.pre-start for container "103"
lxc-start 103 20201112160552.383 ERROR    start - start.c:__lxc_start:1903 - Failed to initialize container "103"
lxc-start 103 20201112160552.383 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "103", config section "lxc"
lxc-start 103 20201112160552.886 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "103", config section "lxc"
lxc-start 103 20201112160553.611 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 103 lxc post-stop produced output: umount: /var/lib/lxc/103/rootfs: not mounted

lxc-start 103 20201112160553.611 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 103 lxc post-stop produced output: command 'umount --recursive -- /var/lib/lxc/103/rootfs' failed: exit code 1

lxc-start 103 20201112160553.622 ERROR    conf - conf.c:run_buffer:323 - Script exited with status 1
lxc-start 103 20201112160553.622 ERROR    start - start.c:lxc_end:971 - Failed to run lxc.hook.post-stop for container "103"
lxc-start 103 20201112160553.622 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:843 - No such file or directory - Failed to receive the container state
lxc-start 103 20201112160553.622 ERROR    lxc_start - tools/lxc_start.c:main:308 - The container failed to start
lxc-start 103 20201112160553.622 ERROR    lxc_start - tools/lxc_start.c:main:311 - To get more details, run the container in foreground mode
lxc-start 103 20201112160553.622 ERROR    lxc_start - tools/lxc_start.c:main:314 - Additional information can be obtained by setting the --logfile and --logpriority options
root@kvm2:~#

/etc/network/interfaces
Code:
root@kvm2:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp1s0f0 inet manual

iface enp1s0f1 inet manual

iface ens6f0 inet manual

iface ens6f1 inet manual

auto bond0
iface bond0 inet static
        address 172.16.0.31/23
        gateway 172.16.0.1
        bond-slaves enp1s0f1 enp1s0f0
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface vmbr0 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0

root@kvm2:~#

ip a
Code:
root@kvm2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 30:f7:0d:99:c2:72 brd ff:ff:ff:ff:ff:ff
3: enp1s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 30:f7:0d:99:c2:72 brd ff:ff:ff:ff:ff:ff
4: ens6f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 70:ca:9b:03:ad:b2 brd ff:ff:ff:ff:ff:ff
5: ens6f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 70:ca:9b:03:ad:b4 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 30:f7:0d:99:c2:72 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.31/23 brd 172.16.1.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::32f7:dff:fe99:c272/64 scope link
       valid_lft forever preferred_lft forever
root@kvm2:~#

ip r
Code:
root@kvm2:~# ip r
default via 172.16.0.1 dev bond0 onlink
172.16.0.0/23 dev bond0 proto kernel scope link src 172.16.0.31
root@kvm2:~#
 
