Unable to start most LXC containers after rebooting (run_buffer 322)

julian-pe

Dear all,

I have 10 different LXC containers running. After a system reboot, only containers 100 and 101 will start; all the others fail with the same error message:
Code:
run_buffer: 322 Script exited with status 32
lxc_init: 844 Failed to run lxc.hook.pre-start for container "102"
__lxc_start: 2027 Failed to initialize container "102"
TASK ERROR: startup for container '102' failed

For further debugging:

Code:
root@pve:~# lxc-start -n 102 -F -lDEBUG -o lxc-102.log
lxc-start 102 20230403111028.711 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 102 20230403111028.711 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 102 20230403111028.712 INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
lxc-start 102 20230403111028.712 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "102", config section "lxc"
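
The log ends right at the pre-start hook, so the hook's actual error message is not captured above. Presumably it can be surfaced by starting the container through pct with debugging enabled (a sketch; output omitted):

Code:
root@pve:~# pct start 102 --debug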

Code:
root@pve:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-helper: 7.3-4
pve-kernel-5.15: 7.2-14
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-1
lxcfs: 5.0.3-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-3
pve-ha-manager: 3.5.1
pve-i18n: 2.8-2
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Code:
root@pve:~# pct config 102
arch: amd64
cores: 2
description: # Unifi LXC%0A### https://tteck.github.io/Proxmox/%0A<a href='https://ko-fi.com/D1D7EP4GF'><img src='https://img.shields.io/badge/%E2%98%95-Buy me a coffee-red' /></a>%0A
features: nesting=1,keyctl=1
hostname: unifi
memory: 2048
net0: name=eth0,bridge=vmbr0,hwaddr=AE:10:48:32:5D:A0,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-102-disk-0,size=8G
swap: 512
unprivileged: 1


Code:
root@pve:~# cat  /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
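
Since all container disks live on the local-lvm thin pool, it may be worth ruling out an exhausted pool first; an overfull thin pool can corrupt the filesystems on its volumes. A minimal check, assuming the pool name from the config above:

Code:
root@pve:~# lvs pve/data
# Data% and Meta% should both be clearly below 100
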
Code:
root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  3.8G     0  3.8G   0% /dev
tmpfs                 772M  1.3M  771M   1% /run
/dev/mapper/pve-root   55G  8.9G   43G  18% /
tmpfs                 3.8G   46M  3.8G   2% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/sda2             511M  328K  511M   1% /boot/efi
/dev/fuse             128M   20K  128M   1% /etc/pve
tmpfs                 772M     0  772M   0% /run/user/0

Code:
root@pve:~# pct mount 102
mount: /var/lib/lxc/102/rootfs: wrong fs type, bad option, bad superblock on /dev/mapper/pve-vm--102--disk--0, missing codepage or helper program, or other error.
mounting container failed
command 'mount /dev/dm-8 /var/lib/lxc/102/rootfs//' failed: exit code 32
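
The "bad superblock" message points at filesystem damage on the volume rather than a missing block device. A cautious next step (a sketch, assuming the rootfs is ext4, the Proxmox default for containers) is a read-only check before any repair attempt:

Code:
root@pve:~# fsck.ext4 -n /dev/mapper/pve-vm--102--disk--0
# or let Proxmox pick the right fsck for the volume (container must be stopped):
root@pve:~# pct fsck 102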

Code:
root@pve:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 223.6G  0 disk 
├─sda1                         8:1    0  1007K  0 part 
├─sda2                         8:2    0   512M  0 part /boot/efi
└─sda3                         8:3    0 223.1G  0 part 
  ├─pve-swap                 253:0    0     7G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0  55.8G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   1.4G  0 lvm  
  │ └─pve-data-tpool         253:4    0 141.4G  0 lvm  
  │   ├─pve-data             253:5    0 141.4G  1 lvm  
  │   ├─pve-vm--100--disk--0 253:6    0     4G  0 lvm  
  │   ├─pve-vm--101--disk--0 253:7    0     2G  0 lvm  
  │   ├─pve-vm--102--disk--0 253:8    0     8G  0 lvm  
  │   ├─pve-vm--104--disk--0 253:9    0     8G  0 lvm  
  │   ├─pve-vm--105--disk--0 253:10   0    10G  0 lvm  
  │   ├─pve-vm--106--disk--0 253:11   0     8G  0 lvm  
  │   ├─pve-vm--107--disk--0 253:12   0     8G  0 lvm  
  │   ├─pve-vm--109--disk--0 253:13   0     4G  0 lvm  
  │   ├─pve-vm--103--disk--0 253:14   0     8G  0 lvm  
  │   ├─pve-vm--108--disk--0 253:15   0     8G  0 lvm  
  │   ├─pve-vm--110--disk--0 253:16   0     2G  0 lvm  
  │   └─pve-vm--111--disk--0 253:17   0     2G  0 lvm  
  └─pve-data_tdata           253:3    0 141.4G  0 lvm  
    └─pve-data-tpool         253:4    0 141.4G  0 lvm  
      ├─pve-data             253:5    0 141.4G  1 lvm  
      ├─pve-vm--100--disk--0 253:6    0     4G  0 lvm  
      ├─pve-vm--101--disk--0 253:7    0     2G  0 lvm  
      ├─pve-vm--102--disk--0 253:8    0     8G  0 lvm  
      ├─pve-vm--104--disk--0 253:9    0     8G  0 lvm  
      ├─pve-vm--105--disk--0 253:10   0    10G  0 lvm  
      ├─pve-vm--106--disk--0 253:11   0     8G  0 lvm  
      ├─pve-vm--107--disk--0 253:12   0     8G  0 lvm  
      ├─pve-vm--109--disk--0 253:13   0     4G  0 lvm  
      ├─pve-vm--103--disk--0 253:14   0     8G  0 lvm  
      ├─pve-vm--108--disk--0 253:15   0     8G  0 lvm  
      ├─pve-vm--110--disk--0 253:16   0     2G  0 lvm  
      └─pve-vm--111--disk--0 253:17   0     2G  0 lvm
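
All thin volumes show up in lsblk, so the block devices themselves are intact. Whether the kernel still sees a filesystem signature on the failing volume can be probed with blkid (empty output, or a missing TYPE= field, would support the bad-superblock theory):

Code:
root@pve:~# blkid /dev/mapper/pve-vm--102--disk--0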

Please help me figure out which steps I need to take to get all my LXCs running again.

Thank you very much in advance
 
To begin, I would suggest updating Proxmox to its latest version, which is currently 7.4-3.
 
I also have this issue when upgrading to the optional 6.2 kernel. After downgrading back to 5.15 or 5.19, all LXCs start again.

But I have also observed this behavior with PVE 7.3 and 7.2 with the optional 6.1 kernel.

Code:
root@pve2:~# pct start 111 --debug
run_buffer: 322 Script exited with status 32
lxc_init: 844 Failed to run lxc.hook.pre-start for container "111"
__lxc_start: 2027 Failed to initialize container "111"
d 0 hostid 100000 range 65536
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "111", config section "lxc"
DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 111 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: wrong fs type, bad option, bad superblock on /dev/mapper/pve-vm--111--disk--0, missing codepage or helper program, or other error.

DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 111 lxc pre-start produced output: command 'mount -o noacl /dev/dm-9 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

ERROR    conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 32
ERROR    start - ../src/lxc/start.c:lxc_init:844 - Failed to run lxc.hook.pre-start for container "111"
ERROR    start - ../src/lxc/start.c:__lxc_start:2027 - Failed to initialize container "111"
INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "111", config section "lxc"
startup for container '111' failed

PS: The LXC container is a Proxmox Mail Gateway instance.
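
For what it's worth, the failing command in the hook output is mount -o noacl. A quick manual test (a sketch; /mnt/test is just an arbitrary mount point) would be to retry the same mount without the noacl option; if that succeeds, the 6.2 kernel's removal of the deprecated ext4 noacl mount option is the likely culprit:

Code:
root@pve2:~# mkdir -p /mnt/test
root@pve2:~# mount /dev/mapper/pve-vm--111--disk--0 /mnt/test
root@pve2:~# umount /mnt/test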

Code:
root@pve2:~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 6.2.9-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-1
pve-kernel-5.15.104-1-pve: 5.15.104-1
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Using LVM
 

How can I manually downgrade to the last working kernel?
 
How can I manually downgrade to the last working kernel?
You should uninstall pve-kernel-6.2, e.g. apt-get purge pve-kernel-6.2*, then simply reboot; afterwards uname -a should display something like this:

Linux my-pve-host 5.19.17-2-pve #1 SMP PREEMPT_DYNAMIC PVE 5.19.17-2 (Sat, 28 Jan 2023 16:40:25) x86_64 GNU/Linux

Code:
root@my-pve-host:~# apt-get purge pve-kernel-6.2*
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'pve-kernel-6.2.9-1-pve' for glob 'pve-kernel-6.2*'
Note, selecting 'pve-kernel-6.2.6-1-pve' for glob 'pve-kernel-6.2*'
Note, selecting 'pve-kernel-6.2.2-1-pve' for glob 'pve-kernel-6.2*'
Note, selecting 'pve-kernel-6.2' for glob 'pve-kernel-6.2*'
Package 'pve-kernel-6.2.2-1-pve' is not installed, so not removed
The following packages will be REMOVED:
  pve-kernel-6.2* pve-kernel-6.2.6-1-pve* pve-kernel-6.2.9-1-pve*
0 upgraded, 0 newly installed, 3 to remove and 0 not upgraded.
After this operation, 975 MB disk space will be freed.
Do you want to continue? [Y/n]
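
If you'd rather keep the 6.2 kernel installed for later testing, pinning an older kernel as the boot default is an alternative (a sketch; the version string must match one shown by kernel list):

Code:
root@my-pve-host:~# proxmox-boot-tool kernel list
root@my-pve-host:~# proxmox-boot-tool kernel pin 5.15.104-1-pve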
 
Unfortunately, I'm already on kernel 5.15:

Code:
root@pve:~# uname -a
Linux pve 5.15.74-1-pve #1 SMP PVE 5.15.74-1 (Mon, 14 Nov 2022 20:17:15 +0100) x86_64 GNU/Linux

So that was unfortunately not the cause of my LXC errors.
 
To begin, I would suggest updating Proxmox to its latest version, which is currently 7.4-3.

What are the reasons why the problem occurs on a particular version of Proxmox, while another (sometimes earlier) version works fine?
 
