haldarritam

Hello,
Ran into a problem on my Proxmox VE host. My LXC container, which is my go-to NAS, just won't boot after a reboot. Here's the error it's throwing at me:

Code:
run_buffer: 322 Script exited with status 32
lxc_init: 844 Failed to run lxc.hook.pre-start for container "100"
__lxc_start: 2027 Failed to initialize container "100"
TASK ERROR: startup for container '100' failed
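Exit status 32 is the generic mount failure code, so something is going wrong when Proxmox tries to mount the container's root filesystem. For reference, a more detailed debug log can be captured with something along these lines (flags may vary slightly by LXC version; the log path is arbitrary):

Code:
# Start the container in the foreground with debug logging written to a file
lxc-start -n 100 -F -l DEBUG -o /tmp/lxc.log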

Upon inspection, I noticed that the storage is completely full, which may have caused corruption, though I'm not certain. I'm open to deleting and recreating the container if necessary; however, preserving the data is important.

(Attached screenshot: local-hdd.jpg, showing the local-hdd storage usage)
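For reference, the same storage state can be checked from the shell. A sketch, assuming local-hdd is LVM-backed with a volume group of the same name (as the rootfs entry in the config suggests):

Code:
# Storage usage as Proxmox sees it
pvesm status
# LVM usage for the local-hdd volume group and its logical volumes
vgs local-hdd
lvs local-hdd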

Here’s the container config:

Code:
root@pve0:~# cat /etc/pve/lxc/100.conf
arch: amd64
cores: 1
features: nesting=1
hostname: deb-nas-hdd
lock: mounted
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:70:DA:A0,ip=dhcp,type=veth
onboot: 1
ostype: debian
parent: vzdump
rootfs: local-hdd:vm-100-disk-0,size=928G
startup: order=1
swap: 2048
unprivileged: 1

[vzdump]
#vzdump backup snapshot
arch: amd64
cores: 1
features: nesting=1
hostname: deb-nas-hdd
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:70:DA:A0,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local-hdd:vm-100-disk-0,size=928G
snapstate: delete
snaptime: 1708236029
startup: order=1
swap: 2048
unprivileged: 1
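Two things stand out in this config: lock: mounted (a previous mount or backup operation left the container locked) and snapstate: delete on the vzdump snapshot, which usually means a snapshot removal was interrupted, quite possibly by the storage filling up. If the stale lock gets in the way of further pct commands, it can be cleared; this only removes the lock flag and does not touch the data:

Code:
# Clear the stale 'mounted' lock so pct commands work again
pct unlock 100
# Optionally, once there is free space again, retry removing the half-deleted snapshot
# pct delsnapshot 100 vzdump --force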

Any tips on how to get this sorted without saying bye to my data would be super appreciated.
 
Thanks for replying!
This is the output log -

Code:
root@pve0:~# cat /tmp/lxc.log
lxc-start 100 20240406175044.924 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 100 20240406175044.924 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 100 20240406175044.925 INFO     lxccontainer - ../src/lxc/lxccontainer.c:do_lxcapi_start:998 - Set process title to [lxc monitor] /var/lib/lxc 100
lxc-start 100 20240406175044.925 DEBUG    lxccontainer - ../src/lxc/lxccontainer.c:wait_on_daemonized_start:859 - First child 40559 exited
lxc-start 100 20240406175044.925 INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
lxc-start 100 20240406175044.925 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "100", config section "lxc"
lxc-start 100 20240406175045.544 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: /dev/mapper/local--hdd-vm--100--disk--0 already mounted or mount point busy.
       dmesg(1) may have more information after failed mount system call.

lxc-start 100 20240406175045.545 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 100 lxc pre-start produced output: command 'mount /dev/dm-14 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

lxc-start 100 20240406175045.558 ERROR    conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 32
lxc-start 100 20240406175045.559 ERROR    start - ../src/lxc/start.c:lxc_init:844 - Failed to run lxc.hook.pre-start for container "100"
lxc-start 100 20240406175045.559 ERROR    start - ../src/lxc/start.c:__lxc_start:2027 - Failed to initialize container "100"
lxc-start 100 20240406175045.559 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "100", config section "lxc"
lxc-start 100 20240406175046.613 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "100", config section "lxc"
lxc-start 100 20240406175046.540 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 100 lxc post-stop produced output: umount: /var/lib/lxc/100/rootfs: not mounted

lxc-start 100 20240406175046.540 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 100 lxc post-stop produced output: command 'umount --recursive -- /var/lib/lxc/100/rootfs' failed: exit code 1

lxc-start 100 20240406175046.553 ERROR    conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 1
lxc-start 100 20240406175046.553 ERROR    start - ../src/lxc/start.c:lxc_end:985 - Failed to run lxc.hook.post-stop for container "100"
lxc-start 100 20240406175046.553 ERROR    lxccontainer - ../src/lxc/lxccontainer.c:wait_on_daemonized_start:870 - No such file or directory - Failed to receive the container state
lxc-start 100 20240406175046.553 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:306 - The container failed to start
lxc-start 100 20240406175046.553 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:309 - To get more details, run the container in foreground mode
lxc-start 100 20240406175046.553 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:311 - Additional information can be obtained by setting the --logfile and --logpriority options
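The "already mounted or mount point busy" message from the pre-start hook suggests something is still holding either the logical volume or the staged mount point. A quick way to check (device path copied from the log above):

Code:
# Is the container's LV mounted anywhere?
findmnt /dev/mapper/local--hdd-vm--100--disk--0
# Is the staged mount point busy?
findmnt /var/lib/lxc/.pve-staged-mounts/rootfs
# Kernel messages from the failed mount attempt
dmesg | tail -n 30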
 
I already tried running that earlier; it did not work -
Code:
root@pve0:~# pct fsck 100
fsck from util-linux 2.38.1
MMP check failed: If you are sure the filesystem is not in use on any node, run:
'tune2fs -f -E clear_mmp /dev/mapper/local--hdd-vm--100--disk--0'
MMP_block:
    mmp_magic: 0x4d4d50
    mmp_check_interval: 7
    mmp_sequence: e24d4d50
    mmp_update_date: Sat Apr  6 12:10:27 2024
    mmp_update_time: 1712419827
    mmp_node_name: pve0
    mmp_device_name: /dev/mapper/local--hdd-vm--100--
fsck.ext4: MMP: e2fsck being run while checking MMP block

/dev/mapper/local--hdd-vm--100--disk--0: ********** WARNING: Filesystem still has errors **********

command 'fsck -a -l /dev/local-hdd/vm-100-disk-0' failed: exit code 12
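The MMP (multi-mount protection) failure means e2fsck believes the filesystem is, or recently was, in use elsewhere, so it refuses to check it. The fsck output itself names the next step; only run it once you are certain the device is not mounted anywhere (see the findmnt check above), and ideally after copying off anything you still can:

Code:
# Only if the filesystem is definitely not in use on any node
tune2fs -f -E clear_mmp /dev/mapper/local--hdd-vm--100--disk--0
# Then re-run the filesystem check
pct fsck 100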
 
It's probable you will not get this container to boot, and the shortest path to recovery is to restore from backup. In any event, I'd suggest taking a closer look at your underlying storage to make sure it's functioning properly; file system corruption doesn't occur in a vacuum.
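For example, a SMART health check on the physical disk backing local-hdd, plus a scan of the kernel log for I/O errors (a sketch; /dev/sdX is a placeholder for your actual device):

Code:
# SMART attributes and self-test status for the backing disk
smartctl -a /dev/sdX
# Recent kernel-level I/O errors, if any
dmesg | grep -iE 'i/o error|blk_update_request' | tail -n 20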

Any further active manipulation of the file system has as much potential to cause more damage as to fix anything, so if you are still expecting to recover data, I'd hold off making changes to the logical volume or the file system on it. At this point, the question becomes what, in practice, you'd like to accomplish.
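If the data matters, the safest first move is to get a copy off the volume before attempting any repair. A rough sketch, assuming the LV can still be activated and mounted read-only (the MMP state may still refuse the mount), with the destination path a placeholder for somewhere with enough free space:

Code:
# Activate the LV if needed and mount it read-only
lvchange -ay local-hdd/vm-100-disk-0
mkdir -p /mnt/vm100-ro
mount -o ro /dev/mapper/local--hdd-vm--100--disk--0 /mnt/vm100-ro
# Copy the data to a destination with enough space (path is a placeholder)
rsync -aHAX /mnt/vm100-ro/ /mnt/recovery-target/
umount /mnt/vm100-ro
# Even more conservative: image the whole LV block-for-block to another disk first
# dd if=/dev/mapper/local--hdd-vm--100--disk--0 of=/path/to/vm-100-disk-0.img bs=4M status=progress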
 
