PVE LXC Container won't boot up.

ExeSoler
Feb 24, 2023
Hi guys! I have Nextcloud running in one of my containers, but since two days ago this container won't start. I'm a bit of a noob at all of this, and I'm from Argentina so my English is rusty; please be patient.

My task log:
run_buffer: 322 Script exited with status 255
lxc_init: 844 Failed to run lxc.hook.pre-start for container "101"
__lxc_start: 2027 Failed to initialize container "101"
TASK ERROR: startup for container '101' failed

My syslog:
Feb 24 10:31:41 pve systemd[1]: Started PVE LXC Container: 101.
Feb 24 10:31:41 pve kernel: [ 308.245061] loop0: detected capacity change from 0 to 1363148800
Feb 24 10:31:41 pve kernel: [ 308.309930] EXT4-fs warning (device loop0): read_mmp_block:106: Error -74 while reading MMP block 9337
Feb 24 10:31:41 pve pvedaemon[1823]: startup for container '101' failed
Feb 24 10:31:41 pve pvedaemon[971]: <root@pam> end task UPID:pve:0000071F:00007826:63F8BC3C:vzstart:101:root@pam: startup for container '101' failed
Feb 24 10:31:42 pve systemd[1]: pve-container@101.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 10:31:42 pve systemd[1]: pve-container@101.service: Failed with result 'exit-code'.
Feb 24 10:31:42 pve systemd[1]: pve-container@101.service: Consumed 1.189s CPU time.

I think it's a corrupt block on the HDD. If I'm right, how can I fix it?
Thanks!
 
Hi,

Have you checked the debug log for CT 101? If not, you can run lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-CT101.log and then post the /tmp/lxc-CT101.log file here.

I think it's a corrupt block on the HDD. If I'm right, how can I fix it?
Do other CTs on the same storage start without any issues?

Can you also post the CT config (pct config 101)?
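
Roughly like this (the log path is just an example; any writable path works):

lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-CT101.log   # start CT 101 in the foreground with debug logging
grep -i error /tmp/lxc-CT101.log                     # a quick way to find the ERROR lines afterwards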
 
Hi! I ran the command and it returned this:
lxc-start: 101: ../src/lxc/conf.c: run_buffer: 322 Script exited with status 255
lxc-start: 101: ../src/lxc/start.c: lxc_init: 844 Failed to run lxc.hook.pre-start for container "101"
lxc-start: 101: ../src/lxc/start.c: __lxc_start: 2027 Failed to initialize container "101"
lxc-start: 101: ../src/lxc/conf.c: run_buffer: 322 Script exited with status 1
lxc-start: 101: ../src/lxc/start.c: lxc_end: 985 Failed to run lxc.hook.post-stop for container "101"
lxc-start: 101: ../src/lxc/tools/lxc_start.c: main: 306 The container failed to start
lxc-start: 101: ../src/lxc/tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options

I have only one other CT, running Tomcat, and it works fine.
pct config 101 returns this:
arch: amd64
cores: 4
features: nesting=1
hostname: nextcloud
memory: 4096
net0: name=eth0,bridge=vmbr0,gw=192.168.100.1,hwaddr=D6:D0:2C:DA:2F:23,ip=192.168.100.200/24,ip6=dhcp,rate=15,type=veth
onboot: 1
ostype: debian
rootfs: local1TB:101/vm-101-disk-0.raw,size=650G
swap: 4096
unprivileged: 1
unused0: local:101/vm-101-disk-0.raw
 
I forgot to send the log (/tmp/lxc-CT101.log):

lxc-start 101 20230228175730.349 INFO confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 101 20230228175730.349 INFO confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 101 20230228175730.350 INFO lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
lxc-start 101 20230228175730.350 INFO conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "101", config section "lxc"
lxc-start 101 20230228175731.419 DEBUG conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

lxc-start 101 20230228175731.940 DEBUG conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: command 'mount /dev/loop0 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

lxc-start 101 20230228175731.108 ERROR conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 255
lxc-start 101 20230228175731.108 ERROR start - ../src/lxc/start.c:lxc_init:844 - Failed to run lxc.hook.pre-start for container "101"
lxc-start 101 20230228175731.108 ERROR start - ../src/lxc/start.c:__lxc_start:2027 - Failed to initialize container "101"
lxc-start 101 20230228175731.108 INFO conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "101", config section "lxc"
lxc-start 101 20230228175731.610 INFO conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "101", config section "lxc"
lxc-start 101 20230228175732.199 DEBUG conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: umount: /var/lib/lxc/101/rootfs: not mounted

lxc-start 101 20230228175732.199 DEBUG conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: command 'umount --recursive -- /var/lib/lxc/101/rootfs' failed: exit code 1

lxc-start 101 20230228175732.212 ERROR conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 1
lxc-start 101 20230228175732.212 ERROR start - ../src/lxc/start.c:lxc_end:985 - Failed to run lxc.hook.post-stop for container "101"
lxc-start 101 20230228175732.212 ERROR lxc_start - ../src/lxc/tools/lxc_start.c:main:306 - The container failed to start
lxc-start 101 20230228175732.212 ERROR lxc_start - ../src/lxc/tools/lxc_start.c:main:311 - Additional information can be obtained by setting the --logfile and --logpriority options
 
Hi Moayad. I read the log, searched for a possible solution, and found a thread in this forum from someone with the same error: "wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error."

So I tried the solution from that thread: I ran pct fsck 101 and my CT is up and running again (roughly what I ran is below). But now I get a connection refused error.
I will try to fix that on my own, so I think you can close this thread.
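
From memory, the sequence was something like this (the CT has to be stopped before the fsck):

pct stop 101    # make sure the container is not running
pct fsck 101    # run a filesystem check on the container's root volume
pct start 101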

You were a great help, Moayad. Thx!
 
Hi,
what is the underlying storage of the container's disk, i.e. local1TB? If it is CIFS, you might be facing this bug: https://bugzilla.proxmox.com/show_bug.cgi?id=4499. In that case it will probably happen again the next time the container is stopped and require another fsck. If it is not CIFS and the issue happens again, please tell us, because that would mean the bug is more widespread.
 
Hi Fiona. Yeah, the CT is on an SSD, and the Nextcloud storage is on the local1TB HDD.

But I did reboot the CT a few times and hard-reset the server. I don't see any problem at all; it just runs fine. My whole family was able to log in with their phones and laptops. At first sight, no data was lost.
 
Hi Fiona. Yeah, the CT is on an SSD, and the Nextcloud storage is on the local1TB HDD.
Sorry, I meant what kind of storage is that, i.e. what does pvesm status show for its type?
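For example (the output is just illustrative; your storage names will differ):

pvesm status                              # lists every storage with its type, status, and usage
grep -A4 'local1TB' /etc/pve/storage.cfg  # or look at the storage definition directly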

But I did reboot the CT a few times and hard-reset the server. I don't see any problem at all; it just runs fine. My whole family was able to log in with their phones and laptops. At first sight, no data was lost.
Happy to hear :) So it likely was a different issue.
 
Did it say something like "restored superblock from backup"?
Yep, something like that. It found an error in a superblock and in some sectors, and fixed them automatically.
Unfortunately, I didn't take a screenshot.

Sorry. :<
 