[SOLVED] Error initializing my Proxmox containers

jmcordero

New Member
May 10, 2023
Hi, a few days ago a power failure caused my Proxmox host to shut down abruptly, and since then none of the containers I have created will start.
When I try to start them, I get the following error:

Error: startup for container "104" failed (the container number doesn't matter; the same thing happens with all of them)

When I open the task log, it shows this:
Code:
run_buffer: 316 Script exited with status 32
lxc_init: 816 Failed to run lxc.hook.pre-start for container "104"
__lxc_start: 2007 Failed to initialize container "104"
TASK ERROR: startup for container '104' failed

My Proxmox VE version is 7.0-11.
I'm new to this and I hope you can help me. Thanks in advance and best regards.
 
Hi,
please post the output of pct start 104 --debug and the journal since boot (journalctl -b > journal.txt).
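For reference, both can be run from the node's shell; roughly like this (the journal file can then be attached to your reply):
Code:
# Verbose start attempt; prints the LXC debug log including the failing pre-start hook:
pct start 104 --debug

# Dump the journal since the last boot into a file for attaching:
journalctl -b > journal.txt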

Also, I would suggest upgrading to the latest PVE 7.4 release.
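That upgrade is the usual apt route within the 7.x series; a rough sketch, assuming the proper PVE package repositories are already configured:
Code:
apt update
apt full-upgrade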

Edit: Also see https://forum.proxmox.com/threads/p...t-fail-failed-to-initialize-container.114699/
Code:
run_buffer: 316 Script exited with status 32
lxc_init: 816 Failed to run lxc.hook.pre-start for container "104"
__lxc_start: 2007 Failed to initialize container "104"
pve-prestart-hook" for container "104", config section "lxc"
DEBUG    conf - conf.c:run_buffer:305 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 104 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: can't read superblock on /dev/mapper/pve-vm--104--disk--1.

DEBUG    conf - conf.c:run_buffer:305 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 104 lxc pre-start produced output: command 'mount /dev/dm-8 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

ERROR    conf - conf.c:run_buffer:316 - Script exited with status 32
ERROR    start - start.c:lxc_init:816 - Failed to run lxc.hook.pre-start for container "104"
ERROR    start - start.c:__lxc_start:2007 - Failed to initialize container "104"
INFO     conf - conf.c:run_script_argv:332 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "104", config section "lxc"
startup for container '104' failed

This is the output of pct start 104 --debug.
 
can't read superblock on /dev/mapper/pve-vm--104--disk--1.
Looks like your container root filesystem got corrupted.

Also, from the journal
Code:
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 0 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 21495962 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 21495818 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 11 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 18874400 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 6815760 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 4 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 6815986 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 6816181 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 6824464 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 6824250 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 7340048 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 7340241 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 7340032 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 7864336 in log
May 10 19:04:06 pve3 kernel: JBD2: Invalid checksum recovering data block 7864621 in log
May 10 19:04:06 pve3 kernel: JBD2: recovery failed
It seems recovery of the filesystem failed.

Can you also post the container config (pct config 104)?
You might want to try and run a filesystem check on the volume or directly restore from backup.
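For reference, a restore from an existing vzdump archive would look roughly like this (just a sketch; the archive name below is purely illustrative, and --force overwrites the current, corrupted volume):
Code:
# Restore container 104 from a vzdump archive onto the same storage,
# overwriting the existing (corrupted) container:
pct restore 104 /var/lib/vz/dump/vzdump-lxc-104-2023_05_01-02_00_00.tar.zst --storage local-lvm --force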

Do you get the same error for all the other containers as well?
 
Yes, all containers throw the same error.
Code:
root@pve3:/# pct config 104
arch: amd64
cores: 2
features: nesting=1
hostname: sigenu
memory: 4096
nameserver: 10.60.0.5 10.60.0.12
net0: name=eth0,bridge=vmbr0,gw=10.60.0.254,hwaddr=C2:8E:DA:E2:1B:C6,ip=10.60.0.56/18,ip6=auto,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-104-disk-1,size=100G
searchdomain: uart.edu.cu
swap: 8192
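As a side note, the rootfs line above is an LVM-thin volume; a quick sketch of resolving the volume ID to its block device with pvesm:
Code:
# Resolve the storage volume ID to the underlying device path:
pvesm path local-lvm:vm-104-disk-1
# Typically prints /dev/pve/vm-104-disk-1, which is the same device as
# /dev/mapper/pve-vm--104--disk--1 from the error above.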
 
I am willing to try anything to restore container 104, since the system hosted there holds the database of all the students and staff at my workplace, and unfortunately we do not have an up-to-date backup; the most recent one is from 2021.
 
What seems strange to me is that all of the containers are affected by this. Is the underlying storage okay? What is the output of vgs and lvs?
 
This is the vgs output:
Code:
root@pve3:/# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1  17   0 wz--n- <3.64t 576.00m

and this is the lvs output:
Code:
root@pve3:/# lvs
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  base-100-disk-1 pve Vri---tz-k 100.00g data
  base-120-disk-0 pve Vri---tz-k 500.00g data
  data            pve twi-aotz--  <3.49t             3.03   0.53
  data_meta0      pve -wi-a-----  15.81g
  data_meta1      pve -wi-a-----  15.81g
  root            pve -wi-ao----  96.00g
  swap            pve -wi-ao----   8.00g
  vm-101-disk-0   pve Vwi---tz--  58.00g data
  vm-102-disk-0   pve Vwi---tz--  50.00g data
  vm-103-disk-1   pve Vwi-a-tz--  58.00g data        10.75
  vm-104-disk-1   pve Vwi-a-tz-- 100.00g data        93.64
  vm-105-disk-1   pve Vwi-a-tz--  48.00g data        17.66
  vm-107-disk-1   pve Vwi---tz--  48.00g data
  vm-111-disk-1   pve Vwi---tz--  58.00g data
  vm-113-disk-1   pve Vwi---tz-- 100.00g data
  vm-113-disk-2   pve Vwi---tz-- 100.00g data
  vm-121-disk-0   pve Vwi---tz-- 500.00g data
 
What's the output of lsblk -o +FSTYPE? Try running an fsck on the volume: fsck /dev/dm-8
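For clarity, /dev/dm-8 is just the kernel name of the container's root volume; a sketch of double-checking that mapping first, since dm numbers can change between boots:
Code:
# The symlinks in /dev/mapper/ show which volume each dm device belongs to:
ls -l /dev/mapper/ | grep 'dm-8$'
# Expected: pve-vm--104--disk--1 -> ../dm-8

# Running the check against the mapper name avoids relying on the dm number:
fsck /dev/mapper/pve-vm--104--disk--1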
 
This is the output of lsblk -o +FSTYPE:
Code:
root@pve3:/# lsblk -o +FSTYPE
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT FSTYPE
sda                            8:0    0  3.6T  0 disk
├─sda1                         8:1    0 1007K  0 part
├─sda2                         8:2    0  512M  0 part            vfat
└─sda3                         8:3    0  3.6T  0 part            LVM2_member
  ├─pve-swap                 253:0    0    8G  0 lvm  [SWAP]     swap
  ├─pve-root                 253:1    0   96G  0 lvm  /          ext4
  ├─pve-data_tmeta           253:2    0 15.8G  0 lvm
  │ └─pve-data-tpool         253:4    0  3.5T  0 lvm
  │   ├─pve-data             253:5    0  3.5T  1 lvm
  │   ├─pve-vm--103--disk--1 253:7    0   58G  0 lvm             ext4
  │   ├─pve-vm--104--disk--1 253:8    0  100G  0 lvm             ext4
  │   ├─pve-vm--105--disk--1 253:9    0   48G  0 lvm             ext4
  │   └─pve-vm--110--disk--0 253:11   0   10G  0 lvm             ext4
  ├─pve-data_tdata           253:3    0  3.5T  0 lvm
  │ └─pve-data-tpool         253:4    0  3.5T  0 lvm
  │   ├─pve-data             253:5    0  3.5T  1 lvm
  │   ├─pve-vm--103--disk--1 253:7    0   58G  0 lvm             ext4
  │   ├─pve-vm--104--disk--1 253:8    0  100G  0 lvm             ext4
  │   ├─pve-vm--105--disk--1 253:9    0   48G  0 lvm             ext4
  │   └─pve-vm--110--disk--0 253:11   0   10G  0 lvm             ext4
  ├─pve-data_meta0           253:6    0 15.8G  0 lvm
  └─pve-data_meta1           253:10   0 15.8G  0 lvm
sr0                           11:0    1  1.1G  0 rom             iso9660

When I execute fsck /dev/dm-8 it shows me the following. Should I answer yes?
Code:
root@pve3:/# fsck /dev/dm-8
fsck from util-linux 2.36.1
e2fsck 1.46.2 (28-Feb-2021)
/dev/mapper/pve-vm--104--disk--1: recovering journal
JBD2: Invalid checksum recovering data block 0 in log
JBD2: Invalid checksum recovering data block 21495962 in log
JBD2: Invalid checksum recovering data block 21495818 in log
JBD2: Invalid checksum recovering data block 11 in log
JBD2: Invalid checksum recovering data block 18874400 in log
JBD2: Invalid checksum recovering data block 6815760 in log
JBD2: Invalid checksum recovering data block 4 in log
JBD2: Invalid checksum recovering data block 6815986 in log
JBD2: Invalid checksum recovering data block 6816181 in log
JBD2: Invalid checksum recovering data block 6824464 in log
JBD2: Invalid checksum recovering data block 6824250 in log
JBD2: Invalid checksum recovering data block 7340048 in log
JBD2: Invalid checksum recovering data block 7340241 in log
JBD2: Invalid checksum recovering data block 7340032 in log
JBD2: Invalid checksum recovering data block 7864336 in log
JBD2: Invalid checksum recovering data block 7864621 in log
Journal checksum error found in /dev/mapper/pve-vm--104--disk--1
/dev/mapper/pve-vm--104--disk--1 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Deleted inode 12 has zero dtime.  Fix<y>?
 
No, before you do that you should create a backup of the disk image first.
You can create, e.g., a file-based image of the disk using dd if=/dev/mapper/pve-vm--104--disk--1 of=</path/to/output/file.img> bs=4M. Make sure that you have enough disk space at the target location.

Then you can test things without danger of data loss. A copy of the created image can also be mounted as a loop device via losetup /dev/loop1 /path/to/output/file-copy.img, so you can test things on the image directly rather than on the volume.
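Put together, that workflow might look roughly like this (a sketch; the /mnt/backup target is hypothetical and needs about 100 GB of free space):
Code:
# 1. Raw copy of the corrupted volume to an image file:
dd if=/dev/mapper/pve-vm--104--disk--1 of=/mnt/backup/vm-104-disk-1.img bs=4M status=progress

# 2. Work on a copy so the original image stays untouched:
cp /mnt/backup/vm-104-disk-1.img /mnt/backup/vm-104-disk-1-work.img

# 3. Attach the working copy as a loop device and run the filesystem check on it:
losetup /dev/loop1 /mnt/backup/vm-104-disk-1-work.img
fsck /dev/loop1

# 4. Detach the loop device when done:
losetup -d /dev/loop1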
 
OMG!!! It works, thank you so much bro, you saved my life!!! fsck is the lifesaver XD
Glad it worked! Make sure to have a backup solution in place so you don't run into this kind of trouble again in the future.
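For example, a one-off container backup can be taken with vzdump (the storage name "backups" is hypothetical; it has to be a storage with the backup content type enabled), and scheduled jobs can be configured under Datacenter -> Backup in the web UI:
Code:
# One-off backup of container 104 to a backup-capable storage:
vzdump 104 --storage backups --mode snapshot --compress zstd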
 