Container failed to start because of mount exit code 32

Elephant6692
New Member
Oct 13, 2023
I resized my container while it was running and then ran an upgrade. I'm not sure which step caused this, but the container no longer starts.

Bash:
lxc-start -n 113 -F -l DEBUG -o /tmp/lxc-113.log
Code:
lxc-start 113 20231013124833.327 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 113 20231013124833.327 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 113 20231013124833.328 INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
lxc-start 113 20231013124833.328 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "113", config section "lxc"
lxc-start 113 20231013124835.417 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 113 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: can't read superblock on /dev/mapper/pve-vm--113--disk--0.
       dmesg(1) may have more information after failed mount system call.

lxc-start 113 20231013124835.418 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 113 lxc pre-start produced output: command 'mount /dev/dm-8 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

lxc-start 113 20231013124835.439 ERROR    conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 32
lxc-start 113 20231013124835.439 ERROR    start - ../src/lxc/start.c:lxc_init:844 - Failed to run lxc.hook.pre-start for container "113"
lxc-start 113 20231013124835.439 ERROR    start - ../src/lxc/start.c:__lxc_start:2027 - Failed to initialize container "113"
lxc-start 113 20231013124835.439 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "113", config section "lxc"
lxc-start 113 20231013124836.452 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 113 lxc post-stop produced output: umount: /var/lib/lxc/113/rootfs: not mounted

lxc-start 113 20231013124836.456 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 113 lxc post-stop produced output: command 'umount --recursive -- /var/lib/lxc/113/rootfs' failed: exit code 1

lxc-start 113 20231013124836.475 ERROR    conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 1
lxc-start 113 20231013124836.475 ERROR    start - ../src/lxc/start.c:lxc_end:985 - Failed to run lxc.hook.post-stop for container "113"
lxc-start 113 20231013124836.475 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:306 - The container failed to start
lxc-start 113 20231013124836.478 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:311 - Additional information can be obtained by setting the --logfile and --logpriority options
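For what it's worth, mount's exit code 32 simply means "mount failure" per mount(8); the useful detail is "can't read superblock", and the underlying I/O errors normally land in the kernel log. A sketch of what I'd check on the host (dm-8 is the device-mapper node named in the log above):

```shell
# Sketch: build the kernel-log check for the failed mount.
# dm-8 is the device-mapper node from the lxc-start log above.
NODE="dm-8"
CHECK="dmesg | grep -iE '${NODE}|ext4|i/o error'"
# On the host (as root) one would run: eval "$CHECK"
echo "$CHECK"
```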

Bash:
lsblk -f
Code:
NAME                         FSTYPE   FSVER     LABEL        UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
sr0                                                                                                            
sr1                          iso9660  Joliet Ex vz-tools-lin 2022-12-13-18-25-23-00                            
vda                                                                                                            
├─vda1                                                                                                          
├─vda2                       vfat     FAT32                  25B8-ADE9                                          
└─vda3                       LVM2_mem LVM2 001               2YFrje-VRzX-YmbU-LwVM-Uxc3-dcLn-SVCXF0            
  ├─pve-swap                 swap     1                      ab0bbda7-b3c0-4774-a93e-f124db0eab1f                  [SWAP]
  ├─pve-root                 ext4     1.0                    132f4ab7-3609-48af-aef9-3b437c00a0d2     24.9G    31% /
  ├─pve-data_tmeta                                                                                              
  │ └─pve-data-tpool                                                                                            
  │   ├─pve-data                                                                                                
  │   ├─pve-vm--111--disk--0 ext4     1.0                    d1fde649-0653-4a8e-8b7d-b5cbd9d568b0              
  │   ├─pve-vm--112--disk--0 ext4     1.0                    7aeae040-b1c3-40f4-9e32-6808e5afc7d9              
  │   └─pve-vm--113--disk--0 ext4     1.0                    92a88114-35b2-4b7c-a590-07283af7b16a              
  └─pve-data_tdata                                                                                              
    └─pve-data-tpool                                                                                            
      ├─pve-data                                                                                                
      ├─pve-vm--111--disk--0 ext4     1.0                    d1fde649-0653-4a8e-8b7d-b5cbd9d568b0              
      ├─pve-vm--112--disk--0 ext4     1.0                    7aeae040-b1c3-40f4-9e32-6808e5afc7d9              
      └─pve-vm--113--disk--0 ext4     1.0                    92a88114-35b2-4b7c-a590-07283af7b16a

Bash:
lvs -a
Code:
  LV              VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotzD- 54.68g             100.00 3.32                        
  [data_tdata]    pve Twi-ao---- 54.68g                                                
  [data_tmeta]    pve ewi-ao----  1.00g                                                
  [lvol0_pmspare] pve ewi-------  1.00g                                                
  root            pve -wi-ao---- 39.81g                                                
  swap            pve -wi-ao---- <7.75g                                                
  vm-111-disk-0   pve Vwi-aotz--  3.00g data        57.85                              
  vm-112-disk-0   pve Vwi-aotz--  4.00g data        97.28                              
  vm-113-disk-0   pve Vwi-a-tz-- 70.00g data        70.08
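Note what the `lvs` output shows: the thin pool `data` is at Data% 100.00, i.e. completely full, and the `D` health flag in `twi-aotzD-` likewise marks a pool that has run out of data space. A full thin pool makes writes to its thin volumes fail with I/O errors, which would explain both the unreadable superblock and the failed journal recovery. Since the `pve` VG still has free space, one option is to grow the pool; a sketch (the +10G figure is my assumption, size it to your VG's VFree):

```shell
# Sketch: grow the full thin pool using free space in the pve VG.
# The +10G figure is an assumption; it must fit within the VG's VFree.
POOL="pve/data"
GROW="+10G"
CMD="lvextend -L ${GROW} ${POOL}"
# On the host one would run: $CMD
echo "$CMD"
```

Until the pool leaves the out-of-space state, any filesystem repair is unlikely to stick, because new writes will keep failing.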

Bash:
vgs -a
Code:
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   6   0 wz--n- <119.00g 14.75g

Bash:
pct config 113
Code:
arch: amd64
cores: 4
features: fuse=1,nesting=1
hostname: docker
memory: 8192
nameserver: 192.168.222.112
net0: name=eth0,bridge=vmbr1,firewall=1,gw=192.168.222.1,hwaddr=2A:69:BC:73:90:FE,ip=192.168.222.113/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-113-disk-0,size=70G
swap: 2048
unprivileged: 1
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir

Bash:
pct fsck 113
Code:
fsck from util-linux 2.38.1
/dev/mapper/pve-vm--113--disk--0: recovering journal
fsck.ext4: Input/output error while recovering journal of /dev/mapper/pve-vm--113--disk--0
/dev/mapper/pve-vm--113--disk--0 contains a file system with errors, check forced.
/dev/mapper/pve-vm--113--disk--0: Inodes that were part of a corrupted orphan linked list found. 

/dev/mapper/pve-vm--113--disk--0: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
command 'fsck -a -l /dev/pve/vm-113-disk-0' failed: exit code 4
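For context: `pct fsck` invokes `fsck -a`, which only applies repairs considered safe to make automatically, and fsck's exit code 4 means errors were left uncorrected, hence the "RUN fsck MANUALLY" message. A manual pass would look like this (sketch; run only while CT 113 is stopped):

```shell
# Sketch: manual repair of the container's root FS, with the CT stopped.
DEV="/dev/pve/vm-113-disk-0"
CMD="e2fsck -f -y ${DEV}"   # -f forces a full check, -y answers yes to every prompt
# On the host one would run: $CMD
echo "$CMD"
```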
 
I was able to get it running with
Bash:
fsck /dev/pve/vm-113-disk-0
and answering yes to all prompts.

But upon logging in to the container (which runs Ubuntu 22.04 LTS), I get this:
Code:
Traceback (most recent call last):
  File "/usr/lib/ubuntu-release-upgrader/check-new-release", line 133, in <module>
    m = MetaReleaseCore(useDevelopmentRelease=options.devel_release,
  File "/usr/lib/python3/dist-packages/UpdateManager/Core/MetaRelease.py", line 103, in __init__
    cache = apt.Cache()
  File "/usr/lib/python3/dist-packages/apt/cache.py", line 152, in __init__
    self.open(progress)
  File "/usr/lib/python3/dist-packages/apt/cache.py", line 214, in open
    self._cache = apt_pkg.Cache(progress)
apt_pkg.Error: E:Unable to parse package file /var/lib/apt/extended_states (1)

Bash:
apt update
Code:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Error!
E: Unable to parse package file /var/lib/apt/extended_states (1)

Also, the filesystem seems to be mounted read-only. How can I recover from this?
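On the read-only mount: ext4 typically remounts itself read-only when it hits corruption (the errors=remount-ro behavior). The apt error itself is usually survivable: /var/lib/apt/extended_states only records which packages were marked auto-installed, and apt recreates it if the corrupt file is moved aside (you lose the auto/manual markers, nothing else). A sketch of the idea, rehearsed in a throwaway directory rather than on the real path:

```shell
# Sketch: apt regenerates extended_states if the corrupt file is moved aside.
# Rehearsed here in a temp directory; on the real container the path is
# /var/lib/apt/extended_states and the FS must be writable again first
# (e.g. via mount -o remount,rw / once the underlying corruption is fixed).
ROOT="$(mktemp -d)"
mkdir -p "${ROOT}/var/lib/apt"
printf 'garbled' > "${ROOT}/var/lib/apt/extended_states"
mv "${ROOT}/var/lib/apt/extended_states" "${ROOT}/var/lib/apt/extended_states.broken"
ls "${ROOT}/var/lib/apt"
```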

Upon trying to resize it, running
Bash:
e2fsck -f -y /dev/pve/vm-113-disk-0
gets me
Code:
Restarting e2fsck from the beginning...
e2fsck: MMP: e2fsck being run while trying to open /dev/pve/vm-113-disk-0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

Now I'm stuck with the same problem: the container doesn't start anymore.
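The MMP error here ("e2fsck being run while trying to open") means the filesystem's multi-mount-protection block claims another e2fsck is already using the device; after a crash or interrupted check that flag can be stale. If nothing else is actually mounting or checking the volume, tune2fs can reset it via the `clear_mmp` extended option (which requires `-f`). A sketch, assuming a stale MMP block:

```shell
# Sketch: reset a stale MMP block, then retry the repair.
# Only safe if the device is truly not mounted or being checked elsewhere.
DEV="/dev/pve/vm-113-disk-0"
CLEAR="tune2fs -f -E clear_mmp ${DEV}"
RETRY="e2fsck -f -y ${DEV}"
# On the host one would run: $CLEAR && $RETRY
printf '%s\n%s\n' "$CLEAR" "$RETRY"
```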
 
