Superblock/LVM problem: container won't boot

Areyuel · New Member · Nov 11, 2023
Hi all,

New Proxmox user here (a few months of use, love it).

I think I broke something due to my limited knowledge of how storage works.

I had a container running Linux with a Docker setup, and it was running out of space (vm-101-disk-0).

Based on .bash_history, these are the commands I used:

Bash:
lvdisplay
lvextend -L +50G /dev/pve/vm-101-disk-0
pvdisplay
lvdisplay
lvresize --extents +100%FREE --resizefs /dev/pve/vm-101-disk-0
lvextend -L -50G /dev/pve/vm-101-disk-0
lvreduce
lvreduce -L -50G /dev/pve/vm-101-disk-0
lvresize --extents +100%FREE --resizefs /dev/pve/vm-101-disk-0
parted -l
fdisk -l
apt install parted
parted -l
parted /dev/nvme0n1
vgdisplay
clear
vgdisplay
pvdisplay
lvdisplay
lvscan
exit
lxc-start -n 101 -F -lDEBUG -o lxc-101.log

I screwed up the container: now it no longer starts.

Code:
mount: /var/lib/lxc/.pve-staged-mounts/rootfs: wrong fs type, bad option, bad superblock on /dev/mapper/pve-vm--101--disk--0


The filesystem size (according to the superblock) is 35649536 blocks
The physical size of the device is 26734592 blocks
Either the superblock or the partition table is likely to be corrupt!

lsblk and fdisk show 102G, while lvs shows 70G.


This is all the info I've gathered so far:

Code:
root@pve:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   4   0 wz--n- <222.57g 15.99g
  
-----------------------------------------------------------------------------
root@pve:~# pvs
  PV             VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve lvm2 a--  <222.57g 15.99g
  
-----------------------------------------------------------------------------
root@pve:~# lvs
  WARNING: Cannot find matching thin segment for pve/vm-101-disk-0.
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <130.27g             16.65  1.76
  root          pve -wi-ao----   65.64g
  swap          pve -wi-ao----    8.00g
  vm-101-disk-0 pve Vwi-XXtzX-   70.00g data
  
-----------------------------------------------------------------------------
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir

dir: Store1
        path /mnt/Store1
        content rootdir,backup,vztmpl,images,snippets,iso
        prune-backups keep-all=1
        shared 0

-----------------------------------------------------------------------------
root@pve:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   1.8T  0 disk
├─sda1                         8:1    0 931.5G  0 part /mnt/Store1
├─sda2                         8:2    0 465.8G  0 part
└─sda3                         8:3    0 465.8G  0 part
sr0                           11:0    1  1024M  0 rom
nvme0n1                      259:0    0 223.6G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 222.6G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0  65.6G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   1.3G  0 lvm
  │ └─pve-data-tpool         253:4    0 130.3G  0 lvm
  │   ├─pve-data             253:5    0 130.3G  1 lvm
  │   └─pve-vm--101--disk--0 253:6    0   102G  0 lvm
  └─pve-data_tdata           253:3    0 130.3G  0 lvm
    └─pve-data-tpool         253:4    0 130.3G  0 lvm
      ├─pve-data             253:5    0 130.3G  1 lvm
      └─pve-vm--101--disk--0 253:6    0   102G  0 lvm

  
-----------------------------------------------------------------------------
root@pve:~# cat lxc-101.log
lxc-start 101 20231111101914.136 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 101 20231111101914.136 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 101 20231111101914.138 INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
lxc-start 101 20231111101914.138 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "101", config section "lxc"
lxc-start 101 20231111101914.336 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: wrong fs type, bad option, bad superblock on /dev/mapper/pve-vm--101--disk--0, missing codepage or helper program, or other error.

lxc-start 101 20231111101914.336 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: command 'mount /dev/dm-6 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

lxc-start 101 20231111101914.344 ERROR    conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 32
lxc-start 101 20231111101914.344 ERROR    start - ../src/lxc/start.c:lxc_init:844 - Failed to run lxc.hook.pre-start for container "101"
lxc-start 101 20231111101914.344 ERROR    start - ../src/lxc/start.c:__lxc_start:2027 - Failed to initialize container "101"
lxc-start 101 20231111101914.344 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "101", config section "lxc"
lxc-start 101 20231111101914.845 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "101", config section "lxc"
lxc-start 101 20231111101915.134 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: umount: /var/lib/lxc/101/rootfs: not mounted

lxc-start 101 20231111101915.134 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: command 'umount --recursive -- /var/lib/lxc/101/rootfs' failed: exit code 1

lxc-start 101 20231111101915.141 ERROR    conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 1
lxc-start 101 20231111101915.141 ERROR    start - ../src/lxc/start.c:lxc_end:985 - Failed to run lxc.hook.post-stop for container "101"
lxc-start 101 20231111101915.141 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:306 - The container failed to start
lxc-start 101 20231111101915.141 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:311 - Additional information can be obtained by setting the --logfile and --logpriority options

-----------------------------------------------------------------------------
root@pve:~# pct mount 101
  WARNING: Cannot find matching thin segment for pve/vm-101-disk-0.
mount: /var/lib/lxc/101/rootfs: wrong fs type, bad option, bad superblock on /dev/mapper/pve-vm--101--disk--0, missing codepage or helper program, or other error.
mounting container failed
command 'mount /dev/dm-6 /var/lib/lxc/101/rootfs//' failed: exit code 32

-----------------------------------------------------------------------------
root@pve:~# fdisk -l
Disk /dev/nvme0n1: 223.57 GiB, 240057409536 bytes, 468862128 sectors
Disk model: Force MP500
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: EB2EDE31-BDBE-4A23-9C06-4AF9CC9AA000

Device           Start       End   Sectors   Size Type
/dev/nvme0n1p1      34      2047      2014  1007K BIOS boot
/dev/nvme0n1p2    2048   2099199   2097152     1G EFI System
/dev/nvme0n1p3 2099200 468862094 466762895 222.6G Linux LVM


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 65.64 GiB, 70485278720 bytes, 137666560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-vm--101--disk--0: 101.98 GiB, 109504888832 bytes, 213876736 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DM001-1ER1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E0030EE9-AA96-4758-A4CC-BB6D68D67CC3

Device          Start        End    Sectors   Size Type
/dev/sda1        2048 1953515519 1953513472 931.5G Linux filesystem
/dev/sda2  1953515520 2930272255  976756736 465.8G Microsoft basic data
/dev/sda3  2930272256 3907028991  976756736 465.8G Microsoft basic data

-----------------------------------------------------------------------------
root@pve:~# parted -l
Model: ATA ST2000DM001-1ER1 (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  1000GB  1000GB  ext4
 2      1000GB  1500GB  500GB   ntfs         Basic data partition  msftdata
 3      1500GB  2000GB  500GB   ntfs         Basic data partition  msftdata


Model: Linux device-mapper (thin) (dm)
Disk /dev/mapper/pve-vm--101--disk--0: 110GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End    Size   File system  Flags
 1      0.00B  110GB  110GB  ext4


Error: /dev/mapper/pve-data: unrecognised disk label
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/pve-data: 140GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Error: /dev/mapper/pve-data-tpool: unrecognised disk label
Model: Linux device-mapper (thin-pool) (dm)
Disk /dev/mapper/pve-data-tpool: 140GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/pve-swap: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  8590MB  8590MB  linux-swap(v1)


Error: /dev/mapper/pve-data_tdata: unrecognised disk label
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/pve-data_tdata: 140GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/pve-root: 70.5GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  70.5GB  70.5GB  ext4


Error: /dev/mapper/pve-data_tmeta: unrecognised disk label
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/pve-data_tmeta: 1430MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: Force MP500 (nvme)
Disk /dev/nvme0n1: 240GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  1049kB  1031kB                     bios_grub
 2      1049kB  1075MB  1074MB  fat32              boot, esp
 3      1075MB  240GB   239GB                      lvm



-----------------------------------------------------------------------------
root@pve:~# pct fsck 101
fsck from util-linux 2.36.1
/dev/mapper/pve-vm--101--disk--0: The filesystem size (according to the superblock) is 35649536 blocks
The physical size of the device is 26734592 blocks
Either the superblock or the partition table is likely to be corrupt!


/dev/mapper/pve-vm--101--disk--0: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
command 'fsck -a -l /dev/pve/vm-101-disk-0' failed: exit code 4
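For reference, "the filesystem size (according to the superblock)" is just a field fsck reads out of the ext4 superblock, which sits at byte offset 1024 on the device. A minimal sketch on a synthetic buffer (not the real volume; offsets are from the standard ext2/3/4 on-disk layout, and for simplicity it ignores the 64-bit high word of the block count):

```python
import struct

# ext4 keeps its primary superblock at byte offset 1024 on the device.
# Within the superblock: s_blocks_count_lo (u32) is at offset 0x04 and
# the magic number 0xEF53 (u16) at offset 0x38.
SUPERBLOCK_OFFSET = 1024

def fs_blocks_from_superblock(raw: bytes) -> int:
    """Return the filesystem size in blocks recorded in an ext superblock."""
    sb = raw[SUPERBLOCK_OFFSET:SUPERBLOCK_OFFSET + 1024]
    magic, = struct.unpack_from("<H", sb, 0x38)
    assert magic == 0xEF53, "not an ext filesystem"
    blocks_lo, = struct.unpack_from("<I", sb, 0x04)
    return blocks_lo

# Synthetic device image with only those two fields filled in,
# mimicking the count fsck reported for vm-101-disk-0.
image = bytearray(4096)
struct.pack_into("<I", image, SUPERBLOCK_OFFSET + 0x04, 35_649_536)
struct.pack_into("<H", image, SUPERBLOCK_OFFSET + 0x38, 0xEF53)

print(fs_blocks_from_superblock(bytes(image)))  # 35649536
```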

-----------------------------------------------------------------------------
root@pve:~# /sbin/resize2fs -f /dev/pve/vm-101-disk-0
resize2fs 1.46.5 (30-Dec-2021)
Resizing the filesystem on /dev/pve/vm-101-disk-0 to 26734592 (4k) blocks.
/sbin/resize2fs: Block bitmap checksum does not match bitmap while trying to resize /dev/pve/vm-101-disk-0
Please run 'e2fsck -fy /dev/pve/vm-101-disk-0' to fix the filesystem
after the aborted resize operation.

I've tried every backup superblock. First I listed their locations with
Code:
mkfs.ext4 -n /dev/pve/vm-101-disk-0
and then
Code:
fsck -b 32768 /dev/pve/vm-101-disk-0
fsck -b 98304 /dev/pve/vm-101-disk-0
...

all of them with the same result:
Code:
fsck from util-linux 2.36.1
e2fsck 1.46.5 (30-Dec-2021)
/dev/mapper/pve-vm--101--disk--0: recovering journal
fsck.ext4: unable to set superblock flags on /dev/mapper/pve-vm--101--disk--0


/dev/mapper/pve-vm--101--disk--0: ***** FILE SYSTEM WAS MODIFIED *****

/dev/mapper/pve-vm--101--disk--0: ********** WARNING: Filesystem still has errors **********

I've also tried restoring the LVM metadata from the archives:
Code:
root@pve:~# vgcfgrestore -l pve

...
File:         /etc/lvm/archive/pve_00002-1833956430.vg
  VG name:      pve
  Description:  Created *before* executing 'lvextend -L +50G /dev/pve/vm-101-disk-0'
  Backup Time:  Fri Nov 10 13:59:58 2023
...
  
root@pve:~# vgcfgrestore -f /etc/lvm/archive/pve_00003-1710421572.vg pve --force

...no luck.


I've read a few posts in this forum and tried some other commands, but none of them worked. I've attached the whole .bash_history just in case.



I know I could redo everything I had in the Docker containers in a few weeks, but I would really like to fix this in order to learn (and to avoid redoing it all).

Any ideas would be appreciated. I'll be monitoring this post and will try to answer any questions ASAP.

Thanks in advance.

PS: Sorry about the wall of text, but I don't know what is really relevant, so I included everything.
PS2: English is not my first language.
 

Attachments

  • bash_history.txt (3.2 KB)
