[SOLVED] Resized container MP, now can't boot.

Colonal

I have a Debian container (199) with a mount point that I needed to grow from ~1 TB to ~25 TB.
(Note: the /dev/x14TB/ volume group is named that because it lives on a RAID card full of 14 TB drives, not because it's a single 14 TB drive.)
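(For anyone doing this from the CLI, the equivalent of the GUI resize should be pct resize; a rough sketch, with mp0 being the mount point in my config:)
Code:
# grow mount point mp0 of container 199 to 25T (same as the GUI "Resize disk" button)
pct resize 199 mp0 25T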

I shut down the container and used the web GUI "Resize disk" button, and got this error.
Code:
  Size of logical volume x14TB/vm-199-disk-0 changed from 1000.00 GiB (256000 extents) to 24.41 TiB (6400000 extents).
  Logical volume x14TB/vm-199-disk-0 successfully resized.
e2fsck 1.44.5 (15-Dec-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/x14TB/vm-199-disk-0: 101117/65536000 files (1.2% non-contiguous), 160323221/262144000 blocks
resize2fs 1.44.5 (15-Dec-2018)
resize2fs: MMP: invalid magic number while trying to resize /dev/x14TB/vm-199-disk-0
Please run 'e2fsck -fy /dev/x14TB/vm-199-disk-0' to fix the filesystem
after the aborted resize operation.
Resizing the filesystem on /dev/x14TB/vm-199-disk-0 to 6553600000 (4k) blocks.
Failed to update the container's filesystem: command 'resize2fs /dev/x14TB/vm-199-disk-0' failed: exit code 1

TASK OK

It ended with TASK OK and I thought it had worked at first, but when attempting to start the container I got this error.
Code:
run_buffer: 314 Script exited with status 32
lxc_init: 798 Failed to run lxc.hook.pre-start for container "199"
__lxc_start: 1945 Failed to initialize container "199"
TASK ERROR: startup for container '199' failed

I went back to the resize task output, saw that it had actually failed and suggested a command to fix the filesystem, so I ran that and got this error.
Code:
root@prox:~# e2fsck -fy /dev/x14TB/vm-199-disk-0
e2fsck 1.44.5 (15-Dec-2018)
e2fsck: No such file or directory while trying to open /dev/x14TB/vm-199-disk-0
Possibly non-existent device?

I know the storage still exists, because I can see it in the web GUI for that storage, and it shows the new, increased size.
I have another container (102) with a disk on the same storage, so I started comparing what I could find that was different.
I then ran lvdisplay and saw that it lists the disk as NOT available:
Code:
  --- Logical volume ---
  LV Path                /dev/x14TB/vm-102-disk-0
  LV Name                vm-102-disk-0
  VG Name                x14TB
  LV UUID                VMeCiZ-cYjs-RekP-0FVt-P3gf-297s-kRNGcW
  LV Write Access        read/write
  LV Creation host, time prox, 2022-01-20 14:50:22 -0500
  LV Status              available
  # open                 1
  LV Size                19.53 TiB
  Current LE             5120000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
 
  --- Logical volume ---
  LV Path                /dev/x14TB/vm-199-disk-0
  LV Name                vm-199-disk-0
  VG Name                x14TB
  LV UUID                Z4tVsj-lYxY-pf04-lsa1-4rsf-jn88-W3QS70
  LV Write Access        read/write
  LV Creation host, time prox, 2022-07-28 16:29:53 -0400
  LV Status              NOT available
  LV Size                24.41 TiB
  Current LE             6400000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

I ran fdisk -l, ls /dev/mapper, and ls /dev/x14TB; in each of them I can see an entry for 102's disk but not for 199's.
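For reference, these are the commands I compared with:
Code:
fdisk -l           # 199's LV does not show up here
ls /dev/mapper     # 102's disk is listed, 199's is not
ls /dev/x14TB      # same here, only vm-102-disk-0 appears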

I've searched through the forum and found two similar posts, but neither had a resolution posted. I'm currently at a loss for my next step and am looking for advice on how to recover this disk. I do have a backup of all the data, but rebuilding from it would take a while.
 
You might need to activate the LV first. Maybe try lvchange -a y vm-199-disk-0 before running the fsck?
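Something along these lines, using the full VG/LV path (adjust the names if your setup differs):
Code:
# activate the logical volume, then re-run the filesystem check
lvchange -ay x14TB/vm-199-disk-0
e2fsck -fy /dev/x14TB/vm-199-disk-0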
 
OK so activating it first did let me run the e2fsck command.

It looks like it ran successfully and ended with this:
Code:
/dev/x14TB/vm-199-disk-0: ***** FILE SYSTEM WAS MODIFIED *****
/dev/x14TB/vm-199-disk-0: 101138/1638400000 files (1.2% non-contiguous), 259070804/6553600000 blocks

I checked that it was still active after that was done, then I tried starting the CT, and it failed to start again. Here's a debug log.
Code:
root@prox:/etc/pve/lxc# pct start 199 --debug
run_buffer: 314 Script exited with status 32
lxc_init: 798 Failed to run lxc.hook.pre-start for container "199"
__lxc_start: 1945 Failed to initialize container "199"
type g nsid 0 hostid 100000 range 65536
INFO     lsm - lsm/lsm.c:lsm_init:40 - Initialized LSM security driver AppArmor
INFO     conf - conf.c:run_script_argv:331 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "199", config section "lxc"
DEBUG    conf - conf.c:run_buffer:303 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 199 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/mp0: wrong fs type, bad option, bad superblock on /dev/mapper/x14TB-vm--199--disk--0, missing codepage or helper program, or other error.

DEBUG    conf - conf.c:run_buffer:303 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 199 lxc pre-start produced output: command 'mount /dev/dm-1 /var/lib/lxc/.pve-staged-mounts/mp0' failed: exit code 32

ERROR    conf - conf.c:run_buffer:314 - Script exited with status 32
ERROR    start - start.c:lxc_init:798 - Failed to run lxc.hook.pre-start for container "199"
ERROR    start - start.c:__lxc_start:1945 - Failed to initialize container "199"
INFO     conf - conf.c:run_script_argv:331 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "199", config section "lxc"
startup for container '199' failed

So I activated it again and tried the command provided by @fabian, but it just gave an error about the root disk, with no mention of this mount point:
Code:
root@prox:/dev/x14TB# pct fsck 199
unable to run fsck for 'NVME:subvol-199-disk-0' (format == subvol)
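(Looking at the docs, pct fsck also seems to take a --device option, so maybe something like this would target the mount point instead of the rootfs; I haven't tried it yet:)
Code:
# check the mp0 volume rather than the (subvol-based) root disk
pct fsck 199 --device mp0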

With it still active, I ran resize2fs again, and this is what I got
Code:
root@prox:/etc/pve/lxc# resize2fs /dev/x14TB/vm-199-disk-0
resize2fs 1.44.5 (15-Dec-2018)
resize2fs: MMP: invalid magic number while trying to open /dev/x14TB/vm-199-disk-0
Couldn't find valid filesystem superblock.

Edit: I've been looking this up and I've seen a couple of posts saying that resize2fs isn't the right command to use on LVM volumes. I'm not sure that's the case, though, since the resize tool itself used it. Just something I wanted to confirm, I guess?
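From what I've read, the usual flow for an ext4 filesystem on LVM is two steps (grow the LV, then grow the filesystem inside it), which looks like roughly what the resize task tried to do. A rough example:
Code:
# step 1: grow the logical volume (target size here is just illustrative)
lvextend -L 25T x14TB/vm-199-disk-0
# step 2: grow the ext4 filesystem inside it to fill the LV
resize2fs /dev/x14TB/vm-199-disk-0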
 
The bad news: it seems the corruption caused by the bug is not recoverable. We are currently in the process of testing a backport of the involved packages with the fix (to avoid future corruptions). Fixed packages are also already available in bullseye-backports (e2fsprogs in version 1.46-5-2~bpo11+2).
 
Available on pvetest now as well.
 
Thanks for getting those details back to me. I definitely need to do some updates to get this fix.
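(If I'm reading it right, pulling the fixed e2fsprogs from bullseye-backports would be roughly the following, assuming backports isn't already in my sources:)
Code:
# enable bullseye-backports, then install e2fsprogs from it
echo "deb http://deb.debian.org/debian bullseye-backports main" >> /etc/apt/sources.list
apt update
apt install -t bullseye-backports e2fsprogs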

I did find a way to recover it buried in another post online, so I'm going to post it here for anyone who runs into this in the future:
I made a clone of the container, and the new clone works.
It's not ideal, because you need double the storage available, but it does create everything correctly on the new CT.
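In CLI terms that's just a full clone to a new VMID, e.g. (299 here is simply whichever free ID you pick):
Code:
# a full clone copies the disks, so the new CT gets a freshly created, working volume
pct clone 199 299 --full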
 