Issue with LXC disk resize on PVE 5.2-9

Underphil

Hi all.

Having an issue here that's likely Debian rather than Proxmox, but I figured I'd post here anyway. A couple of days back I tried to resize a mount point on an Ubuntu 16.04 LTS LXC container disk (through the interface). I resized the volume on the Nimble array that backs it, ran 'pvresize /dev/sdx' to grow the physical volume, and at that point everything looked fine in lsblk and vgdisplay. But doing an offline resize in the interface, I get this:

Code:
  Size of logical volume OPS_LVM_VG/vm-109-disk-2 changed from 8.79 TiB (2304000 extents) to 18.55 TiB (4864000 extents).
  Logical volume OPS_LVM_VG/vm-109-disk-2 successfully resized.
e2fsck 1.43.4 (31-Jan-2017)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/OPS_LVM_VG/vm-109-disk-2: 12/294912000 files (0.0% non-contiguous), 19500704/2359296000 blocks
resize2fs 1.43.4 (31-Jan-2017)
resize2fs: MMP: invalid magic number while trying to resize /dev/OPS_LVM_VG/vm-109-disk-2
Please run 'e2fsck -fy /dev/OPS_LVM_VG/vm-109-disk-2' to fix the filesystem
after the aborted resize operation.
Resizing the filesystem on /dev/OPS_LVM_VG/vm-109-disk-2 to 4980736000 (4k) blocks.
Failed to update the container's filesystem: command 'resize2fs /dev/OPS_LVM_VG/vm-109-disk-2' failed: exit code 1

TASK OK

Since then, I have created a number of new LXC containers for testing, and any resize results in the same issue (and a damaged image that won't mount). The same happens when running the commands manually, of course.
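For completeness, the manual sequence is roughly the following (device and LV names taken from the log above; yours will differ, and these commands modify real volumes, so don't paste them blindly):

```shell
# Hypothetical names -- substitute your own PV, VG, and LV.
pvresize /dev/sdx                                    # pick up the grown backing LUN
lvextend -L +10T /dev/OPS_LVM_VG/vm-109-disk-2       # grow the logical volume
e2fsck -f /dev/OPS_LVM_VG/vm-109-disk-2              # mandatory check before an offline resize
resize2fs /dev/OPS_LVM_VG/vm-109-disk-2              # this is the step that dies with the MMP error
```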

Can anyone shed some light on what might be causing this? A bad version of resize2fs, maybe?

//Edit: Before anyone asks: if I run the e2fsck repair, it complains about the invalid magic number, claims to fix it, and then it's just rinse and repeat.
 
Curious. If I see this correctly, it's supposed to fail with a meaningful error message telling you that you're resizing past the 16 TiB maximum (a 32-bit block count at 4 KiB per block = 16 TiB).
The fact that it fails like this means we need to add a check, and also a way to have containers formatted with the 64bit feature set at create/mkfs time.
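To spell that limit out (plain arithmetic, nothing Proxmox-specific): without the ext4 64bit feature, block numbers are 32-bit, so the filesystem tops out at 2^32 blocks of 4 KiB, and the 4980736000 blocks requested in the log above are past that:

```shell
# 4294967296 = 2^32 blocks of 4 KiB each -> 16 TiB ceiling for ext4 without the 64bit feature
echo $(( 4294967296 * 4096 / 1024 / 1024 / 1024 / 1024 ))   # prints 16 (TiB)
# requested size from the log: 4980736000 4k blocks -- over the 32-bit block limit?
echo $(( 4980736000 > 4294967296 ))                         # prints 1 (yes)
```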


EDIT:
That might not be it. Testing this here manually worked without issues, so resize2fs might fix up the flag after all.
So the issue must be wherever that MMP error comes from.
If the container is online, you shouldn't see the fsck output; if it's offline, you shouldn't get the MMP error message.
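You can check which features a filesystem actually has with dumpe2fs. A non-destructive sketch on a scratch loopback file (assumes e2fsprogs is installed; no real disks are touched):

```shell
# Create a small scratch image and format it with the 64bit feature enabled.
truncate -s 512M /tmp/ext4-64bit-test.img
mkfs.ext4 -F -q -O 64bit /tmp/ext4-64bit-test.img
# Inspect the superblock: "64bit" should appear in the feature list;
# "mmp" would only appear if multi-mount protection had been enabled.
dumpe2fs -h /tmp/ext4-64bit-test.img 2>/dev/null | grep 'Filesystem features'
rm /tmp/ext4-64bit-test.img
```

Running the same `dumpe2fs -h` against the affected LV would show whether mmp somehow got switched on there.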
 
