LXC Container disk too large. Cannot Start

Raymond Burns

I received this error while resizing an LXC Mount Point:
Code:
Task viewer: resize

2017-04-10 07:56:27.325140 7f1dd1994780 -1 did not load config file, using default settings.
Resizing image: 100% complete...done.
e2fsck 1.42.12 (29-Aug-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/rbd/Ceph3/vm-118-disk-4: 585/503316480 files (54.0% non-contiguous), 3961800993/4026531840 blocks
resize2fs 1.42.12 (29-Aug-2014)
resize2fs: New size too large to be expressed in 32 bits

Failed to update the container's filesystem: command 'resize2fs /dev/rbd/Ceph3/vm-118-disk-4' failed: exit code 1

TASK OK

The final size of the Mount Point (mp1 in the container config) is:
Code:
#Bacula Backup
arch: amd64
cpulimit: 4
cpuunits: 1024
hostname: ####Bacula
memory: 8192
mp1: Ceph3:vm-118-disk-4,mp=/mnt/pct2schedule-disk-4,size=25370G
mp2: Ceph3:vm-118-disk-6,mp=/mnt/monthly-disk-6,size=9092G
mp3: Ceph3:vm-118-disk-5,mp=/mnt/annual-disk-5,size=5000G
net0: name=eth0,bridge=vmbr0,gw=10.255.86.1,hwaddr=FA:C0:F8:41:15:A3,ip=10.255.86.64/24,ip6=dhcp,type=veth
onboot: 0
ostype: centos
parent: bacula_install
rootfs: Ceph3:vm-118-disk-3,size=30G
swap: 1024
Code:
# pveversion
pve-manager/4.4-12/e71b7a74 (running kernel: 4.4.40-1-pve)
"version": "ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)"

Code:
# ceph -s
    cluster ccf2ef03-0a0f-42dd-a553-7dd4e102b7fc
     health HEALTH_WARN
            crush map has legacy tunables (require bobtail, min is firefly)
            all OSDs are running jewel or later but the 'require_jewel_osds' osdmap flag is not set
     monmap e23: 4 mons at {0=10.255.87.24:6789/0,1=10.255.87.27:6789/0,5=10.255.87.25:6789/0,6=10.255.87.26:6789/0}
            election epoch 9922, quorum 0,1,2,3 0,5,6,1
     osdmap e159655: 136 osds: 134 up, 134 in
      pgmap v31661361: 2048 pgs, 1 pools, 42261 GB data, 10579 kobjects
            123 TB used, 248 TB / 372 TB avail
                2048 active+clean
  client io 107 kB/s rd, 705 kB/s wr, 5 op/s rd, 51 op/s wr
 
Looks like you're going past the 16TB limit: without the 64bit feature, ext4 stores block numbers in 32 bits, so with 4 KiB blocks the file system cannot grow past 2^32 * 4 KiB = 16 TiB. Enabling 64-bit sizes for ext4 seems to require a newer e2fsprogs (>= 1.43). You could try installing it from jessie-backports and running `resize2fs -b /dev/rbd/Ceph3/vm-118-disk-4` manually (but I recommend making a snapshot or backup first, such operations always come with some risk).
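Roughly, something along these lines (just a sketch: the backports entry is only an example, and all of it assumes the container is stopped and the snapshot/backup is in place):
Code:
# add jessie-backports and pull in a newer e2fsprogs (PVE 4.4 is based on Debian jessie)
echo "deb http://ftp.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/jessie-backports.list
apt-get update
apt-get -t jessie-backports install e2fsprogs

# check the file system, switch it to 64-bit block addressing, then grow it to fill the rbd device
e2fsck -f /dev/rbd/Ceph3/vm-118-disk-4
resize2fs -b /dev/rbd/Ceph3/vm-118-disk-4
resize2fs /dev/rbd/Ceph3/vm-118-disk-4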
 
Yes, I did not know about the limit.
This Mount Point has important data on it.
Is there any way to shrink it or split it somehow? I have 15TB of data on it.
I'll give your solution a try for now.

Also, is there a way to access my RAW data without the VM it is attached to? It should still be in my CEPH Pool even if the risk turns out badly.
 
From the GUI
Code:
2017-04-11 07:11:59.424242 7f57c0aba780 -1 did not load config file, using default settings.
Resizing image: 100% complete...done.
e2fsck 1.43.3 (04-Sep-2016)
e2fsck: MMP: fsck being run while checking MMP block
MMP check failed: If you are sure the filesystem is not in use on any node, run:
'tune2fs -f -E clear_mmp {device}'

MMP error info: last update: Mon Apr 10 08:01:55 2017
node: pct2-prox-d device: /dev/rbd/Ceph3/vm-118-disk-4

/dev/rbd/Ceph3/vm-118-disk-4: ********** WARNING: Filesystem still has errors **********

Failed to update the container's filesystem: command 'e2fsck -f -y /dev/rbd/Ceph3/vm-118-disk-4' failed: exit code 12

Then from command line
Code:
# resize2fs -b /dev/rbd/Ceph3/vm-118-disk-4
resize2fs 1.43.3 (04-Sep-2016)
resize2fs: MMP: fsck being run while trying to open /dev/rbd/Ceph3/vm-118-disk-4
Couldn't find valid filesystem superblock.
 
Also, is there a way to access my RAW data without the VM it is attached to? It should still be in my CEPH Pool even if the risk turns out badly.

These commands operate directly on the file system the container is using, so no, this will not work. Such operations should always be done with care, and important data has to be backed up one way or another. A snapshot should probably be enough, but make sure you can access the snapshot before proceeding. (`rbd map` can map snapshot devices, which you should then be able to mount read-only.)
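For example (the snapshot name and mount point are just placeholders):
Code:
# snapshot the image before touching it
rbd -p Ceph3 snap create vm-118-disk-4@before-resize

# map the snapshot (mapped snapshots are read-only) and mount it to check the data is readable
DEV=$(rbd -p Ceph3 map vm-118-disk-4@before-resize)
mkdir -p /mnt/snap-check
mount -o ro,noload "$DEV" /mnt/snap-check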

Note that the container should be off when you do this (it'll refuse to perform these operations on an active file system unless you use force flags). You should be able to use `rbd -p Ceph3 map vm-118-disk-4` to get the /dev/rbd/... device when the container is off in case the device is not there anymore - you can also directly mount that device somewhere afterwards to access the data.
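Putting it together, roughly (118 being the container ID here; clearing the MMP block is only safe once you are sure nothing else has the image mapped or mounted):
Code:
pct stop 118
rbd -p Ceph3 map vm-118-disk-4

# the aborted fsck left a stale MMP block behind; e2fsck's own message above suggests
# clearing it once the image is definitely not in use anywhere else
tune2fs -f -E clear_mmp /dev/rbd/Ceph3/vm-118-disk-4

# after that the e2fsck/resize2fs steps can be retried, or the device mounted
# directly somewhere to get at the data
mkdir -p /mnt/vm-118-disk-4
mount /dev/rbd/Ceph3/vm-118-disk-4 /mnt/vm-118-disk-4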
 
