[SOLVED] Can't start CT after expanding one of its logical volumes

maeries

Renowned Member
Jul 10, 2015
This first paragraph explains what I just did. It might not be relevant, but I don't want any information to be missing. I have a CT with a root disk and three mount points. I wanted to merge the volume groups of the drives behind mp1 and mp3, so I deleted the logical volume of one of the volume groups and merged them; the merge went roughly as sketched below.
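(A sketch from memory; "Filme2" stands in for the source VG, whose real name I don't recall, and vgmerge requires it to be inactive.)
Code:
lvremove Filme2/vm-101-disk-2   # the LV I deleted (name hypothetical)
vgchange -an Filme2             # the source VG must be deactivated first
vgmerge Filme Filme2            # fold Filme2 into Filme
I then expanded the logical volume using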
Code:
lvresize -L +3724G Filme/vm-101-disk-1
with "Filme" being the VG and "vm-101-disk-1" being the LV. Now, in the webinterface Datecenter->pve->Filme, it shows the new, bigger size. The mount point still has the old size, though, and using the "resize disk" button just tells me, that there is no space left. I'm fairly sure that at this point the CT still started. I then found this article and ran
Code:
resize2fs /dev/mapper/Filme-vm--101--disk--1
which resulted in

Code:
root@pve:~# resize2fs /dev/mapper/Filme-vm--101--disk--1
resize2fs 1.43.4 (31-Jan-2017)
Please run 'e2fsck -f /dev/mapper/Filme-vm--101--disk--1' first.

root@pve:~# e2fsck -f /dev/mapper/Filme-vm--101--disk--1
e2fsck 1.43.4 (31-Jan-2017)
e2fsck: MMP: fsck being run while checking MMP block
MMP check failed: If you are sure the filesystem is not in use on any node, run:
'tune2fs -f -E clear_mmp {device}'

MMP error info: last update: Tue Jan 30 23:25:05 2018
 node: pve device: /dev/mapper/Filme-vm--101--disk-

/dev/mapper/Filme-vm--101--disk--1: ********** WARNING: Filesystem still has errors **********
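From what I've read, MMP (multi-mount protection) is an ext4 feature that refuses checks while the filesystem looks like it is in use on another node. Before running the suggested tune2fs -f -E clear_mmp, it is probably wise to confirm the LV really is idle, along these lines:
Code:
pct status 101                       # container should report "stopped"
lsblk /dev/Filme/vm-101-disk-1       # no mountpoint should be listed
fuser -vm /dev/Filme/vm-101-disk-1   # should show no processes using it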

Now, every time I try to start the CT, I get this:
Code:
root@pve:~# systemctl start pve-container@101
Job for pve-container@101.service failed because the control process exited with error code.
See "systemctl status pve-container@101.service" and "journalctl -xe" for details.
root@pve:~# systemctl status pve-container@101.service
● pve-container@101.service - PVE LXC Container: 101
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2018-01-30 23:56:39 CET; 12s ago
     Docs: man:lxc-start
           man:lxc
           man:pct
  Process: 2165 ExecStart=/usr/bin/lxc-start -n 101 (code=exited, status=1/FAILURE)

Jan 30 23:56:37 pve systemd[1]: Starting PVE LXC Container: 101...
Jan 30 23:56:39 pve lxc-start[2165]: lxc-start: 101: lxccontainer.c: wait_on_daemonized_start: 751 No such file or dir
Jan 30 23:56:39 pve lxc-start[2165]: lxc-start: 101: tools/lxc_start.c: main: 371 The container failed to start.
Jan 30 23:56:39 pve lxc-start[2165]: lxc-start: 101: tools/lxc_start.c: main: 373 To get more details, run the contain
Jan 30 23:56:39 pve lxc-start[2165]: lxc-start: 101: tools/lxc_start.c: main: 375 Additional information can be obtain
Jan 30 23:56:39 pve systemd[1]: pve-container@101.service: Control process exited, code=exited status=1
Jan 30 23:56:39 pve systemd[1]: Failed to start PVE LXC Container: 101.
Jan 30 23:56:39 pve systemd[1]: pve-container@101.service: Unit entered failed state.
Jan 30 23:56:39 pve systemd[1]: pve-container@101.service: Failed with result 'exit-code'.
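The truncated log lines are presumably the usual hint to start the container in the foreground for more details; if I read the lxc-start man page correctly, that would be something like:
Code:
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log   # foreground start, writing a debug log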

What I don't understand is that the CT does not boot from the drives I have been tinkering with. I guess one solution is to remove the mount point (does that erase data?), but I would like to repair the volumes if possible.

Thanks.
 
After a healthy amount of sleep I was able to find the quite obvious answer. First, I guess the right way to address the LV is via /dev/VG/LV and not /dev/mapper/LV. I also ran
Code:
root@pve:~# tune2fs -f -E clear_mmp {device}
tune2fs 1.43.4 (31-Jan-2017)
tune2fs: No such file or directory while trying to open {device}
Couldn't find valid filesystem superblock.
root@pve:~# tune2fs -f -E clear_mmp /dev/Filme/vm-101-disk-1
tune2fs 1.43.4 (31-Jan-2017)
root@pve:~# e2fsck -f /dev/Filme/vm-101-disk-1
e2fsck 1.43.4 (31-Jan-2017)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/Filme/vm-101-disk-1: 1754/244187136 files (6.2% non-contiguous), 918059726/976748544 blocks
root@pve:~# lvresize -l +100%FREE Filme/vm-101-disk-1
  Size of logical volume Filme/vm-101-disk-1 changed from 7.28 TiB (1907200 extents) to 7.28 TiB (1907722 extents).
  Logical volume Filme/vm-101-disk-1 successfully resized.
root@pve:~# resize2fs /dev/Filme/vm-101-disk-1
resize2fs 1.43.4 (31-Jan-2017)
Resizing the filesystem on /dev/Filme/vm-101-disk-1 to 1953507328 (4k) blocks.
The filesystem on /dev/Filme/vm-101-disk-1 is now 1953507328 (4k) blocks long.
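In hindsight, as long as the VG actually has free space, the whole e2fsck/lvresize/resize2fs sequence can apparently be done in one step from the Proxmox side (this should be what the "Resize disk" button does); a sketch:
Code:
pct resize 101 mp1 +3724G   # grows the LV and the filesystem on it in one go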
Now the only remaining problem is that the Proxmox web interface still shows the old 4 TB, while running df inside the CT shows the right size (8 TB). Guess I'll just ignore that.
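If anyone wants to fix the cosmetic mismatch: as far as I can tell, the GUI shows the size= value stored in the container config, so either a rescan (on pct versions that have it) or editing the config by hand should bring it in line. A sketch, with hypothetical path and values:
Code:
pct rescan                     # updates recorded disk sizes, if your pct supports it
# or adjust size= manually in /etc/pve/lxc/101.conf, e.g.:
# mp1: Filme:vm-101-disk-1,mp=/mnt/filme,size=7452G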
 
