This first paragraph explains what I just did. It might not be relevant, but I don't want any information to be missing. I have a CT with a Root Disk and 3 mount points. I wanted to merge the volume groups of the drives behind mp1 and mp3, so I deleted the logical volume of one of the volume groups and merged them. I then expanded the remaining logical volume using:
Code:
lvresize -L +3724G Filme/vm-101-disk-1
with "Filme" being the VG and "vm-101-disk-1" being the LV.
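For completeness, the merge itself went roughly like this. This is a sketch from memory; "Backup" and "vm-101-disk-2" stand in for the second VG's and LV's real names, which I didn't write down:
Code:
# remove the old LV on the second VG (names here are placeholders; this destroys its data)
lvremove /dev/Backup/vm-101-disk-2
# the source VG must be inactive before it can be merged
vgchange -an Backup
# merge the second VG into "Filme"
vgmerge Filme Backup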
Now, in the web interface under Datacenter -> pve -> Filme, it shows the new, bigger size. The mount point still has the old size, though, and the "Resize disk" button just tells me that there is no space left. I'm fairly sure that at this point the CT still started. I then found this article and ran the following against the mapped device (device-mapper doubles the dashes in the VG/LV names):
Code:
resize2fs /dev/mapper/Filme-vm--101--disk--1
which resulted in:
Code:
root@pve:~# resize2fs /dev/mapper/Filme-vm--101--disk--1
resize2fs 1.43.4 (31-Jan-2017)
Please run 'e2fsck -f /dev/mapper/Filme-vm--101--disk--1' first.
root@pve:~# e2fsck -f /dev/mapper/Filme-vm--101--disk--1
e2fsck 1.43.4 (31-Jan-2017)
e2fsck: MMP: fsck being run while checking MMP block
MMP check failed: If you are sure the filesystem is not in use on any node, run:
'tune2fs -f -E clear_mmp {device}'
MMP error info: last update: Tue Jan 30 23:25:05 2018
node: pve device: /dev/mapper/Filme-vm--101--disk-
/dev/mapper/Filme-vm--101--disk--1: ********** WARNING: Filesystem still has errors **********
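The e2fsck output itself suggests the way out, so this is the sequence I would try next; a sketch assuming the CT is fully stopped and nothing else has the filesystem mounted:
Code:
# make sure nothing is using the filesystem
pct stop 101
# clear the stale MMP block, exactly as e2fsck suggests (only safe if the FS is really not in use)
tune2fs -f -E clear_mmp /dev/mapper/Filme-vm--101--disk--1
# re-run the full check, then grow the filesystem into the enlarged LV
e2fsck -f /dev/mapper/Filme-vm--101--disk--1
resize2fs /dev/mapper/Filme-vm--101--disk--1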
Now, every time I want to start the CT, I get this:
Code:
root@pve:~# systemctl start pve-container@101
Job for pve-container@101.service failed because the control process exited with error code.
See "systemctl status pve-container@101.service" and "journalctl -xe" for details.
root@pve:~# systemctl status pve-container@101.service
● pve-container@101.service - PVE LXC Container: 101
Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2018-01-30 23:56:39 CET; 12s ago
Docs: man:lxc-start
man:lxc
man:pct
Process: 2165 ExecStart=/usr/bin/lxc-start -n 101 (code=exited, status=1/FAILURE)
Jan 30 23:56:37 pve systemd[1]: Starting PVE LXC Container: 101...
Jan 30 23:56:39 pve lxc-start[2165]: lxc-start: 101: lxccontainer.c: wait_on_daemonized_start: 751 No such file or dir
Jan 30 23:56:39 pve lxc-start[2165]: lxc-start: 101: tools/lxc_start.c: main: 371 The container failed to start.
Jan 30 23:56:39 pve lxc-start[2165]: lxc-start: 101: tools/lxc_start.c: main: 373 To get more details, run the contain
Jan 30 23:56:39 pve lxc-start[2165]: lxc-start: 101: tools/lxc_start.c: main: 375 Additional information can be obtain
Jan 30 23:56:39 pve systemd[1]: pve-container@101.service: Control process exited, code=exited status=1
Jan 30 23:56:39 pve systemd[1]: Failed to start PVE LXC Container: 101.
Jan 30 23:56:39 pve systemd[1]: pve-container@101.service: Unit entered failed state.
Jan 30 23:56:39 pve systemd[1]: pve-container@101.service: Failed with result 'exit-code'.
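The truncated journal lines point at foreground mode for details, so a debug run along these lines should reveal the actual error (the log path is arbitrary):
Code:
# start the container in the foreground with debug logging
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log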
What I don't understand is why the CT fails to start at all: it does not boot from the drives I have been tinkering with. I guess one solution would be to remove the mount point (does that erase the data?), but I would rather repair the volumes, if possible.
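For reference, this is how I understand the detach would work; mp3 is just the example here, and as far as I can tell deleting the mpX entry only detaches the volume from the CT rather than wiping the LV, but I'd like that confirmed:
Code:
# see which mount points reference which volumes
pct config 101
# detach the mount point from the container config (the LV and its data should remain)
pct set 101 --delete mp3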
Thanks.