Unable to start container turns into unable to recover container

alexgeddylfson

New Member
Feb 21, 2024
Hello. I am running PVE 8.1.4. I upgraded the memory in my server, and after powering it back on I noticed that one of my LVM-thin pools was not showing up. After a little troubleshooting I modified "/etc/lvm/lvm.conf" to add "thin_check_options = [ "-q", "--skip-mappings" ]". This brought my LV back, but one specific container would not start. After piddling with a few other possible solutions, it no longer exists as a container, nor will it let me recover it from my PBS. I have tried all sorts of ways to recover it, even using different storage methods and different CT numbers. I'm going to sort of trauma dump all the issues I faced, in order as best I remember, to see if there is any hope besides rebuilding some containers.

TASK ERROR: activating LV 'NASDUMP/NASDUMP' failed: Activation of logical volume NASDUMP/NASDUMP is prohibited while logical volume NASDUMP/NASDUMP_tmeta is active.

Which I solved with this:
lvchange -an NASDUMP/NASDUMP_tmeta
lvchange -an NASDUMP/NASDUMP_tdata
lvchange -ay NASDUMP/NASDUMP


This is the error the container was giving me when attempting to start it
run_buffer: 322 Script exited with status 32
lxc_init: 844 Failed to run lxc.hook.pre-start for container "101"
__lxc_start: 2027 Failed to initialize container "101"
TASK ERROR: startup for container '101' failed

And this is currently where I am when trying to restore it. I have plenty of space on the target LV; not sure where to look next.

recovering backed-up configuration from 'Backup:backup/ct/101/2024-02-19T06:00:00Z'
Logical volume "vm-101-disk-0" created.
Creating filesystem with 783548416 4k blocks and 195887104 inodes
Filesystem UUID: 8a1f9276-776e-492f-a6be-41d2f06032c3
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
mount: /var/lib/lxc/101/rootfs: wrong fs type, bad option, bad superblock on /dev/mapper/NASDUMP-vm--101--disk--0, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.
mounting container failed
Logical volume "vm-101-disk-0" successfully removed.
TASK ERROR: unable to restore CT 101 - command 'mount -o noacl /dev/dm-28 /var/lib/lxc/101/rootfs//' failed: exit code 32
 
This looks like this bug: https://bugzilla.proxmox.com/show_bug.cgi?id=4846

TASK ERROR: unable to restore CT 101 - command 'mount -o noacl /dev/dm-28 /var/lib/lxc/101/rootfs//' failed: exit code 32
This error occurs because of the outdated noacl mount option, which newer kernels no longer accept. It can be avoided by removing the option from the container config for now, until the bug is fixed.
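For reference, the option lives on the rootfs line of the backed-up container config. A config with ACLs explicitly disabled looks roughly like this (storage name, disk name, and size are made-up examples):
Code:
rootfs: local-lvm:vm-101-disk-0,acl=0,size=16G

After removing the option, the line should read:
Code:
rootfs: local-lvm:vm-101-disk-0,size=16G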

First locate and extract the container backup. Here is an example for a .tar.zst backup file on local storage:
Code:
zstd -d /var/lib/vz/dump/vzdump-lxc-105-2024_02_21-09_28_50.tar.zst
mkdir /tmp/vzdump105
tar -xf /var/lib/vz/dump/vzdump-lxc-105-2024_02_21-09_28_50.tar -C /tmp/vzdump105
rm /var/lib/vz/dump/vzdump-lxc-105-2024_02_21-09_28_50.tar

Now edit the container config in /tmp/vzdump105/etc/vzdump/pct.conf and remove acl=0 from the rootfs line.
This will not enable ACLs since they are already disabled by default.
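If you prefer not to edit the file by hand, the same change can be scripted with sed. Here is a minimal, self-contained sketch; the sample pct.conf below is invented for illustration, not taken from a real backup:

```shell
# Set up a scratch copy of the extracted backup layout with a sample config
# (the rootfs line here is a hypothetical example)
mkdir -p /tmp/vzdump-demo/etc/vzdump
cat > /tmp/vzdump-demo/etc/vzdump/pct.conf <<'EOF'
arch: amd64
hostname: demo-ct
rootfs: local-lvm:vm-105-disk-0,acl=0,size=16G
EOF

# Strip the acl=0 option (and its separating comma) from the rootfs line only
sed -i 's/^\(rootfs:.*\),acl=0/\1/' /tmp/vzdump-demo/etc/vzdump/pct.conf

cat /tmp/vzdump-demo/etc/vzdump/pct.conf
```

Point the same sed expression at /tmp/vzdump105/etc/vzdump/pct.conf to apply it to a real extracted backup.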

After that the backup can be archived and compressed again:
Code:
cd /tmp/vzdump105
tar -cf vzdump-lxc-105-2024_02_21-09_28_50.tar .
zstd -z vzdump-lxc-105-2024_02_21-09_28_50.tar
mv vzdump-lxc-105-2024_02_21-09_28_50.tar.zst /var/lib/vz/dump/vzdump-lxc-105-2024_02_21-09_28_50.tar.zst

Now the container can be restored:
Code:
pct restore 105 local:backup/vzdump-lxc-105-2024_02_21-09_28_50.tar.zst

EDIT: There is a much easier way.
Just overwrite acl=0 by passing the --rootfs option to pct restore
Code:
pct restore 101 Backup:backup/ct/101/2024-02-19T06:00:00Z --rootfs local:8
local:8 means that the root file system should be placed on an 8GB disk on the local storage. Adjust this to your needs.

This restores the container from the backup while overwriting the rootfs line in the container config.
 
Thank you so very much! Currently restoring! I will report back when it is hopefully back up and running. So far, way further progress than before.

I had seen references to that bug in posts I had looked at, but none with the solution to overwrite that rootfs line. Very interesting.
 
