LXC fails to start, backup fails to restore

prad81

Sep 9, 2023
I used the tteck script to get Trilium installed and running, and it was working well. I then started to receive I/O errors on a VM, which I managed to resolve, but now this container fails to start. In the task history I can see an error stating "Error: startup for container '116' failed", and the contents are:

run_buffer: 322 Script exited with status 25
lxc_init: 844 Failed to run lxc.hook.pre-start for container "116"
__lxc_start: 2027 Failed to initialize container "116"
TASK ERROR: startup for container '116' failed

In the console it states "lxc-console: 116: ../src/lxc/tools/lxc_console.c: main: 129 116 is not running"
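
From what I've read, a foreground debug start should show exactly why the pre-start hook exits with status 25. A sketch (assuming the stock LXC tooling on the node; the log path is arbitrary):

Code:
# start container 116 in the foreground with debug logging
lxc-start -n 116 -F -l DEBUG -o /tmp/lxc-116.log
# alternatively, have Proxmox print the hook output
pct start 116 --debug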

Does anyone know how to resolve this?

Trying to restore from backup, I'm receiving:
recovering backed-up configuration from 'local:backup/vzdump-lxc-116-2023_09_10-02_41_34.tar.zst'
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "vm-116-disk-1" created.
WARNING: Sum of all thin volume sizes (192.00 GiB) exceeds the size of thin pool pve/data and the amount of free space in volume group (16.00 GiB).
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 730a0ed1-cb69-4491-851e-0f40aebe4733
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Logical volume "vm-116-disk-0" successfully removed
restoring 'local:backup/vzdump-lxc-116-2023_09_10-02_41_34.tar.zst' now..
extracting archive '/var/lib/vz/dump/vzdump-lxc-116-2023_09_10-02_41_34.tar.zst'
Total bytes read: 983664640 (939MiB, 270MiB/s)
Detected container architecture: amd64
merging backed-up and given configuration..
Logical volume "vm-116-disk-1" successfully removed
TASK ERROR: unable to restore CT 116 - unsupported debian version '12.1'

From what I'm seeing this indicates that I've run out of space, but df -h on the node says I should have plenty:
Filesystem Size Used Avail Use% Mounted on
udev 3.8G 0 3.8G 0% /dev
tmpfs 783M 1.6M 781M 1% /run
/dev/mapper/pve-root 68G 6.3G 59G 10% /
tmpfs 3.9G 66M 3.8G 2% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda2 1022M 344K 1022M 1% /boot/efi
/dev/fuse 128M 36K 128M 1% /etc/pve
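
Edit: since df -h only shows mounted filesystems and not LVM-thin usage, I understand the pool itself can be checked like this (a sketch, assuming the default pve/data thin pool):

Code:
# data/metadata usage of the thin pool and its volumes
lvs -o lv_name,lv_size,data_percent,metadata_percent pve
# remaining free space in the volume group
vgs pve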


After trying to resolve this, I no longer have access to 116 apart from the backup that's on local-lvm.
 
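neobin

That 'unsupported debian version' error is thrown by pve-container on the host, not by the backup itself, so your package versions matter here. Can you post the output of pveversion -v?
 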
Hi, my pveversion -v output is:
Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-3
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.6.3
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20221111-1
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

and my pve-container is 4.4-3. I haven't run an update; I was going to build out a similar system and do a trial upgrade in the coming weeks to make sure everything goes smoothly, but I'm still on 7 without any updates.
 
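neobin

pve-container 4.4-3 is the culprit: its Debian version check does not accept the 12.1 point release, which would explain both the failing pre-start hook and the restore error 'unsupported debian version'. The container (on Debian 12.1) and your un-updated PVE 7 host have gotten out of sync; this was fixed in a later pve-container update. Updating the host should resolve it, roughly like this (a sketch, assuming your PVE 7 repositories are configured):

Code:
apt update
apt dist-upgrade                       # should pull in a newer pve-container
pveversion -v | grep pve-container     # confirm the new version
 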
Ahh right, thanks neobin. Would you know why it would have become out of sync? I haven't updated the container or Proxmox, and I was using the application in the container the previous day.
 
Maybe an auto/unattended update? Otherwise I have no clue, sorry.
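
If you want to verify, the backup archive itself should tell you. Something like this (assuming GNU tar with zstd support and the usual vzdump layout, where the container's rootfs sits at the top of the archive):

Code:
cd /var/lib/vz/dump
# Debian release recorded inside the backup
tar --zstd -xOf vzdump-lxc-116-2023_09_10-02_41_34.tar.zst ./etc/debian_version
# recent apt activity, e.g. an unattended upgrade to 12.1
tar --zstd -xOf vzdump-lxc-116-2023_09_10-02_41_34.tar.zst ./var/log/apt/history.log | tail -n 20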
 
