LXC containers won't start anymore

ChrisNett

New Member
Jan 7, 2025
Hello everyone,

I have a Proxmox VE host with three LXC containers (MotionEye / InfluxDB+Grafana / ioBroker). I made a backup in ioBroker, then the LXC crashed and now won't start anymore.

LXC 976

run_buffer: 322 Script exited with status 32
lxc_init: 844 Failed to run lxc.hook.pre-start for container "976"
__lxc_start: 2027 Failed to initialize container "976"
TASK ERROR: startup for container '976' failed

LXC 101

run_buffer: 322 Script exited with status 32
lxc_init: 844 Failed to run lxc.hook.pre-start for container "101"
__lxc_start: 2027 Failed to initialize container "101"
TASK ERROR: startup for container '101' failed

LXC 102

still running

Could it be that too much data has piled up? The backup hadn't run for 7 days, so the local files weren't deleted. Or is something wrong with the disk?
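A quick way to check whether the thin pool really is full would be `lvs -o lv_name,data_percent`. As a sketch, the filter below runs on a captured sample that mirrors the `lvdisplay` numbers further down (on the real host you would pipe `lvs --noheadings -o lv_name,data_percent pve` in instead of the sample text):

```shell
# Sample mimicking `lvs --noheadings -o lv_name,data_percent pve` output;
# the percentages are the ones shown in the lvdisplay dump below.
sample='data 100.00
vm-101-disk-0 89.95
vm-102-disk-0 17.62
vm-976-disk-1 11.15'

# Flag anything at or above 90% fill.
echo "$sample" | awk '$2+0 >= 90 {print $1 " is " $2 "% full"}'
```

With these numbers only the pool itself (`data` at 100.00%) is flagged, which already points at a full thin pool rather than a dead disk.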

What I've already run:

lvdisplay

Code:
root@proxmox:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                qaEOI5-EBn0-ocWD-CJVw-fW97-zKrc-bjPnVe
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-04-24 16:24:33 +0200
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
  
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                zzMdfl-KOGx-AqEn-bYRY-7A02-ffXe-JNA4hU
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-04-24 16:24:33 +0200
  LV Status              available
  # open                 1
  LV Size                58.00 GiB
  Current LE             14848
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
  
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                FAQhhl-qasY-QQJP-SF8c-cVp7-arx0-esnLGV
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2022-04-24 16:24:37 +0200
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <147.38 GiB
  Allocated pool data    100.00%
  Allocated metadata     4.70%
  Current LE             37728
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-101-disk-0
  LV Name                vm-101-disk-0
  VG Name                pve
  LV UUID                PG2Kig-MqpP-NV0J-6vfK-8mxX-LlQY-EZNIzj
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-04-25 18:25:38 +0200
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                8.00 GiB
  Mapped size            89.95%
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-102-disk-0
  LV Name                vm-102-disk-0
  VG Name                pve
  LV UUID                5BW2yr-MsE1-dX2s-60E3-62wA-5Js0-R1Kt7c
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-04-26 10:35:29 +0200
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                8.00 GiB
  Mapped size            17.62%
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-976-disk-1
  LV Name                vm-976-disk-1
  VG Name                pve
  LV UUID                82X3Gi-HxnV-bXiX-87s7-rAUn-wPj6-SqVSZ5
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-05-28 11:47:55 +0200
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                1.03 TiB
  Mapped size            11.15%
  Current LE             271104
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8
  
  --- Logical volume ---
  LV Path                /dev/pve/snap_vm-976-disk-1_ftp
  LV Name                snap_vm-976-disk-1_ftp
  VG Name                pve
  LV UUID                FuUYiY-Ze01-IpFV-hReq-b7A0-KREu-1DjJon
  LV Write Access        read only
  LV Creation host, time proxmox, 2024-01-31 18:02:40 +0100
  LV Pool name           data
  LV Thin origin name    vm-976-disk-1
  LV Status              NOT available
  LV Size                <1.03 TiB
  Current LE             269056
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  
  --- Logical volume ---
  LV Path                /dev/pve/snap_vm-101-disk-0_sicherung
  LV Name                snap_vm-101-disk-0_sicherung
  VG Name                pve
  LV UUID                0G2rni-jRbf-HXFU-XIEe-rNJ5-PCg5-WNNFKM
  LV Write Access        read only
  LV Creation host, time proxmox, 2025-01-06 13:38:39 +0100
  LV Pool name           data
  LV Thin origin name    vm-101-disk-0
  LV Status              NOT available
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
fsck /dev/pve/

Code:
fsck from util-linux 2.36.1
e2fsck 1.46.5 (30-Dec-2021)
fsck.ext2: Is a directory while trying to open /dev/pve

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

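The fsck above was pointed at the directory `/dev/pve` instead of a specific logical volume, which is why e2fsck complains. A sketch of how it could be run against one container disk instead (assuming the container is stopped; thin LVs may need activating first):

```shell
# fsck wants one block device, not the /dev/pve directory.
# Activate the (possibly inactive) thin LV, then check it:
lvchange -ay pve/vm-101-disk-0
fsck.ext4 -f /dev/pve/vm-101-disk-0

# Or let Proxmox pick the right device for the container:
pct fsck 101
```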
What puzzles me a bit is that the LXC is supposed to have 1 TB, but only a 256 GB SSD is installed. Is that the problem? Could I delete a few files inside the LXC so that it starts again?
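The 1 TB is a thin-provisioned (virtual) size: LVM only allocates blocks as they are actually written, so a 1 TiB container disk can live on a 256 GB SSD until the pool runs out of real space. Adding up the virtual sizes from the lvdisplay output above shows how far this setup is overcommitted:

```shell
# Sum of the thin volumes' virtual sizes vs. the real pool size (in GiB),
# numbers copied from the lvdisplay output above:
# vm-101 (8) + vm-102 (8) + vm-976 (1.03 TiB) + ftp snapshot (1.03 TiB)
# + sicherung snapshot (8), against the 147.38 GiB pve/data pool.
awk 'BEGIN {
  pool = 147.38
  vols = 8 + 8 + 1.03*1024 + 1.03*1024 + 8
  printf "overprovisioned %.1fx (%.0f GiB promised on a %.2f GiB pool)\n", vols/pool, vols, pool
}'
```

That is roughly the same sum LVM itself warns about further down ("Sum of all thin volume sizes (<2.09 TiB) exceeds the size of thin pool pve/data").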

Sorry, I set up Proxmox a few years ago following a YouTube video and it has served me well so far, but unfortunately I know far too little about it.

Can you help me?

Christian
 
Code:
root@proxmox:~# lxc-start -lDEBUG -o YOURLOGFILE.log -F -n 101
lxc-start: 101: ../src/lxc/conf.c: run_buffer: 322 Script exited with status 32
lxc-start: 101: ../src/lxc/start.c: lxc_init: 844 Failed to run lxc.hook.pre-start for container "101"
lxc-start: 101: ../src/lxc/start.c: __lxc_start: 2027 Failed to initialize container "101"
lxc-start: 101: ../src/lxc/conf.c: run_buffer: 322 Script exited with status 1
lxc-start: 101: ../src/lxc/start.c: lxc_end: 985 Failed to run lxc.hook.post-stop for container "101"
lxc-start: 101: ../src/lxc/tools/lxc_start.c: main: 306 The container failed to start
lxc-start: 101: ../src/lxc/tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options

Code:
root@proxmox:~# lxc-start -lDEBUG -o YOURLOGFILE.log -F -n 976
lxc-start: 976: ../src/lxc/conf.c: run_buffer: 322 Script exited with status 32
lxc-start: 976: ../src/lxc/start.c: lxc_init: 844 Failed to run lxc.hook.pre-start for container "976"
lxc-start: 976: ../src/lxc/start.c: __lxc_start: 2027 Failed to initialize container "976"
lxc-start: 976: ../src/lxc/conf.c: run_buffer: 322 Script exited with status 1
lxc-start: 976: ../src/lxc/start.c: lxc_end: 985 Failed to run lxc.hook.post-stop for container "976"
lxc-start: 976: ../src/lxc/tools/lxc_start.c: main: 306 The container failed to start
lxc-start: 976: ../src/lxc/tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options

This is where the problem lies, right?
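Exit status 32 is mount(8)'s generic "mount failure" code, so the pre-start hook is most likely failing while mounting the container's rootfs, which would fit a full thin pool or a damaged filesystem. One way to see the real error would be to try the mount by hand (the mount point name here is arbitrary):

```shell
# Activate the thin LV and attempt the mount manually to surface
# the actual kernel error instead of the hook's generic status 32:
lvchange -ay pve/vm-101-disk-0
mkdir -p /mnt/ct101
mount /dev/pve/vm-101-disk-0 /mnt/ct101

# The kernel usually logs the reason for a failed mount:
dmesg | tail
```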


Code:
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume pve/vm-976-disk-1 changed from 1.03 TiB (271104 extents) to <1.04 TiB (271872 extents).
  Logical volume pve/vm-976-disk-1 successfully resized.
  WARNING: Sum of all thin volume sizes (<2.09 TiB) exceeds the size of thin pool pve/data and the size of whole volume group (232.38 GiB).
e2fsck 1.46.5 (30-Dec-2021)
MMP check failed: If you are sure the filesystem is not in use on any node, run:
'tune2fs -f -E clear_mmp /dev/pve/vm-976-disk-1'
MMP_block:
    mmp_magic: 0x4d4d50
    mmp_check_interval: 5
    mmp_sequence: e24d4d50
    mmp_update_date: Tue Jan  7 21:56:22 2025
    mmp_update_time: 1736283382
    mmp_node_name: proxmox
    mmp_device_name: /dev/pve/vm-976-disk-1
e2fsck: MMP: e2fsck being run while checking MMP block

/dev/pve/vm-976-disk-1: ********** WARNING: Filesystem still has errors **********

Failed to update the container's filesystem: command 'e2fsck -f -y /dev/pve/vm-976-disk-1' failed: exit code 12

TASK OK
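Here e2fsck aborts on the MMP (multi-mount protection) block rather than on actual filesystem damage, and the message itself names the fix. Assuming container 976 is definitely stopped and the LV is not mounted anywhere, the sequence would look like this:

```shell
# Clear the stale multi-mount-protection block, as e2fsck suggests.
# Only safe if the filesystem is truly not in use anywhere!
tune2fs -f -E clear_mmp /dev/pve/vm-976-disk-1

# Then repeat the check that failed above:
e2fsck -f -y /dev/pve/vm-976-disk-1
```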
 
Here's some more info:


Code:
root@proxmox:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 232.9G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part
└─sda3                         8:3    0 232.4G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    58G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   1.5G  0 lvm 
  │ └─pve-data-tpool         253:4    0 147.4G  0 lvm 
  │   ├─pve-data             253:5    0 147.4G  1 lvm 
  │   ├─pve-vm--101--disk--0 253:6    0     8G  0 lvm 
  │   ├─pve-vm--102--disk--0 253:7    0     8G  0 lvm 
  │   └─pve-vm--976--disk--1 253:8    0     1T  0 lvm 
  └─pve-data_tdata           253:3    0 147.4G  0 lvm 
    └─pve-data-tpool         253:4    0 147.4G  0 lvm 
      ├─pve-data             253:5    0 147.4G  1 lvm 
      ├─pve-vm--101--disk--0 253:6    0     8G  0 lvm 
      ├─pve-vm--102--disk--0 253:7    0     8G  0 lvm 
      └─pve-vm--976--disk--1 253:8    0     1T  0 lvm
 
