After rebooting the node, the LXC container does not boot

ssv771
New Member · Nov 21, 2019
It does not give any errors at startup: the container reports a successful start and then immediately stops.

pve-manager/6.0-11/2140ef37

I assume the problem is with the file system.
pct mount 104
mounted CT 104 in '/var/lib/lxc/104/rootfs'
root@pm:~# chroot /var/lib/lxc/104/rootfs
Segmentation fault

The file system is on LVM (/dev/pve/vm-104-disk-0).
How can I fix it?


I tried this:

root@pm:~# fsck.ext4 -v /dev/pve/vm-104-disk-0
e2fsck 1.44.5 (15-Dec-2018)
MMP interval is 5 seconds and total wait time is 22 seconds. Please wait...
/dev/pve/vm-104-disk-0: clean, 512785/16384000 files, 42247360/65536000 blocks
root@pm:~# fsck.ext4 -vf /dev/pve/vm-104-disk-0
e2fsck 1.44.5 (15-Dec-2018)
MMP interval is 5 seconds and total wait time is 22 seconds. Please wait...
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

512785 inodes used (3.13%, out of 16384000)
3587 non-contiguous files (0.7%)
267 non-contiguous directories (0.1%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 504932/253
42247360 blocks used (64.46%, out of 65536000)
0 bad blocks
2 large files

467036 regular files
37979 directories
2 character device files
0 block device files
0 fifos
101 links
7725 symbolic links (7556 fast symbolic links)
34 sockets
------------
512877 files
root@pm:~# pct fsck 104
fsck from util-linux 2.33.1
/dev/mapper/pve-vm--104--disk--0: clean, 512785/16384000 files, 42247360/65536000 blocks
 
proxmox-ve: 6.0-2 (running kernel: 5.0.21-4-pve)
pve-manager: 6.0-11 (running version: 6.0-11/2140ef37)
pve-kernel-helper: 6.0-11
pve-kernel-5.0: 6.0-10
pve-kernel-5.0.21-4-pve: 5.0.21-8
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-3
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-6
libpve-guest-common-perl: 3.0-2
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-8
pve-cluster: 6.0-7
pve-container: 3.0-10
pve-docs: 6.0-8
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-4
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-4
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-13
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2

arch: amd64
cores: 1
hostname: hs2
memory: 4096
nameserver: 127.0.0.1
net0: bridge=vmbr1,name=eth0,ip=XXX.XXX.XX.XX/29,gw=XXX.XXX.XX.XX,firewall=1
onboot: 1
ostype: debian
rootfs: local-lvm:vm-104-disk-0,size=250G
swap: 4096
 


Before the crash, updates were installed on the system:
status installed libc-bin:amd64 2.24-11+deb9u4
The system inside the LXC container:
Linux XX 5.0.21-4-pve #1 SMP PVE 5.0.21-8 (Wed, 23 Oct 2019 17:49:13 +0200) x86_64 GNU/Linux
"Debian GNU/Linux 9 (stretch)"
 
hi,


* To see why chroot segfaults (it's probably /bin/bash inside the container that is segfaulting), you can use strace and gdb to debug it.
* You could also try to run /bin/bash from the container rootfs on your PVE host and see if it segfaults again.
* Check the dmesg output while starting/stopping the container for hints.
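The steps above could look like this on the host. This is a sketch: CT ID 104 and the rootfs path are taken from this thread, while the log file names under /tmp are just examples.

```shell
# Debugging sketch for the suggestions above. CT ID (104) and the rootfs
# path come from this thread; the /tmp file names are examples.
pct mount 104

# 1) trace the segfaulting chroot; the tail of the trace shows the last
#    syscalls (typically the library being mapped) before the crash
strace -f -o /tmp/chroot-104.trace chroot /var/lib/lxc/104/rootfs /bin/bash
tail -n 25 /tmp/chroot-104.trace
pct unmount 104

# 2) start the container in the foreground with full LXC debug logging
lxc-start -n 104 -F -l DEBUG -o /tmp/lxc-104.log

# 3) watch the kernel log while starting/stopping the container
dmesg --follow
```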
 
hi!
In dmesg:
[168638.480592] init[59329]: segfault at 0 ip 0000000000000000 sp 00007ffeecb147a8 error 14 in systemd[55f23b3ba000+ed000]

According to strace, the last thing loaded was libc.so.6
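Since dmesg shows systemd/init segfaulting right after the libc-bin upgrade in the log above, one possible repair (a sketch, not a verified fix) is to unpack a known-good libc6 package over the mounted rootfs from the host, which bypasses the broken chroot. The package version matches the upgrade log from this thread; the mirror URL is an assumption (superseded versions may only be available on snapshot.debian.org), so adjust the filename to whatever the mirror actually carries.

```shell
# HYPOTHETICAL repair sketch: overwrite the container's (presumably
# corrupted) glibc with a freshly unpacked copy. CT ID and the libc
# version come from this thread; the download URL is an assumption.
pct mount 104
cd /tmp
wget http://deb.debian.org/debian/pool/main/g/glibc/libc6_2.24-11+deb9u4_amd64.deb
# dpkg-deb -x unpacks the package's files without running maintainer
# scripts, so it works even though chroot into the rootfs segfaults
dpkg-deb -x libc6_2.24-11+deb9u4_amd64.deb /var/lib/lxc/104/rootfs/
pct unmount 104
pct start 104
```

If the container then boots, finish with a proper `apt install --reinstall libc6` inside it so dpkg's database matches the files on disk.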