Container won't start after restore

ikecomp
My main Proxmox drive crashed right after throwing some SMART errors, so I did a fresh install on a new drive. Everything is up and running at the moment, but one of my Linux containers won't start after being restored. I was running the latest 7.x version before installing 8.0.3 today, so the backups are from that version.
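
For context, the container was restored from its vzdump backup in the usual way; roughly something like the command below (the archive path is just a placeholder, and the restore may just as well have been done through the GUI):

Code:
# restore CT 300 from a vzdump archive onto the fresh install
# (placeholder archive path; adjust to the actual backup file)
pct restore 300 /var/lib/vz/dump/vzdump-lxc-300-2023_08_30-00_00_00.tar.zst --storage local-lvm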

I started the container with debug logging to generate a log file and attached it here. Hoping someone could toss me a bone on what could be causing the issue. I didn't see anything extremely obvious in the log except maybe this portion:


Code:
lxc-start 300 20230830222327.970 NOTICE   start - ../src/lxc/start.c:post_start:2205 - Started "/sbin/init" with pid "7450"
lxc-start 300 20230830222327.970 NOTICE   start - ../src/lxc/start.c:signal_handler:446 - Received 17 from pid 7446 instead of container init 7450
lxc-start 300 20230830222327.971 DEBUG    start - ../src/lxc/start.c:signal_handler:464 - Container init process 7450 exited
lxc-start 300 20230830222327.971 DEBUG    start - ../src/lxc/start.c:__lxc_start:2147 - Illegal instruction(4) - Container "300" init exited
lxc-start 300 20230830222327.971 INFO     error - ../src/lxc/error.c:lxc_error_set_and_log:34 - Child <7450> ended on signal Illegal instruction(4)
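
(For reference, a debug log like the attached one is typically produced by starting the container in the foreground with debug logging; the exact flags below are an assumption, not necessarily what was run:)

Code:
# start CT 300 in the foreground with debug logging written to a file
lxc-start -n 300 -F -l DEBUG -o lxc_300.txt

# or via the Proxmox wrapper
pct start 300 --debug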



Any help is appreciated.
 

Attachments

  • lxc_300.txt (17.1 KB)
Posting the pveversion and 300.conf in case anyone asks


Code:
proxmox-ve: 8.0.2 (running kernel: 6.2.16-10-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
proxmox-kernel-6.2.16-10-pve: 6.2.16-10
proxmox-kernel-6.2: 6.2.16-10
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.5
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.8
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-4
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

Code:
arch: amd64
cores: 1
hostname: unifi
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=B6:11:78:F1:5C:20,ip=192.168.1.165/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-300-disk-0,size=10G
swap: 512
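
(For reference, the standard way to pull both of these on the host:)

Code:
pveversion -v
cat /etc/pve/lxc/300.conf    # or: pct config 300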
 
Hi,
please try to see if you can get some more information on why the container's init process fails.

You can mount the root filesystem of the container on the host by running pct mount 300, given you have VMID 300 in your case. This will mount the rootfs at /var/lib/lxc/300/rootfs. To get to the journal, perform a chroot /var/lib/lxc/300/rootfs and inspect the logs in reverse order via journalctl -r.
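
In command form, the whole sequence looks roughly like this (VMID 300 as in your case):

Code:
pct mount 300                     # mounts the rootfs at /var/lib/lxc/300/rootfs
chroot /var/lib/lxc/300/rootfs    # switch into the container's root filesystem
journalctl -r                     # inspect the journal, newest entries first
exit                              # leave the chroot again
pct unmount 300                   # unmount the rootfs when done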

Edit: Note that the Illegal instruction(4) might be related to the bugfixes for the latest CPU vulnerabilities, so make sure to install the latest microcode and firmware updates on your host system.
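
On a Debian Bookworm based host that usually boils down to something like the following; whether the non-free-firmware component is already enabled and which vendor package applies is an assumption about your setup:

Code:
# enable the non-free-firmware component in your APT sources if needed, then:
apt update
apt install intel-microcode    # on Intel CPUs
apt install amd64-microcode    # on AMD CPUs
# reboot afterwards so the updated microcode is loaded early at boot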
 
Hi Chris

Thanks for taking a look.

I got a segmentation fault performing the chroot command. Is that expected?

Code:
root@pve:~# pct mount 300
mounted CT 300 in '/var/lib/lxc/300/rootfs'
root@pve:~# chroot /var/lib/lxc/300/rootfs
Segmentation fault


Also, as for the Illegal instruction(4): if there were an issue with the firmware/microcode, I would expect it to also affect my other containers. I also created a new container with essentially the same configuration and it works without issue.
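
Since the chroot itself crashes, would reading the container's journal directly from the host work instead? Something like this, assuming persistent journaling is enabled inside the container:

Code:
pct mount 300
journalctl -r -D /var/lib/lxc/300/rootfs/var/log/journal
pct unmount 300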
 
Well, that does not sound good; the segfault should not happen. Try to unmount the filesystem again via pct unmount 300 and run a pct fsck 300, then try to start the container once more with pct start 300 --debug and check whether the output changed.
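
That is, roughly:

Code:
pct unmount 300           # release the rootfs mounted earlier
pct fsck 300              # run a filesystem check on the container's root volume
pct start 300 --debug     # start again with debug output on the console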
 
