[SOLVED] LXC Container Fails To Start

infinityM

Hey Guys,

I hope someone can help me...
I have a Centos LXC Container running on my Proxmox Cluster.

All of a sudden it does not want to start.
When I start it, the task log reports the following:
Job for pve-container@106.service failed because the control process exited with error code.
See "systemctl status pve-container@106.service" and "journalctl -xe" for details.
TASK ERROR: command 'systemctl start pve-container@106' failed: exit code 1

When I do run the status command, I get the following output:

pve-container@106.service - PVE LXC Container: 106
Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2020-03-01 23:24:12 SAST; 2min 1s ago
Docs: man:lxc-start
man:lxc
man:pct
Process: 3919 ExecStart=/usr/bin/lxc-start -n 106 (code=exited, status=1/FAILURE)

Mar 01 23:24:10 c4 systemd[1]: Starting PVE LXC Container: 106...
Mar 01 23:24:12 c4 lxc-start[3919]: lxc-start: 106: lxccontainer.c: wait_on_daemonized_start: 865 No such file or directory - Failed to receive the container status
Mar 01 23:24:12 c4 lxc-start[3919]: lxc-start: 106: tools/lxc_start.c: main: 329 The container failed to start
Mar 01 23:24:12 c4 lxc-start[3919]: lxc-start: 106: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode
Mar 01 23:24:12 c4 lxc-start[3919]: lxc-start: 106: tools/lxc_start.c: main: 335 Additional information can be obtained by setting the --logfile and --logpriority options
Mar 01 23:24:12 c4 systemd[1]: pve-container@106.service: Control process exited, code=exited, status=1/FAILURE
Mar 01 23:24:12 c4 systemd[1]: pve-container@106.service: Failed with result 'exit-code'.
Mar 01 23:24:12 c4 systemd[1]: Failed to start PVE LXC Container: 106

Then I noticed another command in one of the threads that provides a bit more info, but I'm not sure how to interpret the output...

Command: lxc-start -n 106 -F
Output:
lxc-start: 106: conf.c: run_buffer: 352 Script exited with status 25
lxc-start: 106: start.c: lxc_init: 897 Failed to run lxc.hook.pre-start for container "106"
lxc-start: 106: start.c: __lxc_start: 2032 Failed to initialize container "106"
Segmentation fault
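
The error output itself suggests re-running with a log file and a higher log priority. A minimal sketch of what that could look like on the Proxmox host (container ID 106 as in the thread; the log path is just an example):

```shell
# Run the container in the foreground with verbose LXC logging:
#   -F  keep it in the foreground so errors print to the terminal
#   -l  log priority (DEBUG is the most verbose)
#   -o  write the log to a file for later inspection
lxc-start -n 106 -F -l DEBUG -o /tmp/lxc-106.log

# Afterwards, look for the first ERROR lines in the log:
grep -n 'ERROR' /tmp/lxc-106.log | head
```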

Does anyone have any advice for me?
 
Damn, I thought I was special. I'm getting the same error. I can create a new Debian file server container just fine, but as soon as I pass block storage to it via the CLI and then try to restart it, I get the same error. I've been scratching my head since yesterday.

I'll add that I CAN remove the mount point I added to the LXC container via the CLI, and THEN the container will start, but not with the mount point added. I see the same error when trying to start the container with the mount point:
865 No such file or directory - Failed to receive the container status
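
For reference, the kind of CLI mount point change described above could look something like this (the host path, guest path, and mount point index are hypothetical examples, assuming a standard Proxmox `pct` setup):

```shell
# Add a bind mount point to container 106 (hypothetical paths):
pct set 106 -mp0 /mnt/blockstore,mp=/data

# If the container then refuses to start, removing the mount point
# again lets it boot, as described above:
pct set 106 --delete mp0
```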
 
hi,

we can help you better if you provide the following info:

* pveversion -v
* pct config CTID
* debug logs from CT start[0] (attach/paste the file with the debugging log)
* also maybe the contents of /etc/pve/storage.cfg



[0]: https://pve.proxmox.com/wiki/Linux_Container#_obtaining_debugging_logs
Hey bud, thank you for the quick reply...

After struggling for about an hour, I read an article from a while back where someone said they updated Proxmox and it resolved the issue...
So I did the same, and voila.

I will say this: if I migrate the LXC to another server which has not been updated yet, it gives me the exact same error...

So I assume there's a bug somewhere...
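
Since the error only showed up on nodes that hadn't been updated yet, one way to confirm a version mismatch between cluster nodes is to compare the full package version list. A sketch (the second node name `c5` is a hypothetical example; `c4` appears in the logs above):

```shell
# Record the full version list from two nodes and diff them:
ssh c4 pveversion -v > /tmp/c4-versions.txt
ssh c5 pveversion -v > /tmp/c5-versions.txt
diff /tmp/c4-versions.txt /tmp/c5-versions.txt
```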
 
On ZFS I was able to resolve this by manually mounting the LXC ZFS vols:

zfs mount rpool/data/subvol-100-disk-0
zfs mount rpool/data/subvol-102-disk-0

I still need to figure out why these don't mount automatically at boot, though.
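
A common reason a ZFS dataset fails to mount at boot is a non-empty mountpoint directory left behind, since ZFS refuses to mount over existing files without the overlay option. A sketch of how one might check, using the dataset names from the post above:

```shell
# Check whether the dataset is mounted and where it should mount:
zfs get mounted,mountpoint rpool/data/subvol-100-disk-0

# While it is unmounted, look for leftover files blocking the mount:
ls -la /rpool/data/subvol-100-disk-0

# Try mounting everything that should be mounted, and watch for errors:
zfs mount -a
```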
 
It was happening because of the server it was on. That server was not able to update properly, which meant that if a VPS was moved there and then shut down for a reboot, it never came back up, and you could not migrate it away...
I found a work-around: just shutting down that server so HA would migrate the containers away.

I did fix the update issue thanks to another thread :)
 