After kernel upgrade, many containers do not start

Virtualizer

I upgraded to the new kernel today and massive problems began! Before, everything was fine!

uname -a
Linux pm20-host 4.13.13-5-pve #1 SMP PVE 4.13.13-38 (Fri, 26 Jan 2018 10:47:09 +0100) x86_64 GNU/Linux

Many containers start normally, but others can't start, and I see this in the log file:

Jan 27 09:18:50 pm20-host systemd-udevd[7203]: Could not generate persistent MAC address for vethEA453P: No such file or directory
Jan 27 09:18:50 pm20-host systemd-udevd[7274]: Could not generate persistent MAC address for fwbr1226i0: No such file or directory
Jan 27 09:18:50 pm20-host systemd-udevd[7283]: Could not generate persistent MAC address for fwpr1226p0: No such file or directory
Jan 27 09:18:50 pm20-host systemd-udevd[7284]: Could not generate persistent MAC address for fwln1226i0: No such file or directory
Jan 27 09:18:50 pm20-host lxc-start[7154]: lxc-start: 1226: lxccontainer.c: wait_on_daemonized_start: 760 Received container state "STOPPING" instead of "RUNNING"
Jan 27 09:18:50 pm20-host lxc-start[7154]: lxc-start: 1226: tools/lxc_start.c: main: 371 The container failed to start.
Jan 27 09:18:50 pm20-host lxc-start[7154]: lxc-start: 1226: tools/lxc_start.c: main: 373 To get more details, run the container in foreground mode.
Jan 27 09:18:50 pm20-host lxc-start[7154]: lxc-start: 1226: tools/lxc_start.c: main: 375 Additional information can be obtained by setting the --logfile and --logpriority options.
Jan 27 09:18:50 pm20-host systemd[1]: pve-container@1226.service: Control process exited, code=exited status=1
Jan 27 09:18:50 pm20-host systemd[1]: pve-container@1226.service: Killing process 7156 (lxc-start) with signal SIGKILL.
Jan 27 09:18:50 pm20-host systemd[1]: pve-container@1226.service: Killing process 7390 (sh) with signal SIGKILL.
Jan 27 09:18:50 pm20-host systemd[1]: Failed to start PVE LXC Container: 1226.
Jan 27 09:18:50 pm20-host systemd[1]: pve-container@1226.service: Unit entered failed state.
Jan 27 09:18:50 pm20-host systemd[1]: pve-container@1226.service: Failed with result 'exit-code'.
Jan 27 09:18:50 pm20-host pve-guests[7149]: command 'systemctl start pve-container@1226' failed: exit code 1
Jan 27 09:18:50 pm20-host pve-guests[7403]: starting CT 1235: UPID:pm20-host:00001CEB:0000857D:5A6C35EA:vzstart:1235:root@pam:
Jan 27 09:18:50 pm20-host systemd[1]: Starting PVE LXC Container: 1235...
Jan 27 09:18:51 pm20-host systemd-udevd[8196]: Could not generate persistent MAC address for vethENT2FR: No such file or directory
Jan 27 09:18:51 pm20-host systemd-udevd[8338]: Could not generate persistent MAC address for fwbr1235i0: No such file or directory
Jan 27 09:18:51 pm20-host systemd-udevd[8350]: Could not generate persistent MAC address for fwpr1235p0: No such file or directory
Jan 27 09:18:51 pm20-host systemd-udevd[8351]: Could not generate persistent MAC address for fwln1235i0: No such file or directory
Jan 27 09:18:51 pm20-host lxc-start[7406]: lxc-start: 1235: lxccontainer.c: wait_on_daemonized_start: 760 Received container state "STOPPING" instead of "RUNNING"
Jan 27 09:18:51 pm20-host lxc-start[7406]: lxc-start: 1235: tools/lxc_start.c: main: 371 The container failed to start.
Jan 27 09:18:51 pm20-host lxc-start[7406]: lxc-start: 1235: tools/lxc_start.c: main: 373 To get more details, run the container in foreground mode.
Jan 27 09:18:51 pm20-host lxc-start[7406]: lxc-start: 1235: tools/lxc_start.c: main: 375 Additional information can be obtained by setting the --logfile and --logpriority options.
Jan 27 09:18:51 pm20-host systemd[1]: pve-container@1235.service: Control process exited, code=exited status=1
Jan 27 09:18:51 pm20-host systemd[1]: pve-container@1235.service: Killing process 7408 (lxc-start) with signal SIGKILL.
Jan 27 09:18:51 pm20-host systemd[1]: pve-container@1235.service: Killing process 8451 (sh) with signal SIGKILL.
Jan 27 09:18:51 pm20-host systemd[1]: Failed to start PVE LXC Container: 1235.
Jan 27 09:18:51 pm20-host systemd[1]: pve-container@1235.service: Unit entered failed state.
Jan 27 09:18:51 pm20-host systemd[1]: pve-container@1235.service: Failed with result 'exit-code'.
Jan 27 09:18:51 pm20-host pve-guests[7403]: command 'systemctl start pve-container@1235' failed: exit code 1
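
The log itself points at foreground mode for more details. A minimal sketch, assuming CT 1226 and a hypothetical log path /tmp/lxc-1226.log (the --logfile and --logpriority options are exactly what the message refers to):

Code:
# start the failing container in the foreground with debug logging
lxc-start -n 1226 -F -l DEBUG -o /tmp/lxc-1226.log
# then read the log to see why the start actually failed
less /tmp/lxc-1226.log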

So I would like to switch back to the old kernel, but how can I do that? I tried changing the default to 1 in /etc/default/grub and running update-grub, but that does not help!
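
For reference, a minimal sketch of booting an older kernel by default via GRUB, assuming the old 4.13.13-4-pve kernel sits inside the "Advanced options" submenu; the "1>2" index below is only an example, take the real indexes from the list of menu entries:

Code:
# list the menu entries and submenus grub generated
grep -E "menuentry |submenu " /boot/grub/grub.cfg
# in /etc/default/grub set the default to submenu 1, entry 2 (adjust to your list):
#   GRUB_DEFAULT="1>2"
update-grub
reboot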

Thanks

Virtualizer
 
I have now booted into the old kernel and it's exactly the same!

Kernel uname -a
Linux pm20-host 4.13.13-4-pve #1 SMP PVE 4.13.13-35 (Mon, 8 Jan 2018 10:26:58 +0100) x86_64 GNU/Linux

I'll keep searching - does anyone have an idea?
 
So I found a big, nasty new Proxmox bug and a workaround:

Somewhere between PVE 5.x versions, containers stop starting when a speed limit (network interface rate limit) is set in the network configuration! If you run into the same problem from one day to the next, here is the workaround:

In the GUI, for the container that does not start, edit Container -> Network -> Network Interface and clear the rate limit field (leave it blank) so that it reads "unlimited". Do the same for the second and any further interfaces. After that, the container starts again!
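
The same workaround can also be applied from the shell with pct. A minimal sketch, assuming CT 1226 whose net0 line carries a rate= option; the bridge, MAC, and IP shown are placeholders, so copy your own values from pct config and simply leave rate= out:

Code:
# show the current config; net0 might look like:
#   net0: name=eth0,bridge=vmbr0,hwaddr=AA:BB:CC:DD:EE:FF,ip=dhcp,rate=50
pct config 1226
# re-set net0 with the same values but without rate=
pct set 1226 -net0 name=eth0,bridge=vmbr0,hwaddr=AA:BB:CC:DD:EE:FF,ip=dhcp
# the container should start again
pct start 1226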

I hope this helps others too, and I will file a bug report!

Regards

Detlef
 
You can downgrade iproute2 as an interim measure:

Code:
apt-get install --reinstall iproute2=4.10.0-1

Updated libpve-common-perl or iproute2 packages should be available soon.
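
If useful, a minimal sketch for checking that the downgrade took effect and for keeping apt from upgrading the package again until the fixed version is out (the apt-mark hold is optional and must be undone later):

Code:
# confirm the downgraded version is installed
apt-cache policy iproute2
# optionally pin the package until fixed packages are released
apt-mark hold iproute2
# later, when the fix is available:
apt-mark unhold iproute2 && apt-get dist-upgrade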
 
