Unable to shutdown/stop lxc container

TwiX

Active Member
Feb 3, 2015
Hi,

proxmox-ve: 5.2-2 (running kernel: 4.15.18-7-pve)
pve-manager: 5.2-9 (running version: 5.2-9/4b30e8f9)
pve-kernel-4.15: 5.2-10
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.13-1-pve: 4.13.13-31
pve-kernel-4.13.4-1-pve: 4.13.4-26
ceph: 12.2.8-pve1
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-40
libpve-guest-common-perl: 2.0-18
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-30
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-2
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-20
pve-cluster: 5.0-30
pve-container: 2.0-28
pve-docs: 5.2-8
pve-firewall: 3.0-14
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-36
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.11-pve1~bpo1

I first tried to shut down the container, but it timed out (60 s). Then I tried to stop it, which also failed.

Code:
pct list
VMID       Status     Lock         Name
50011      running                 dc-btsip-01

So I decided to kill the related LXC process, but I wasn't able to find it!

Code:
ps aux | grep 50011
root       17754  0.0  0.0  48228  3640 ?        Ss   Jan30   2:31 [lxc monitor] /var/lib/lxc 50011
root     3913615  0.0  0.0  12788   932 pts/2    S+   18:51   0:00 grep 50011
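(For reference: the `[lxc monitor]` PID can be pulled out of such a listing programmatically. A minimal sketch, using the exact line from the output above as sample input; on a live host you would pipe `ps aux` in instead of the here-string:)

```shell
# Extract the PID of the "[lxc monitor]" process for a given container ID.
# The sample line is copied verbatim from the ps listing above.
vmid=50011
sample='root       17754  0.0  0.0  48228  3640 ?        Ss   Jan30   2:31 [lxc monitor] /var/lib/lxc 50011'

# Match the monitor line whose last field is the container ID, print the PID.
pid=$(printf '%s\n' "$sample" | awk -v id="$vmid" '/\[lxc monitor\]/ && $NF == id {print $2}')
echo "$pid"   # -> 17754
```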

The weird thing is that CT 50011 still responds to ping, but I'm not able to enter the console.

What can I do?

Thanks in advance!

Antoine
 

TwiX

I also forgot to mention that the root storage (Ceph) is available.
 

denos

Active Member
Jul 27, 2015
Based on your listing: Yes, 17754 is the process you want to kill if the nice ways of shutting down the container have failed.
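(A sketch of that forced-kill step. Since the real monitor PID is machine-specific, a throwaway `sleep` stands in for PID 17754 here; on the host you would substitute the actual PID:)

```shell
# Stand-in demo: a background `sleep` plays the role of the stuck
# "[lxc monitor]" process (on the real host the PID would be 17754).
sleep 300 &
pid=$!

kill -TERM "$pid"                      # polite request first
sleep 1
kill -KILL "$pid" 2>/dev/null || true  # SIGKILL if it is still around
wait "$pid" 2>/dev/null || true        # reap it so the check below is valid

if ! kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is gone"
fi
```

SIGTERM gives the process a chance to clean up; SIGKILL cannot be caught or ignored, so it is the last resort.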
 

TwiX

Is there a way to shut down CT 50011's veth interfaces?

Code:
 ip a | grep 50011
36: veth50011i0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v50 state UP group default qlen 1000
38: veth50011i1@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v193 state UP group default qlen 1000
 

TwiX

Thanks,

I tried, but:

Code:
ip link delete veth50011i0@if35
Error: argument "veth50011i0@if35" is wrong: "dev" not a valid ifname
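(The error is because the `@if35` part shown by `ip a` is only an annotation for the peer's interface index, not part of the device name; it has to be stripped before the name is passed to `ip link delete`. A minimal sketch:)

```shell
# `ip a` shows "veth50011i0@if35", but "@if35" is just the peer
# ifindex annotation, not part of the device name itself.
shown='veth50011i0@if35'
dev=${shown%%@*}          # strips the annotation -> veth50011i0
echo "$dev"

# On the host this would then be (needs root, interface must exist):
# ip link delete "$dev"
```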
 

TwiX

Hi,

Rebooting the node was the only way to fix the issue.

It's a shame that LXC is so unreliable.
Granted, there is no overhead compared to KVM, but there are a lot of limitations (live migration first of all, and until recently NFS/CIFS mounts...).

IMHO, if you want containers you'd better use Swarm/Kubernetes (inside a Proxmox KVM machine, of course). Today, KVM CPU performance is nearly bare metal...
 
