command 'lxc-stop -n $VMID --nokill --timeout 60' failed: exit code 1

lecbm

Hello!

For a short while now, one of my PVE nodes has had the problem that containers can no longer be shut down via the WebUI without errors. Clicking "Shutdown" does start the container shutdown and the active task shows up in the task log, but after 60 seconds the following error appears:

Code:
command 'lxc-stop -n $VMID --nokill --timeout 60' failed: exit code 1
TASK OK

The container is stopped afterwards, though.
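
To double-check, I look at the state from the host right after the failed task. As far as I can tell, the WebUI button is equivalent to pct shutdown, so a minimal reproduction from the CLI looks roughly like this (same 60 second timeout as in the task log):

Bash:
# equivalent of the WebUI "Shutdown" button
pct shutdown $VMID --timeout 60

# despite the error in the task log, the container ends up stopped
pct status $VMID
# -> status: stopped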

If I run the stop command manually in a terminal with log level TRACE, I get the following:

Bash:
# lxc-stop -n $VMID --nokill --timeout 60 --logfile=out.log --logpriority=TRACE
# cat out.log
lxc-stop $VMID 20240611113413.450 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-stop $VMID 20240611113413.450 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-stop $VMID 20240611113413.451 DEBUG    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:148 - Command "get_init_pid" received response
lxc-stop $VMID 20240611113413.451 TRACE    commands - ../src/lxc/commands.c:lxc_cmd_timeout:532 - Opened new command socket connection fd 4 for command "get_init_pid"
lxc-stop $VMID 20240611113413.451 DEBUG    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:148 - Command "get_state" received response
lxc-stop $VMID 20240611113413.451 TRACE    commands - ../src/lxc/commands.c:lxc_cmd_timeout:532 - Opened new command socket connection fd 4 for command "get_state"
lxc-stop $VMID 20240611113413.451 DEBUG    commands - ../src/lxc/commands.c:lxc_cmd_get_state:1080 - Container "254100018" is in "RUNNING" state
lxc-stop $VMID 20240611113413.452 DEBUG    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:148 - Command "get_state" received response
lxc-stop $VMID 20240611113413.452 TRACE    commands - ../src/lxc/commands.c:lxc_cmd_timeout:532 - Opened new command socket connection fd 4 for command "get_state"
lxc-stop $VMID 20240611113413.452 DEBUG    commands - ../src/lxc/commands.c:lxc_cmd_get_state:1080 - Container "254100018" is in "RUNNING" state
lxc-stop $VMID 20240611113413.452 DEBUG    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:137 - Received exact number of file descriptors 1 == 1 for command "get_init_pidfd"
lxc-stop $VMID 20240611113413.452 DEBUG    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv:255 - Finished processing "get_init_pidfd" with file descriptor 5
lxc-stop $VMID 20240611113413.452 TRACE    commands - ../src/lxc/commands.c:lxc_cmd_timeout:532 - Opened new command socket connection fd 4 for command "get_init_pidfd"
lxc-stop $VMID 20240611113413.452 DEBUG    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:148 - Command "get_init_pid" received response
lxc-stop $VMID 20240611113413.452 TRACE    commands - ../src/lxc/commands.c:lxc_cmd_timeout:532 - Opened new command socket connection fd 4 for command "get_init_pid"
lxc-stop $VMID 20240611113413.453 DEBUG    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:148 - Command "add_state_client" received response
lxc-stop $VMID 20240611113413.453 TRACE    commands - ../src/lxc/commands.c:lxc_cmd_timeout:532 - Opened new command socket connection fd 4 for command "add_state_client"
lxc-stop $VMID 20240611113413.453 TRACE    commands - ../src/lxc/commands.c:lxc_cmd_add_state_client:1422 - State connection fd 4 ready to listen for container state changes
lxc-stop $VMID 20240611113413.453 TRACE    lxccontainer - ../src/lxc/lxccontainer.c:do_lxcapi_shutdown:2085 - Sent signal 37 to pidfd 5

Unfortunately, that doesn't tell me much.

Does anyone have an idea how I can debug this further?
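
For what it's worth, if I read the last trace line correctly, signal 37 should be SIGRTMIN+3, which a systemd init treats as a halt request. So my next idea is to watch the shutdown from both sides at the same time, roughly like this (assuming the container runs systemd):

Bash:
# host side: watch LXC state changes while the shutdown runs
lxc-monitor -n $VMID &

# container side: see whether init reacts to the halt signal at all
pct exec $VMID -- journalctl -b -n 50 --no-pager

# then trigger the shutdown again
lxc-stop -n $VMID --nokill --timeout 60

I'm not sure what exactly to look for in that output, though.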

PS: The problem only occurs on one node in the cluster.
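
Since only this one node is affected, my plan is to compare the container-related package versions against a node where shutdown still works cleanly, roughly like this:

Bash:
# run on the affected node and on a working node, then diff the files
pveversion --verbose > /tmp/pveversion.$(hostname).txt
dpkg -l | grep -E 'lxc|lxcfs|pve-container' > /tmp/packages.$(hostname).txt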


Additional information:

Code:
pveversion --verbose
proxmox-ve: 8.2.0 (running kernel: 6.8.4-2-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-9
proxmox-kernel-6.8: 6.8.4-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
pve-kernel-5.15.131-2-pve: 5.15.131-3
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.0-1
proxmox-backup-file-restore: 3.2.0-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.2.1
pve-cluster: 8.0.6
pve-container: 5.0.11
pve-docs: 8.2.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.5
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2
 
