LXC: problem with monitor socket, but continuing anyway: got timeout

Oct 17, 2022
Ever since I upgraded to Proxmox 7, I have been unable to run LXC containers. Shell access from the GUI reports "Connection Failed (error 1006)", and when starting the container manually, I get the following error:

Code:
root@proxmox1:~# pct start 100 --debug
problem with monitor socket, but continuing anyway: got timeout

INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2267 - Read uid map: type u nsid 0 hostid 100000 range 65536
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2267 - Read uid map: type g nsid 0 hostid 100000 range 65536
ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:256 - Container is already running
root@proxmox1:~# pct stop 100
CT 100 not running

root@proxmox1:~#  pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.60-1-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-helper: 7.2-12
pve-kernel-5.15: 7.2-11
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-15
pve-kernel-5.15.60-1-pve: 5.15.60-1
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.11.22-1-edge: 5.11.22-1
pve-kernel-5.4.174-2-pve: 5.4.174-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-3
libpve-guest-common-perl: 4.1-3
libpve-http-server-perl: 4.1-4
libpve-storage-perl: 7.2-10
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
openvswitch-switch: 2.15.0+ds1-2+deb11u1
proxmox-backup-client: 2.2.6-1
proxmox-backup-file-restore: 2.2.6-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-4
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1
root@proxmox1:~# apparmor_parser --version
AppArmor parser version 2.13.6
Copyright (C) 1999-2008 Novell Inc.
Copyright 2009-2018 Canonical Ltd.

root@proxmox1:~# systemctl status apparmor.service
● apparmor.service - Load AppArmor profiles
     Loaded: loaded (/lib/systemd/system/apparmor.service; enabled; vendor preset: enabled)
     Active: active (exited) since Mon 2022-10-17 01:26:38 CEST; 1h 3min ago
       Docs: man:apparmor(7)
             https://gitlab.com/apparmor/apparmor/wikis/home/
    Process: 1026 ExecStart=/lib/apparmor/apparmor.systemd reload (code=exited, status=0/SUCCESS)
   Main PID: 1026 (code=exited, status=0/SUCCESS)
        CPU: 30ms

root@proxmox1:~# sudo journalctl -fx
-- Journal begins at Thu 2022-02-17 19:16:03 CET. --
Oct 17 01:26:38 proxmox1 systemd[1]: Starting Load AppArmor profiles...
Oct 17 01:26:38 proxmox1 apparmor.systemd[1026]: Restarting AppArmor
Oct 17 01:26:38 proxmox1 apparmor.systemd[1026]: Reloading AppArmor profiles
Oct 17 01:26:38 proxmox1 systemd[1]: Finished Load AppArmor profiles.
Oct 17 02:41:13 proxmox1 pvedaemon[29394]: starting lxc termproxy UPID:proxmox1:000072D2:0006D5A9:634CA4A9:vncproxy:100:root@pam:
Oct 17 02:41:13 proxmox1 pvedaemon[1669]: <root@pam> starting task UPID:proxmox1:000072D2:0006D5A9:634CA4A9:vncproxy:100:root@pam:
Oct 17 02:41:16 proxmox1 pvedaemon[1670]: <root@pam> starting task UPID:proxmox1:000072D6:0006D6C6:634CA4AC:vzstart:100:root@pam:
Oct 17 02:41:16 proxmox1 pvedaemon[29398]: starting CT 100: UPID:proxmox1:000072D6:0006D6C6:634CA4AC:vzstart:100:root@pam:
Oct 17 02:41:16 proxmox1 systemd[1]: Started PVE LXC Container: 100.
░░ Subject: A start job for unit pve-container@100.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit pve-container@100.service has finished successfully.
░░
░░ The job identifier is 3650.
Oct 17 02:41:16 proxmox1 audit[29402]: AVC apparmor="DENIED" operation="create" profile="/usr/bin/lxc-start" pid=29402 comm="lxc-start" family="unix" sock_type="stream" protocol=0 requested_mask="create" denied_mask="create" addr=none
Oct 17 02:41:16 proxmox1 audit[29402]: AVC apparmor="DENIED" operation="create" profile="/usr/bin/lxc-start" pid=29402 comm="lxc-start" family="unix" sock_type="stream" protocol=0 requested_mask="create" denied_mask="create" addr=none
Oct 17 02:41:16 proxmox1 kernel: audit: type=1400 audit(1665967276.217:41): apparmor="DENIED" operation="create" profile="/usr/bin/lxc-start" pid=29402 comm="lxc-start" family="unix" sock_type="stream" protocol=0 requested_mask="create" denied_mask="create" addr=none
Oct 17 02:41:16 proxmox1 kernel: audit: type=1400 audit(1665967276.217:42): apparmor="DENIED" operation="create" profile="/usr/bin/lxc-start" pid=29402 comm="lxc-start" family="unix" sock_type="stream" protocol=0 requested_mask="create" denied_mask="create" addr=none
Oct 17 02:41:16 proxmox1 systemd[1]: pve-container@100.service: Succeeded.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit pve-container@100.service has successfully entered the 'dead' state.
Oct 17 02:41:23 proxmox1 pvedaemon[29394]: command '/usr/bin/termproxy 5900 --path /vms/100 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole100 -r winch -z lxc-console -n 100 -e -1' failed: exit code 1
Oct 17 02:41:23 proxmox1 pvedaemon[1669]: <root@pam> end task UPID:proxmox1:000072D2:0006D5A9:634CA4A9:vncproxy:100:root@pam: command '/usr/bin/termproxy 5900 --path /vms/100 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole100 -r winch -z lxc-console -n 100 -e -1' failed: exit code 1
Oct 17 02:41:23 proxmox1 pvedaemon[1669]: <root@pam> starting task UPID:proxmox1:0000730E:0006D99A:634CA4B3:vncproxy:100:root@pam:
Oct 17 02:41:23 proxmox1 pvedaemon[29454]: starting lxc termproxy UPID:proxmox1:0000730E:0006D99A:634CA4B3:vncproxy:100:root@pam:
Oct 17 02:41:26 proxmox1 pvedaemon[29398]: problem with monitor socket, but continuing anyway: got timeout
Oct 17 02:41:26 proxmox1 pvedaemon[1670]: <root@pam> end task UPID:proxmox1:000072D6:0006D6C6:634CA4AC:vzstart:100:root@pam: OK
Oct 17 02:41:33 proxmox1 pvedaemon[29454]: command '/usr/bin/termproxy 5900 --path /vms/100 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole100 -r winch -z lxc-console -n 100 -e -1' failed: exit code 1
Oct 17 02:41:33 proxmox1 pvedaemon[1669]: <root@pam> end task UPID:proxmox1:0000730E:0006D99A:634CA4B3:vncproxy:100:root@pam: command '/usr/bin/termproxy 5900 --path /vms/100 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole100 -r winch -z lxc-console -n 100 -e -1' failed: exit code 1

The Proxmox node is standalone, not part of a cluster. I've read about AppArmor-related issues in other threads, but I can't quite grasp how to troubleshoot that (if the problem is there at all). The apparmor="DENIED" lines for the /usr/bin/lxc-start profile in the journal above do look suspicious to me.
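
For what it's worth, this is what I was planning to run next to narrow down the AppArmor side. The commands and the debug log path are just my guesses at a sensible approach on a default PVE install, so corrections are welcome:

Code:
# watch kernel audit messages while starting the container (this is where the DENIED lines show up)
journalctl -k -f | grep -i apparmor

# list loaded AppArmor profiles and their modes
aa-status | grep -i lxc

# check whether the container config overrides the default AppArmor profile (lxc.apparmor.profile)
pct config 100
cat /etc/pve/lxc/100.conf

# start the container in the foreground with full LXC debug logging
pct stop 100
lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100-debug.log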

Any help is most welcome. Thank you!
 
It's a bit late, but I stumbled upon the same thing. I changed the MAC addresses of my container's network devices and it worked out. Maybe you cloned the container and that's the issue? But I'm not really a professional :)
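
In case it helps, this is more or less how I swapped the MAC address. The net0 values here (interface name, bridge, IP setting, the MAC itself) are just from my own setup, so treat them as an example and match them to what pct config shows for your container:

Code:
# show the current net0 line so the existing bridge/IP settings can be kept
pct config 100 | grep net0

# set a new locally administered MAC on net0 (the other options should mirror your current config);
# leaving hwaddr= out entirely should make PVE generate a fresh one
pct set 100 --net0 name=eth0,bridge=vmbr0,hwaddr=02:DE:AD:BE:EF:01,ip=dhcp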