do_start: 1272 Failed to setup container "100" sync_wait: 34 An error occurred in another process (expected sequence number 4) __lxc_start: 2107 Failed to spawn container "100"

dengolius

Proxmox 8, freshly installed last night on a Hetzner bare-metal server https://www.hetzner.com/dedicated-rootserver/ex44 .
Old LXC containers which I've restored from backup don't work either. The following log and screenshots are from a newly created LXC container:


Bash:
lxc-start 110 20230701090146.130 INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
lxc-start 110 20230701090146.130 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "110", config section "lxc"
lxc-start 110 20230701090146.402 DEBUG    seccomp - ../src/lxc/seccomp.c:parse_config_v2:656 - Host native arch is [3221225534]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "[all]"
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "kexec_load errno 1"
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[246:kexec_load] action[327681:errno] arch[0]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741827]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741886]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "open_by_handle_at errno 1"
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[304:open_by_handle_at] action[327681:errno] arch[0]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741827]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741886]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "init_module errno 1"
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[175:init_module] action[327681:errno] arch[0]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741827]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741886]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "finit_module errno 1"
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[313:finit_module] action[327681:errno] arch[0]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741827]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741886]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:807 - Processing "delete_module errno 1"
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[176:delete_module] action[327681:errno] arch[0]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741827]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741886]
lxc-start 110 20230701090146.402 INFO     seccomp - ../src/lxc/seccomp.c:parse_config_v2:1017 - Merging compat seccomp contexts into main context
lxc-start 110 20230701090146.504 INFO     start - ../src/lxc/start.c:lxc_init:881 - Container "110" is initialized
lxc-start 110 20230701090146.532 INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_monitor_create:1391 - The monitor process uses "lxc.monitor/110" as cgroup
lxc-start 110 20230701090146.564 DEBUG    storage - ../src/lxc/storage/storage.c:storage_query:231 - Detected rootfs type "dir"
lxc-start 110 20230701090146.564 INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_payload_create:1499 - The container process uses "lxc/110/ns" as inner and "lxc/110" as limit cgroup
lxc-start 110 20230701090146.564 INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWNS
lxc-start 110 20230701090146.564 INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWPID
lxc-start 110 20230701090146.564 INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWUTS
lxc-start 110 20230701090146.564 INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWIPC
lxc-start 110 20230701090146.564 INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWNET
lxc-start 110 20230701090146.564 INFO     start - ../src/lxc/start.c:lxc_spawn:1762 - Cloned CLONE_NEWCGROUP
lxc-start 110 20230701090146.564 DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved mnt namespace via fd 18 and stashed path as mnt:/proc/103677/fd/18
lxc-start 110 20230701090146.564 DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved pid namespace via fd 19 and stashed path as pid:/proc/103677/fd/19
lxc-start 110 20230701090146.564 DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved uts namespace via fd 20 and stashed path as uts:/proc/103677/fd/20
lxc-start 110 20230701090146.564 DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved ipc namespace via fd 21 and stashed path as ipc:/proc/103677/fd/21
lxc-start 110 20230701090146.564 DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved net namespace via fd 22 and stashed path as net:/proc/103677/fd/22
lxc-start 110 20230701090146.564 DEBUG    start - ../src/lxc/start.c:lxc_try_preserve_namespace:139 - Preserved cgroup namespace via fd 23 and stashed path as cgroup:/proc/103677/fd/23
lxc-start 110 20230701090146.564 WARN     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_setup_limits_legacy:3155 - Invalid argument - Ignoring legacy cgroup limits on pure cgroup2 system
lxc-start 110 20230701090146.564 INFO     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_setup_limits:3251 - Limits for the unified cgroup hierarchy have been setup
lxc-start 110 20230701090146.567 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/lxcnetaddbr" for container "110", config section "net"
lxc-start 110 20230701090146.829 DEBUG    network - ../src/lxc/network.c:netdev_configure_server_veth:852 - Instantiated veth tunnel "veth110i0 <--> vethBCoykl"
lxc-start 110 20230701090146.830 DEBUG    conf - ../src/lxc/conf.c:lxc_mount_rootfs:1437 - Mounted rootfs "/var/lib/lxc/110/rootfs" onto "/usr/lib/x86_64-linux-gnu/lxc/rootfs" with options "(null)"
lxc-start 110 20230701090146.830 INFO     conf - ../src/lxc/conf.c:setup_utsname:876 - Set hostname to "ox3-back-sellinterfacecom"
lxc-start 110 20230701090146.868 DEBUG    network - ../src/lxc/network.c:setup_hw_addr:3821 - Mac address "1E:4B:76:15:63:A1" on "eth0" has been setup
lxc-start 110 20230701090146.868 DEBUG    network - ../src/lxc/network.c:lxc_network_setup_in_child_namespaces_common:3962 - Network device "eth0" has been setup
lxc-start 110 20230701090146.868 INFO     network - ../src/lxc/network.c:lxc_setup_network_in_child_namespaces:4019 - Finished setting up network devices with caller assigned names
lxc-start 110 20230701090146.868 INFO     conf - ../src/lxc/conf.c:mount_autodev:1220 - Preparing "/dev"
lxc-start 110 20230701090146.868 INFO     conf - ../src/lxc/conf.c:mount_autodev:1281 - Prepared "/dev"
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:lxc_mount_auto_mounts:736 - Invalid argument - Tried to ensure procfs is unmounted
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:lxc_mount_auto_mounts:759 - Invalid argument - Tried to ensure sysfs is unmounted
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2445 - Remounting "/sys/fs/fuse/connections" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/fuse/connections" to respect bind or remount options
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2464 - Flags for "/sys/fs/fuse/connections" were 4110, required extra flags are 14
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2508 - Mounted "/sys/fs/fuse/connections" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/fuse/connections" with filesystem type "none"
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2445 - Remounting "/sys/kernel/debug" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/kernel/debug" to respect bind or remount options
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2464 - Flags for "/sys/kernel/debug" were 4110, required extra flags are 14
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2508 - Mounted "/sys/kernel/debug" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/kernel/debug" with filesystem type "none"
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2445 - Remounting "/sys/kernel/security" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/kernel/security" to respect bind or remount options
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2464 - Flags for "/sys/kernel/security" were 4110, required extra flags are 14
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2508 - Mounted "/sys/kernel/security" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/kernel/security" with filesystem type "none"
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2445 - Remounting "/sys/fs/pstore" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/pstore" to respect bind or remount options
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2464 - Flags for "/sys/fs/pstore" were 4110, required extra flags are 14
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2508 - Mounted "/sys/fs/pstore" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/pstore" with filesystem type "none"
lxc-start 110 20230701090146.868 DEBUG    conf - ../src/lxc/conf.c:mount_entry:2508 - Mounted "mqueue" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/dev/mqueue" with filesystem type "mqueue"
lxc-start 110 20230701090146.868 DEBUG    cgfsng - ../src/lxc/cgroups/cgfsng.c:__cgroupfs_mount:1909 - Mounted cgroup filesystem cgroup2 onto 20((null))
lxc-start 110 20230701090146.868 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxcfs/lxc.mount.hook" for container "110", config section "lxc"
lxc-start 110 20230701090146.869 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxcfs/lxc.mount.hook 110 lxc mount produced output: missing /var/lib/lxcfs/proc/ - lxcfs not running?

lxc-start 110 20230701090146.869 ERROR    conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 1
lxc-start 110 20230701090146.869 ERROR    conf - ../src/lxc/conf.c:lxc_setup:4437 - Failed to run mount hooks
lxc-start 110 20230701090146.869 ERROR    start - ../src/lxc/start.c:do_start:1272 - Failed to setup container "110"
lxc-start 110 20230701090146.869 ERROR    sync - ../src/lxc/sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 4)
lxc-start 110 20230701090146.869 DEBUG    network - ../src/lxc/network.c:lxc_delete_network:4173 - Deleted network devices
lxc-start 110 20230701090146.869 ERROR    start - ../src/lxc/start.c:__lxc_start:2107 - Failed to spawn container "110"
lxc-start 110 20230701090146.869 WARN     start - ../src/lxc/start.c:lxc_abort:1036 - No such process - Failed to send SIGKILL via pidfd 17 for process 103697
lxc-start 110 20230701090147.125 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "110", config section "lxc"
lxc-start 110 20230701090147.263 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "110", config section "lxc"
lxc-start 110 20230701090147.764 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:306 - The container failed to start
lxc-start 110 20230701090147.764 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:311 - Additional information can be obtained by setting the --logfile and --logpriority options
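
The decisive line in the log above is the mount-hook output "missing /var/lib/lxcfs/proc/ - lxcfs not running?": the hook aborts container setup when the lxcfs FUSE tree is not mounted. A minimal sketch of that check (my own illustrative helper, not the actual hook code; the `lxcfs_ready` name and the echo messages are made up):

```shell
# Illustrative check mirroring what the lxc.mount.hook failure above implies:
# when lxcfs is running, it keeps a FUSE tree mounted under /var/lib/lxcfs
# that contains proc/. lxcfs_ready is a hypothetical helper; the optional
# directory argument only exists so it can be pointed at a test directory.
lxcfs_ready() {
    dir="${1:-/var/lib/lxcfs}"
    [ -d "$dir/proc" ]
}

if lxcfs_ready; then
    echo "lxcfs appears to be mounted"
else
    echo "lxcfs not mounted - containers will fail in the mount hook"
fi
```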
 

Attachments

  • 2023-07-01_12-09.png
  • 2023-07-01_12-09_1.png
  • 2023-07-01_12-09_2.png
  • 2023-07-01_12-09_3.png
  • 2023-07-01_12-09_4.png
  • 2023-07-01_12-09_5.png
  • 2023-07-01_12-10.png
  • 2023-07-01_12-10_1.png
Bash:
systemctl status lxcfs.service
○ lxcfs.service - FUSE filesystem for LXC
     Loaded: loaded (/lib/systemd/system/lxcfs.service; enabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:lxcfs(1)

So I started the lxcfs service by running systemctl restart lxcfs.service and the issue was gone.
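
The same fix can be expressed as a restart-and-verify sequence (a sketch assuming systemd manages lxcfs; the SYSTEMCTL variable and the `start_lxcfs` name are my own additions, only there so the sequence can be exercised without a real systemd):

```shell
# Restart lxcfs and verify it actually came up. SYSTEMCTL defaults to the
# real systemctl binary but can be overridden for testing.
SYSTEMCTL="${SYSTEMCTL:-systemctl}"

start_lxcfs() {
    "$SYSTEMCTL" restart lxcfs.service || return 1
    "$SYSTEMCTL" is-active --quiet lxcfs.service
}
```

Once lxcfs is active, containers that previously failed in the mount hook should start normally, as observed above.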


Bash:
Jul 01 11:15:12 ox2 systemd[1]: Started lxcfs.service - FUSE filesystem for LXC.
Jul 01 11:15:12 ox2 lxcfs[105414]: Running constructor lxcfs_init to reload liblxcfs
Jul 01 11:15:12 ox2 lxcfs[105414]: mount namespace: 5
Jul 01 11:15:12 ox2 lxcfs[105414]: hierarchies:
Jul 01 11:15:12 ox2 lxcfs[105414]:   0: fd:   6: cpuset,cpu,io,memory,hugetlb,pids,rdma,misc
Jul 01 11:15:12 ox2 lxcfs[105414]: Kernel supports pidfds
Jul 01 11:15:12 ox2 lxcfs[105414]: Kernel supports swap accounting
Jul 01 11:15:12 ox2 lxcfs[105414]: api_extensions:
Jul 01 11:15:12 ox2 lxcfs[105414]: - cgroups
Jul 01 11:15:12 ox2 lxcfs[105414]: - sys_cpu_online
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_cpuinfo
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_diskstats
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_loadavg
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_meminfo
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_stat
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_swaps
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_uptime
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_slabinfo
Jul 01 11:15:12 ox2 lxcfs[105414]: - shared_pidns
Jul 01 11:15:12 ox2 lxcfs[105414]: - cpuview_daemon
Jul 01 11:15:12 ox2 lxcfs[105414]: - loadavg_daemon
Jul 01 11:15:12 ox2 lxcfs[105414]: - pidfds
Jul 01 11:15:12 ox2 lxcfs[105414]: Ignoring invalid max threads value 4294967295 > max (100000).

Question to the administration and Proxmox source-code maintainers:
Is there a bug with the automatic start of lxcfs.service, or do we need to start it manually?
 
Hi,
Code:
systemctl status lxcfs.service
○ lxcfs.service - FUSE filesystem for LXC
     Loaded: loaded (/lib/systemd/system/lxcfs.service; enabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:lxcfs(1)
Question to the administration and Proxmox source-code maintainers:
Is there a bug with the automatic start of lxcfs.service, or do we need to start it manually?
As you can see, the service is enabled, so it should start automatically. Please check the log to see if there was an error:
Code:
journalctl -b0 -u lxcfs.service
# if it was on the previous boot use the below command instead
journalctl -b-1 -u lxcfs.service
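
The two invocations above differ only in the boot offset passed to journalctl; wrapped as a tiny helper (purely illustrative, `lxcfs_log` is not an existing command):

```shell
# Show lxcfs.service logs for a given boot offset: 0 = current boot,
# -1 = previous boot, and so on. Thin wrapper over journalctl -b.
lxcfs_log() {
    boot="${1:-0}"
    journalctl -b"$boot" -u lxcfs.service
}
```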
 
Hi,


As you can see, the service is enabled, so it should start automatically. Please check the log to see if there was an error:
Code:
journalctl -b0 -u lxcfs.service
# if it was on the previous boot use the below command instead
journalctl -b-1 -u lxcfs.service
As I said before, LXC containers didn't start out of the box after Proxmox was installed, so I think there is some bug in the installation process.

Bash:
sudo journalctl -b-1 -u lxcfs.service
Jul 01 11:15:12 ox2 systemd[1]: Started lxcfs.service - FUSE filesystem for LXC.
Jul 01 11:15:12 ox2 lxcfs[105414]: Running constructor lxcfs_init to reload liblxcfs
Jul 01 11:15:12 ox2 lxcfs[105414]: mount namespace: 5
Jul 01 11:15:12 ox2 lxcfs[105414]: hierarchies:
Jul 01 11:15:12 ox2 lxcfs[105414]:   0: fd:   6: cpuset,cpu,io,memory,hugetlb,pids,rdma,misc
Jul 01 11:15:12 ox2 lxcfs[105414]: Kernel supports pidfds
Jul 01 11:15:12 ox2 lxcfs[105414]: Kernel supports swap accounting
Jul 01 11:15:12 ox2 lxcfs[105414]: api_extensions:
Jul 01 11:15:12 ox2 lxcfs[105414]: - cgroups
Jul 01 11:15:12 ox2 lxcfs[105414]: - sys_cpu_online
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_cpuinfo
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_diskstats
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_loadavg
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_meminfo
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_stat
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_swaps
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_uptime
Jul 01 11:15:12 ox2 lxcfs[105414]: - proc_slabinfo
Jul 01 11:15:12 ox2 lxcfs[105414]: - shared_pidns
Jul 01 11:15:12 ox2 lxcfs[105414]: - cpuview_daemon
Jul 01 11:15:12 ox2 lxcfs[105414]: - loadavg_daemon
Jul 01 11:15:12 ox2 lxcfs[105414]: - pidfds
Jul 01 11:15:12 ox2 lxcfs[105414]: Ignoring invalid max threads value 4294967295 > max (100000).
Jul 01 19:37:56 ox2 systemd[1]: Stopping lxcfs.service - FUSE filesystem for LXC...
Jul 01 19:37:56 ox2 lxcfs[105414]: Running destructor lxcfs_exit
Jul 01 19:37:56 ox2 systemd[1]: lxcfs.service: Main process exited, code=exited, status=1/FAILURE
Jul 01 19:37:56 ox2 fusermount[209733]: /bin/fusermount: failed to unmount /var/lib/lxcfs: Invalid argument
Jul 01 19:37:56 ox2 systemd[1]: lxcfs.service: Failed with result 'exit-code'.
Jul 01 19:37:56 ox2 systemd[1]: Stopped lxcfs.service - FUSE filesystem for LXC.
Jul 01 19:37:56 ox2 systemd[1]: lxcfs.service: Consumed 14.994s CPU time.

Jul 01 11:15:12 ox2 systemd[1]: Started lxcfs.service - FUSE filesystem for LXC. (FYI: the process was started by me at this time.)
 
A fresh installation here doesn't exhibit the issue, and I'd expect many more reports if it weren't something specific to your setup. Can you check the full system logs for more hints? Does the service fail to start on every boot?
 
Hi, just a small bump on this. I did a clean setup yesterday of

(a) latest minimal Debian 12 on a stock Linux SW RAID config
(b) latest Proxmox on top of this
(c) in a manner I have done in the past without drama; but today is the first one with the latest versions (Proxmox 8 / Debian 12) for me (i.e., in the past I've done Proxmox 6 or 7, etc. this way for custom SW RAID boxes)

I had exactly the same issue, i.e., I proceeded on my merry way after install:

set up a Win11 VM and it is great

then set up a stock Debian LXC container using the Debian 12 template downloaded from the Proxmox template source
tried to start the LXC and it refused
on my CLI when I check status of
systemctl status lxcfs.service

I can see it is inactive (dead)

Once I poked this to start
I can start my LXC just fine
but
what the heck? Why is it not started cleanly at boot?

I will reboot my proxmox host again and dig into logs to see if there is any clear smoking gun making things sad
but

figured I should report this because it is indeed weird

Tim
 
Hi,
on my CLI when I check status of
systemctl status lxcfs.service
Does the status show that the service is enabled?
I will reboot my proxmox host again and dig into logs to see if there is any clear smoking gun making things sad
but
If yes, there should be at least some hint why the service exited/failed during boot. Please share the file created by journalctl -b > /tmp/boot.log for a boot where the issue occurred and the output of pveversion -v.
 
Footnote on my note: I just rebooted the Proxmox node, and things were OK. LXC services were fine after the reboot, and the OpenVPN LXC Debian container I had prepped is now fine / no drama. I am not entirely clear why the LXC service(s) were in a stopped state when I was working on this earlier today. Maybe some transitory glitch, or something else. Great IT Crowd dude solution. ("Did you turn it off and on again?!" :)

-Tim
 
More footnotes: wow, your reply was fast. I didn't even notice you had replied until after I posted my footnote above.
Here is a bit more info for reference; not a big deal, but just to be a bit more complete.

Code:
root@proxmox:~# pveversion
pve-manager/8.0.3/bbf3993334bfa916 (running kernel: 6.2.16-5-pve)

and 

root@proxmox:~# pveversion -v
proxmox-ve: 8.0.1 (running kernel: 6.2.16-5-pve)
pve-manager: 8.0.3 (running version: 8.0.3/bbf3993334bfa916)
pve-kernel-6.2: 8.0.4
pve-kernel-6.2.16-5-pve: 6.2.16-6
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx3
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.6
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.4
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.1-1
proxmox-backup-file-restore: 3.0.1-1
proxmox-kernel-helper: 8.0.2
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.2
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1
root@proxmox:~#



and


root@proxmox:~# systemctl status lxcfs.service
● lxcfs.service - FUSE filesystem for LXC
     Loaded: loaded (/lib/systemd/system/lxcfs.service; enabled; preset: enabled)
     Active: active (running) since Thu 2023-07-27 08:46:43 ADT; 10min ago
       Docs: man:lxcfs(1)
   Main PID: 531 (lxcfs)
      Tasks: 3 (limit: 18954)
     Memory: 1020.0K
        CPU: 6ms
     CGroup: /system.slice/lxcfs.service
             └─531 /usr/bin/lxcfs /var/lib/lxcfs

Jul 27 08:46:43 proxmox lxcfs[531]: - proc_meminfo
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_stat
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_swaps
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_uptime
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_slabinfo
Jul 27 08:46:43 proxmox lxcfs[531]: - shared_pidns
Jul 27 08:46:43 proxmox lxcfs[531]: - cpuview_daemon
Jul 27 08:46:43 proxmox lxcfs[531]: - loadavg_daemon
Jul 27 08:46:43 proxmox lxcfs[531]: - pidfds
Jul 27 08:46:43 proxmox lxcfs[531]: Ignoring invalid max threads value 4294967295 > max (100000).
root@proxmox:~#



and then in case this helps at all


root@proxmox:/var/log# journalctl -u lxcfs.service
Jul 27 08:27:58 proxmox systemd[1]: Started lxcfs.service - FUSE filesystem for LXC.
Jul 27 08:27:58 proxmox lxcfs[176276]: Running constructor lxcfs_init to reload liblxcfs
Jul 27 08:27:58 proxmox lxcfs[176276]: mount namespace: 5
Jul 27 08:27:58 proxmox lxcfs[176276]: hierarchies:
Jul 27 08:27:58 proxmox lxcfs[176276]:   0: fd:   6: cpuset,cpu,io,memory,hugetlb,pids,rdma,misc
Jul 27 08:27:58 proxmox lxcfs[176276]: Kernel supports pidfds
Jul 27 08:27:58 proxmox lxcfs[176276]: Kernel supports swap accounting
Jul 27 08:27:58 proxmox lxcfs[176276]: api_extensions:
Jul 27 08:27:58 proxmox lxcfs[176276]: - cgroups
Jul 27 08:27:58 proxmox lxcfs[176276]: - sys_cpu_online
Jul 27 08:27:58 proxmox lxcfs[176276]: - proc_cpuinfo
Jul 27 08:27:58 proxmox lxcfs[176276]: - proc_diskstats
Jul 27 08:27:58 proxmox lxcfs[176276]: - proc_loadavg
Jul 27 08:27:58 proxmox lxcfs[176276]: - proc_meminfo
Jul 27 08:27:58 proxmox lxcfs[176276]: - proc_stat
Jul 27 08:27:58 proxmox lxcfs[176276]: - proc_swaps
Jul 27 08:27:58 proxmox lxcfs[176276]: - proc_uptime
Jul 27 08:27:58 proxmox lxcfs[176276]: - proc_slabinfo
Jul 27 08:27:58 proxmox lxcfs[176276]: - shared_pidns
Jul 27 08:27:58 proxmox lxcfs[176276]: - cpuview_daemon
Jul 27 08:27:58 proxmox lxcfs[176276]: - loadavg_daemon
Jul 27 08:27:58 proxmox lxcfs[176276]: - pidfds
Jul 27 08:27:58 proxmox lxcfs[176276]: Ignoring invalid max threads value 4294967295 > max (100000).
Jul 27 08:46:12 proxmox systemd[1]: Stopping lxcfs.service - FUSE filesystem for LXC...
Jul 27 08:46:12 proxmox lxcfs[176276]: Running destructor lxcfs_exit
Jul 27 08:46:12 proxmox systemd[1]: lxcfs.service: Main process exited, code=exited, status=1/FAILURE
Jul 27 08:46:12 proxmox fusermount[184419]: /bin/fusermount: failed to unmount /var/lib/lxcfs: Invalid argument
Jul 27 08:46:12 proxmox systemd[1]: lxcfs.service: Failed with result 'exit-code'.
Jul 27 08:46:12 proxmox systemd[1]: Stopped lxcfs.service - FUSE filesystem for LXC.
-- Boot c581a022e30d48ba880d760de1ab83f1 --
Jul 27 08:46:43 proxmox systemd[1]: Started lxcfs.service - FUSE filesystem for LXC.
Jul 27 08:46:43 proxmox lxcfs[531]: Running constructor lxcfs_init to reload liblxcfs
Jul 27 08:46:43 proxmox lxcfs[531]: mount namespace: 5
Jul 27 08:46:43 proxmox lxcfs[531]: hierarchies:
Jul 27 08:46:43 proxmox lxcfs[531]:   0: fd:   6: cpuset,cpu,io,memory,hugetlb,pids,rdma,misc
Jul 27 08:46:43 proxmox lxcfs[531]: Kernel supports pidfds
Jul 27 08:46:43 proxmox lxcfs[531]: Kernel supports swap accounting
Jul 27 08:46:43 proxmox lxcfs[531]: api_extensions:
Jul 27 08:46:43 proxmox lxcfs[531]: - cgroups
Jul 27 08:46:43 proxmox lxcfs[531]: - sys_cpu_online
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_cpuinfo
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_diskstats
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_loadavg
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_meminfo
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_stat
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_swaps
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_uptime
Jul 27 08:46:43 proxmox lxcfs[531]: - proc_slabinfo
Jul 27 08:46:43 proxmox lxcfs[531]: - shared_pidns
Jul 27 08:46:43 proxmox lxcfs[531]: - cpuview_daemon
Jul 27 08:46:43 proxmox lxcfs[531]: - loadavg_daemon
Jul 27 08:46:43 proxmox lxcfs[531]: - pidfds
Jul 27 08:46:43 proxmox lxcfs[531]: Ignoring invalid max threads value 4294967295 > max (100000).
root@proxmox:/var/log#



Thanks!


Tim
 
Code:
root@proxmox:/var/log# journalctl -u lxcfs.service
Jul 27 08:27:58 proxmox systemd[1]: Started lxcfs.service - FUSE filesystem for LXC.
[...]
Jul 27 08:46:12 proxmox systemd[1]: Stopping lxcfs.service - FUSE filesystem for LXC...
Jul 27 08:46:12 proxmox lxcfs[176276]: Running destructor lxcfs_exit
Jul 27 08:46:12 proxmox systemd[1]: lxcfs.service: Main process exited, code=exited, status=1/FAILURE
Jul 27 08:46:12 proxmox fusermount[184419]: /bin/fusermount: failed to unmount /var/lib/lxcfs: Invalid argument
Jul 27 08:46:12 proxmox systemd[1]: lxcfs.service: Failed with result 'exit-code'.
Jul 27 08:46:12 proxmox systemd[1]: Stopped lxcfs.service - FUSE filesystem for LXC.
-- Boot c581a022e30d48ba880d760de1ab83f1 --
Jul 27 08:46:43 proxmox systemd[1]: Started lxcfs.service - FUSE filesystem for LXC.
[...]
root@proxmox:/var/log#
Was Jul 27 08:27:58 when you manually started it? And was Jul 27 08:46:12 when you rebooted? It doesn't look strange otherwise, but maybe there is something in the full log?
 
Hi, sorry for the lack of clarity there!
I just confirmed,

Code:
root@proxmox:~# uptime
 09:55:15 up  1:08,  1 user,  load average: 0.24, 0.21, 0.13
root@proxmox:~#

root@proxmox:~# date
Thu 27 Jul 2023 09:55:45 AM ADT
root@proxmox:~#

so the server was rebooted at approximately 8:46 AM local time

I am pretty sure I had not rebooted it before this (i.e., at 8:27 AM)



I am not sure if this really helps at all, but:

Code:
root@proxmox:~# journalctl -p 3

...partially truncated unrelated stuff...

Jul 26 22:36:35 proxmox pveproxy[101139]: got inotify poll request in wrong process - disabling inotify
Jul 26 23:00:20 proxmox pvedaemon[93301]: VM 100 qmp command failed - VM 100 qmp command 'guest-ping' failed - got timeout
Jul 27 05:33:36 proxmox pveupdate[149837]: command 'apt-get update' failed: exit code 100
Jul 27 05:33:36 proxmox pveupdate[149832]: <root@pam> end task UPID:proxmox:0002494D:005E6AF0:64C22BDF:aptupdate::root@pam: command 'apt-get update' failed: exit code 100
Jul 27 07:09:28 proxmox pvedaemon[103263]: authentication failure; rhost=::ffff:192.168.143.115 user=root@pam msg=Authentication failure
Jul 27 07:59:16 proxmox pvedaemon[170323]: startup for container '101' failed
Jul 27 07:59:16 proxmox pvedaemon[95451]: <root@pam> end task UPID:proxmox:00029953:006BC105:64C24E03:vzstart:101:root@pam: startup for container '101' failed
Jul 27 08:04:24 proxmox pct[171349]: startup for container '101' failed
Jul 27 08:04:24 proxmox pct[171348]: <root@pam> end task UPID:proxmox:00029D55:006C392F:64C24F36:vzstart:101:root@pam: startup for container '101' failed
Jul 27 08:09:17 proxmox pct[172353]: startup for container '101' failed
Jul 27 08:09:17 proxmox pct[172352]: <root@pam> end task UPID:proxmox:0002A141:006CABEC:64C2505C:vzstart:101:root@pam: startup for container '101' failed
Jul 27 08:23:48 proxmox pct[175106]: startup for container '101' failed
Jul 27 08:23:49 proxmox pct[175105]: <root@pam> end task UPID:proxmox:0002AC02:006E002B:64C253C3:vzstart:101:root@pam: startup for container '101' failed
Jul 27 08:25:05 proxmox pct[175487]: startup for container '102' failed
Jul 27 08:25:05 proxmox pct[175486]: <root@pam> end task UPID:proxmox:0002AD7F:006E1E0B:64C25410:vzstart:102:root@pam: startup for container '102' failed
Jul 27 08:25:39 proxmox pvedaemon[175669]: startup for container '102' failed
Jul 27 08:25:39 proxmox pvedaemon[171973]: <root@pam> end task UPID:proxmox:0002AE35:006E2B52:64C25432:vzstart:102:root@pam: startup for container '102' failed
Jul 27 08:25:55 proxmox pct[175825]: startup for container '102' failed
Jul 27 08:25:55 proxmox pct[175824]: <root@pam> end task UPID:proxmox:0002AED1:006E31A4:64C25442:vzstart:102:root@pam: startup for container '102' failed
Jul 27 08:46:18 proxmox kernel: watchdog: watchdog0: watchdog did not stop!
-- Boot c581a022e30d48ba880d760de1ab83f1 --

If there is a specific log output or command you could suggest that might give more info to help get a trace, please let me know.

Thanks,

Tim
 
Is there anything mentioning lxcfs in the output of journalctl -b-1, i.e. the log from the previous boot? I'd expect it to have failed somewhere during start-up already; the part you posted is probably not early enough.
 
OK! Here is a snippet from the output of

journalctl -b-1

which may be from the relevant time and of some interest/use:

Code:
Jul 27 07:54:34 proxmox pvedaemon[164630]: <root@pam> successful auth for user 'root@pam'
Jul 27 07:54:54 proxmox pvedaemon[95451]: <root@pam> starting task UPID:proxmox:00029617:006B5B36:64C24CFE:download:debian-12-standard_12.0-1_amd64.tar.zst:root@pam:
Jul 27 07:55:06 proxmox pvedaemon[95451]: <root@pam> end task UPID:proxmox:00029617:006B5B36:64C24CFE:download:debian-12-standard_12.0-1_amd64.tar.zst:root@pam: OK
Jul 27 07:55:50 proxmox pveproxy[102503]: worker exit
Jul 27 07:55:50 proxmox pveproxy[10482]: worker 102503 finished
Jul 27 07:55:50 proxmox pveproxy[10482]: starting 1 worker(s)
Jul 27 07:55:50 proxmox pveproxy[10482]: worker 169645 started
Jul 27 07:57:42 proxmox pvedaemon[95451]: <root@pam> starting task UPID:proxmox:00029814:006B9CA3:64C24DA6:vzcreate:101:root@pam:
Jul 27 07:57:42 proxmox kernel: loop0: detected capacity change from 0 to 20971520
Jul 27 07:57:42 proxmox kernel: EXT4-fs (loop0): mounted filesystem 48d70550-cac0-451e-8e44-7dfffe387127 with ordered data mode. Quota mode: none.
Jul 27 07:57:44 proxmox systemd[1]: var-lib-lxc-101-rootfs.mount: Deactivated successfully.
Jul 27 07:57:47 proxmox kernel: EXT4-fs (loop0): unmounting filesystem 48d70550-cac0-451e-8e44-7dfffe387127.
Jul 27 07:57:47 proxmox pvedaemon[95451]: <root@pam> end task UPID:proxmox:00029814:006B9CA3:64C24DA6:vzcreate:101:root@pam: OK
Jul 27 07:59:15 proxmox pvedaemon[170323]: starting CT 101: UPID:proxmox:00029953:006BC105:64C24E03:vzstart:101:root@pam:
Jul 27 07:59:15 proxmox pvedaemon[95451]: <root@pam> starting task UPID:proxmox:00029953:006BC105:64C24E03:vzstart:101:root@pam:
Jul 27 07:59:15 proxmox systemd[1]: Created slice system-pve\x2dcontainer.slice - PVE LXC Container Slice.
Jul 27 07:59:15 proxmox systemd[1]: Started pve-container@101.service - PVE LXC Container: 101.
Jul 27 07:59:15 proxmox kernel: loop0: detected capacity change from 0 to 20971520
Jul 27 07:59:15 proxmox kernel: EXT4-fs (loop0): mounted filesystem 48d70550-cac0-451e-8e44-7dfffe387127 with ordered data mode. Quota mode: none.
Jul 27 07:59:16 proxmox audit[170347]: AVC apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-101_</var/lib/lxc>" pid=170347 comm="apparmor_parser"
Jul 27 07:59:16 proxmox kernel: audit: type=1400 audit(1690455556.081:57): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-101_</var/lib/lxc>" pid=170347 >
Jul 27 07:59:16 proxmox kernel: vmbr0: port 2(fwpr101p0) entered blocking state
Jul 27 07:59:16 proxmox kernel: vmbr0: port 2(fwpr101p0) entered disabled state
Jul 27 07:59:16 proxmox kernel: device fwpr101p0 entered promiscuous mode
Jul 27 07:59:16 proxmox kernel: vmbr0: port 2(fwpr101p0) entered blocking state
Jul 27 07:59:16 proxmox kernel: vmbr0: port 2(fwpr101p0) entered forwarding state
Jul 27 07:59:16 proxmox kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jul 27 07:59:16 proxmox kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Jul 27 07:59:16 proxmox kernel: device fwln101i0 entered promiscuous mode
Jul 27 07:59:16 proxmox kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jul 27 07:59:16 proxmox kernel: fwbr101i0: port 1(fwln101i0) entered forwarding state
Jul 27 07:59:16 proxmox kernel: fwbr101i0: port 2(veth101i0) entered blocking state
Jul 27 07:59:16 proxmox kernel: fwbr101i0: port 2(veth101i0) entered disabled state
Jul 27 07:59:16 proxmox kernel: device veth101i0 entered promiscuous mode
Jul 27 07:59:16 proxmox kernel: eth0: renamed from vethBaSLww
Jul 27 07:59:16 proxmox pvedaemon[170323]: startup for container '101' failed
Jul 27 07:59:16 proxmox pvedaemon[95451]: <root@pam> end task UPID:proxmox:00029953:006BC105:64C24E03:vzstart:101:root@pam: startup for container '101' failed
Jul 27 07:59:16 proxmox audit[170403]: AVC apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-101_</var/lib/lxc>" pid=170403 comm="apparmor_parser"
Jul 27 07:59:16 proxmox kernel: audit: type=1400 audit(1690455556.973:58): apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-101_</var/lib/lxc>" pid=17040>
Jul 27 07:59:17 proxmox pvedaemon[164630]: unable to get PID for CT 101 (not running?)
Jul 27 07:59:17 proxmox kernel: fwbr101i0: port 2(veth101i0) entered disabled state

and I have a few instances of this (more or less the same thing, repeated for each failed start attempt I tried)




Tim
 
OK! Here is snip from output from journalctl -b-1 which maybe is of relevant time and maybe of slight interest/use? [...]
This is probably also too late in the log. I'd suspect lxcfs was failing, or was prevented from starting, during boot of the host.
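For reference, one way to capture more detail on a failing start is to launch the container in the foreground with debug logging. This is a sketch only: the container ID 101 and the log path are example values, and on Proxmox this is typically most useful right after a failed pct start attempt, while the generated config under /var/lib/lxc/&lt;vmid&gt;/ is still in place.

```shell
# Start the container in the foreground with full debug logging
# (101 and /tmp/lxc-101.log are example values):
#
#   lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log
#
# Small helper to pull the first ERROR line out of such a debug log:
first_error() {
    # grep -m1 stops at the first match; lxc debug logs mark the
    # severity in a space-padded " ERROR " column
    grep -m1 ' ERROR ' "$1"
}
```

Usage would then be, e.g., `first_error /tmp/lxc-101.log` to see the earliest failure instead of scrolling through the whole trace.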
 
OK, thank you. I think it is OK to park this topic unless I get any more issues here, in which case I'll be sure to update with a current, relevant log.
The system seems to have been working well since then, so I am pretty sure it was a transitory issue.

Thanks!

Tim
 
This is probably also too late in the log. I'd suspect lxcfs was failing, or was prevented from starting, during boot of the host.

Exactly the same experience now: PVE on top of the latest Debian 12 minimal install:

Code:
proxmox-ve: 8.1.0 (running kernel: 6.2.16-19-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.5: 6.5.11-6
proxmox-kernel-6.5.11-6-pve-signed: 6.5.11-6
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2: 6.2.16-19
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx7
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.4
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.0.4-1
proxmox-backup-file-restore: 3.0.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.3
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: not correctly installed
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.2
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve4

Also:

Code:
# journalctl -b -1 -u lxcfs
-- No entries --

# journalctl -b -u lxcfs
Nov 30 08:15:17 echo systemd[1]: Started lxcfs.service - FUSE filesystem for LXC.
Nov 30 08:15:17 echo lxcfs[1435221]: Running constructor lxcfs_init to reload liblxcfs
Nov 30 08:15:17 echo lxcfs[1435221]: mount namespace: 5
Nov 30 08:15:17 echo lxcfs[1435221]: hierarchies:
Nov 30 08:15:17 echo lxcfs[1435221]:   0: fd:   6: cpuset,cpu,io,memory,hugetlb,pids,rdma,misc
Nov 30 08:15:17 echo lxcfs[1435221]: Kernel supports pidfds
Nov 30 08:15:17 echo lxcfs[1435221]: Kernel supports swap accounting
Nov 30 08:15:17 echo lxcfs[1435221]: api_extensions:
Nov 30 08:15:17 echo lxcfs[1435221]: - cgroups
Nov 30 08:15:17 echo lxcfs[1435221]: - sys_cpu_online
Nov 30 08:15:17 echo lxcfs[1435221]: - proc_cpuinfo
Nov 30 08:15:17 echo lxcfs[1435221]: - proc_diskstats
Nov 30 08:15:17 echo lxcfs[1435221]: - proc_loadavg
Nov 30 08:15:17 echo lxcfs[1435221]: - proc_meminfo
Nov 30 08:15:17 echo lxcfs[1435221]: - proc_stat
Nov 30 08:15:17 echo lxcfs[1435221]: - proc_swaps
Nov 30 08:15:17 echo lxcfs[1435221]: - proc_uptime
Nov 30 08:15:17 echo lxcfs[1435221]: - proc_slabinfo
Nov 30 08:15:17 echo lxcfs[1435221]: - shared_pidns
Nov 30 08:15:17 echo lxcfs[1435221]: - cpuview_daemon
Nov 30 08:15:17 echo lxcfs[1435221]: - loadavg_daemon
Nov 30 08:15:17 echo lxcfs[1435221]: - pidfds
Nov 30 08:15:17 echo lxcfs[1435221]: Ignoring invalid max threads value 4294967295 > max (100000).

I don't remember having this the previous time with PVE on top of Debian; the only difference is that this time I never created a single LXC, just migrated a few in and tried starting them up.

The service was enabled but not running. The 08:15 entry is the manual start; afterwards there was no problem starting up an LXC.
 
Completely same experience now, PVE on top of Debian 12 latest minimal install: [...]
I installed the latest version 8.1 yesterday and didn't see this issue. It looks like a bug with starting the systemd service that appears randomly...
 
I installed the latest version 8.1 yesterday and didn't see this issue. It looks like a bug with starting the systemd service that appears randomly...

Well, I wonder whether there shouldn't be at least one more reboot in the official guide [1] after the proxmox-ve install itself?

In other words, could it be that the service gets enabled upon install, but not started until the next boot?

(Nothing suspicious in the boot logs at all.)

[1] https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
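A minimal way to check that theory on the host (a sketch, assuming a root shell; lxcfs is the unit in question):

```shell
# "enabled" only means the unit will start on the NEXT boot;
# "is-active" reports whether it is running right now:
#
#   systemctl is-enabled lxcfs   # -> enabled
#   systemctl is-active lxcfs    # -> inactive, until started manually
#   systemctl start lxcfs        # manual workaround until the next boot
#
# Helper that decides, from the is-active output, whether a manual
# start is warranted:
needs_start() {
    case "$1" in
        active) return 1 ;;   # already running, nothing to do
        *)      return 0 ;;   # inactive/failed -> start it
    esac
}
```

For example: `needs_start "$(systemctl is-active lxcfs)" && systemctl start lxcfs`.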
 
Hi,
The service was enabled but not running. The 08:15 entry is the manual start; afterwards there was no problem starting up an LXC.
thank you for the report! Yes, there is a bug that prevents the automatic start of the service after the first installation. This will be fixed in the next version of the package, lxcfs=5.0.3-pve4.
 