LXC Containers Not Starting After Update

UPD1
Code:
root@uyut-pve:~# cat /var/log/apt/history.log

....

Start-Date: 2021-12-30  15:57:57
Commandline: apt install make libc6-i386
Install: libc6-i386:amd64 (2.31-13+deb11u2), make:amd64 (4.3-4.1)
End-Date: 2021-12-30  15:58:15

Start-Date: 2021-12-30  16:00:27
Commandline: apt install net-tools
Install: net-tools:amd64 (1.60+git20181103.0eebece-1)
End-Date: 2021-12-30  16:00:29

Looks like the update wasn't at fault. Two more packages were installed before the problem appeared:
haspd_7.90-eter2debian_amd64.deb
haspd-modules_7.90-eter2debian_amd64.deb
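
Since haspd and haspd-modules are not Proxmox packages, a quick way to see what they changed on the host is to inspect what they installed. A minimal sketch, assuming the package names from the apt log above:

Code:
# List what the suspect packages installed, looking for anything that
# touches LXC, hooks, init scripts or sysctl settings
dpkg -L haspd haspd-modules | grep -Ei 'lxc|hook|init|sysctl'
# Check for recently added sysctl snippets
ls -lt /etc/sysctl.d/ /usr/lib/sysctl.d/ 2>/dev/null | head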
 
panic :eek:
Same problem: after updating the host, none of the CTs start.
Error:
Code:
safe_mount: 1200 Operation not permitted - Failed to mount "proc" onto "/usr/lib/x86_64-linux-gnu/lxc/rootfs/proc"
lxc_mount_auto_mounts: 810 Operation not permitted - Failed to mount "proc" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/proc" with flags 14
lxc_setup: 4356 Failed to setup first automatic mounts
do_start: 1274 Failed to setup container "100"
sync_wait: 34 An error occurred in another process (expected sequence number 3)
__lxc_start: 2068 Failed to spawn container "100"
TASK ERROR: startup for container '100' failed

But the mount options are defaults. /etc/fstab:
Code:
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
And /rpool/data/subvol-101-disk-0/etc/fstab:
Code:
# UNCONFIGURED FSTAB FOR BASE SYSTEM
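
Note that fstab only shows the configured defaults, not the live mount state. The kernel refuses to mount proc inside an unprivileged user namespace unless the host's /proc is "fully visible", so any extra mount sitting on top of a /proc path on the host (for example one added by a freshly installed package or its init script) produces exactly this kind of EPERM. A hedged way to look for stray overmounts:

Code:
# Show /proc and every mount below it; anything besides procfs itself
# (and the usual binfmt_misc) is a candidate culprit
findmnt -R /proc
# Raw kernel view of mount points under /proc
awk '$5 ~ /^\/proc/ {print $5}' /proc/self/mountinfo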

A brand-new unprivileged container, created from the official Debian 11.0 template, does not work either.

What can I do? I need help!
Thanks

PS. All VMs are working fine.
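
For anyone hitting this: the task log above is only a summary. A fuller picture can be had by starting the container in the foreground with debug logging, a sketch using the standard tooling (container ID 100 taken from the error above):

Code:
# PVE wrapper with debug output on the console
pct start 100 --debug
# Or start via LXC directly with a full debug log written to a file
lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log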
Same here on PVE 7.1-9: I can't start unprivileged CTs; even newly created ones won't start. fstab on the host is the default and in the CT it is empty.
 
I have the same problem.
I tried to set up a new unprivileged LXC container and this is the error I receive:

Code:
run_buffer: 321 Script exited with status 1
lxc_setup: 4398 Failed to run autodev hooks
do_start: 1274 Failed to setup container "110"
sync_wait: 34 An error occurred in another process (expected sequence number 4)
__lxc_start: 2068 Failed to spawn container "110"
TASK ERROR: startup for container '110' failed

The LXC container config is:

Code:
arch: amd64
cores: 1
features: nesting=1
hostname: Test
memory: 1024
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=9A:38:5E:8C:88:96,ip=dhcp,type=veth
ostype: debian
rootfs: local-lvm:vm-110-disk-0,size=5G
swap: 512
unprivileged: 1

Whereas my node's fstab is:

Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=A4D0-58A2 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

UUID=2dcc5b55-0ab4-4cf2-b41e-68ba58a83058 /mnt/pve/share ext4 defaults 0 0

Has anyone found out how to solve this issue?
Thanks
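
For anyone comparing setups: the container config above can be dumped on the host with pct, and since every failing container in this thread is unprivileged, it is also worth confirming that user namespaces are available on the host. A minimal sketch (the Debian-specific sysctl does not exist on every kernel, hence the guard):

Code:
# Dump the container config, as shown above
pct config 110
# Mainline knob: must be non-zero for unprivileged containers
cat /proc/sys/user/max_user_namespaces
# Debian-patch knob, only present on some kernels: must be 1 where it exists
[ -f /proc/sys/kernel/unprivileged_userns_clone ] \
    && cat /proc/sys/kernel/unprivileged_userns_clone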
 
Wonder if this could be helpful:

Code:
root@pve:~# pct start 110  --debug
run_buffer: 321 Script exited with status 1
lxc_setup: 4398 Failed to run autodev hooks
do_start: 1274 Failed to setup container "110"
sync_wait: 34 An error occurred in another process (expected sequence number 4)
__lxc_start: 2068 Failed to spawn container "110"
[... start of debug log truncated; the cut line reported the AppArmor LSM security driver ...]
INFO     conf - conf.c:run_script_argv:337 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "110", config section "lxc"
DEBUG    seccomp - seccomp.c:parse_config_v2:656 - Host native arch is [3221225534]
INFO     seccomp - seccomp.c:parse_config_v2:807 - Processing "reject_force_umount  # comment this to allow umount -f;  not recommended"
INFO     seccomp - seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - seccomp.c:do_resolve_add_rule:524 - Set seccomp rule to reject force umounts
INFO     seccomp - seccomp.c:parse_config_v2:807 - Processing "[all]"
INFO     seccomp - seccomp.c:parse_config_v2:807 - Processing "kexec_load errno 1"
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[246:kexec_load] action[327681:errno] arch[0]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741827]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[246:kexec_load] action[327681:errno] arch[1073741886]
INFO     seccomp - seccomp.c:parse_config_v2:807 - Processing "open_by_handle_at errno 1"
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[304:open_by_handle_at] action[327681:errno] arch[0]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741827]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[304:open_by_handle_at] action[327681:errno] arch[1073741886]
INFO     seccomp - seccomp.c:parse_config_v2:807 - Processing "init_module errno 1"
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[175:init_module] action[327681:errno] arch[0]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741827]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[175:init_module] action[327681:errno] arch[1073741886]
INFO     seccomp - seccomp.c:parse_config_v2:807 - Processing "finit_module errno 1"
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[313:finit_module] action[327681:errno] arch[0]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741827]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[313:finit_module] action[327681:errno] arch[1073741886]
INFO     seccomp - seccomp.c:parse_config_v2:807 - Processing "delete_module errno 1"
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[176:delete_module] action[327681:errno] arch[0]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741827]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[176:delete_module] action[327681:errno] arch[1073741886]
INFO     seccomp - seccomp.c:parse_config_v2:807 - Processing "ioctl errno 1 [1,0x9400,SCMP_CMP_MASKED_EQ,0xff00]"
INFO     seccomp - seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[16:ioctl] action[327681:errno] arch[0]
INFO     seccomp - seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[16:ioctl] action[327681:errno] arch[1073741827]
INFO     seccomp - seccomp.c:do_resolve_add_rule:547 - arg_cmp[0]: SCMP_CMP(1, 7, 65280, 37888)
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[16:ioctl] action[327681:errno] arch[1073741886]
INFO     seccomp - seccomp.c:parse_config_v2:807 - Processing "keyctl errno 38"
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding native rule for syscall[250:keyctl] action[327718:errno] arch[0]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[250:keyctl] action[327718:errno] arch[1073741827]
INFO     seccomp - seccomp.c:do_resolve_add_rule:564 - Adding compat rule for syscall[250:keyctl] action[327718:errno] arch[1073741886]
INFO     seccomp - seccomp.c:parse_config_v2:1017 - Merging compat seccomp contexts into main context
INFO     start - start.c:lxc_init:883 - Container "110" is initialized
INFO     cgfsng - cgroups/cgfsng.c:cgfsng_monitor_create:1028 - The monitor process uses "lxc.monitor/110" as cgroup
DEBUG    storage - storage/storage.c:storage_query:231 - Detected rootfs type "dir"
DEBUG    storage - storage/storage.c:storage_query:231 - Detected rootfs type "dir"
INFO     cgfsng - cgroups/cgfsng.c:cgfsng_payload_create:1136 - The container process uses "lxc/110/ns" as inner and "lxc/110" as limit cgroup
INFO     start - start.c:lxc_spawn:1759 - Cloned CLONE_NEWUSER
INFO     start - start.c:lxc_spawn:1759 - Cloned CLONE_NEWNS
INFO     start - start.c:lxc_spawn:1759 - Cloned CLONE_NEWPID
INFO     start - start.c:lxc_spawn:1759 - Cloned CLONE_NEWUTS
INFO     start - start.c:lxc_spawn:1759 - Cloned CLONE_NEWIPC
INFO     start - start.c:lxc_spawn:1759 - Cloned CLONE_NEWCGROUP
DEBUG    start - start.c:lxc_try_preserve_namespace:139 - Preserved user namespace via fd 17 and stashed path as user:/proc/27482/fd/17
DEBUG    start - start.c:lxc_try_preserve_namespace:139 - Preserved mnt namespace via fd 18 and stashed path as mnt:/proc/27482/fd/18
DEBUG    start - start.c:lxc_try_preserve_namespace:139 - Preserved pid namespace via fd 19 and stashed path as pid:/proc/27482/fd/19
DEBUG    start - start.c:lxc_try_preserve_namespace:139 - Preserved uts namespace via fd 20 and stashed path as uts:/proc/27482/fd/20
DEBUG    start - start.c:lxc_try_preserve_namespace:139 - Preserved ipc namespace via fd 21 and stashed path as ipc:/proc/27482/fd/21
DEBUG    start - start.c:lxc_try_preserve_namespace:139 - Preserved cgroup namespace via fd 22 and stashed path as cgroup:/proc/27482/fd/22
DEBUG    conf - conf.c:idmaptool_on_path_and_privileged:3511 - The binary "/usr/bin/newuidmap" does have the setuid bit set
DEBUG    conf - conf.c:idmaptool_on_path_and_privileged:3511 - The binary "/usr/bin/newgidmap" does have the setuid bit set
DEBUG    conf - conf.c:lxc_map_ids:3596 - Functional newuidmap and newgidmap binary found
INFO     cgfsng - cgroups/cgfsng.c:cgfsng_setup_limits:2828 - Limits for the unified cgroup hierarchy have been setup
DEBUG    conf - conf.c:idmaptool_on_path_and_privileged:3511 - The binary "/usr/bin/newuidmap" does have the setuid bit set
DEBUG    conf - conf.c:idmaptool_on_path_and_privileged:3511 - The binary "/usr/bin/newgidmap" does have the setuid bit set
INFO     conf - conf.c:lxc_map_ids:3594 - Caller maps host root. Writing mapping directly
NOTICE   utils - utils.c:lxc_drop_groups:1347 - Dropped supplimentary groups
INFO     start - start.c:do_start:1106 - Unshared CLONE_NEWNET
NOTICE   utils - utils.c:lxc_drop_groups:1347 - Dropped supplimentary groups
NOTICE   utils - utils.c:lxc_switch_uid_gid:1323 - Switched to gid 0
NOTICE   utils - utils.c:lxc_switch_uid_gid:1332 - Switched to uid 0
DEBUG    start - start.c:lxc_try_preserve_namespace:139 - Preserved net namespace via fd 5 and stashed path as net:/proc/27482/fd/5
INFO     conf - conf.c:run_script_argv:337 - Executing script "/usr/share/lxc/lxcnetaddbr" for container "110", config section "net"
DEBUG    network - network.c:netdev_configure_server_veth:851 - Instantiated veth tunnel "veth110i0 <--> veth3wLnDy"
DEBUG    conf - conf.c:lxc_mount_rootfs:1432 - Mounted rootfs "/var/lib/lxc/110/rootfs" onto "/usr/lib/x86_64-linux-gnu/lxc/rootfs" with options "(null)"
INFO     conf - conf.c:setup_utsname:875 - Set hostname to "Test"
DEBUG    network - network.c:setup_hw_addr:3807 - Mac address "9A:38:5E:8C:88:96" on "eth0" has been setup
DEBUG    network - network.c:lxc_network_setup_in_child_namespaces_common:3948 - Network device "eth0" has been setup
INFO     network - network.c:lxc_setup_network_in_child_namespaces:4005 - Finished setting up network devices with caller assigned names
INFO     conf - conf.c:mount_autodev:1215 - Preparing "/dev"
INFO     conf - conf.c:mount_autodev:1276 - Prepared "/dev"
DEBUG    conf - conf.c:lxc_mount_auto_mounts:735 - Invalid argument - Tried to ensure procfs is unmounted
DEBUG    conf - conf.c:lxc_mount_auto_mounts:758 - Invalid argument - Tried to ensure sysfs is unmounted
DEBUG    conf - conf.c:mount_entry:2412 - Remounting "/sys/fs/fuse/connections" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/fuse/connections" to respect bind or remount options
DEBUG    conf - conf.c:mount_entry:2431 - Flags for "/sys/fs/fuse/connections" were 4110, required extra flags are 14
DEBUG    conf - conf.c:mount_entry:2475 - Mounted "/sys/fs/fuse/connections" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/fuse/connections" with filesystem type "none"
DEBUG    conf - conf.c:mount_entry:2475 - Mounted "proc" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/dev/.lxc/proc" with filesystem type "proc"
DEBUG    conf - conf.c:mount_entry:2475 - Mounted "sys" on "/usr/lib/x86_64-linux-gnu/lxc/rootfs/dev/.lxc/sys" with filesystem type "sysfs"
DEBUG    cgfsng - cgroups/cgfsng.c:__cgroupfs_mount:1540 - Mounted cgroup filesystem cgroup2 onto 19((null))
INFO     conf - conf.c:run_script_argv:337 - Executing script "/usr/share/lxcfs/lxc.mount.hook" for container "110", config section "lxc"
INFO     conf - conf.c:run_script_argv:337 - Executing script "/usr/share/lxc/hooks/openvpn-auto-tun" for container "110", config section "lxc"
DEBUG    conf - conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/openvpn-auto-tun 110 lxc autodev produced output: mknod:
DEBUG    conf - conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/openvpn-auto-tun 110 lxc autodev produced output: net/tun
DEBUG    conf - conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/openvpn-auto-tun 110 lxc autodev produced output: : Operation not permitted
DEBUG    conf - conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/openvpn-auto-tun 110 lxc autodev produced output:

DEBUG    conf - conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/openvpn-auto-tun 110 lxc autodev produced output: chmod:
DEBUG    conf - conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/openvpn-auto-tun 110 lxc autodev produced output: cannot access 'net/tun'
DEBUG    conf - conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/openvpn-auto-tun 110 lxc autodev produced output: : No such file or directory
DEBUG    conf - conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/openvpn-auto-tun 110 lxc autodev produced output:

ERROR    conf - conf.c:run_buffer:321 - Script exited with status 1
ERROR    conf - conf.c:lxc_setup:4398 - Failed to run autodev hooks
ERROR    start - start.c:do_start:1274 - Failed to setup container "110"
ERROR    sync - sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 4)
DEBUG    network - network.c:lxc_delete_network:4159 - Deleted network devices
ERROR    start - start.c:__lxc_start:2068 - Failed to spawn container "110"
WARN     start - start.c:lxc_abort:1038 - No such process - Failed to send SIGKILL via pidfd 16 for process 27498
startup for container '110' failed
 
DEBUG conf - conf.c:run_buffer:310 - Script exec /usr/share/lxc/hooks/openvpn-auto-tun 110 lxc autodev produced output: mknod:
Since /usr/share/lxc/hooks/openvpn-auto-tun is not shipped with PVE, I assume this is a third-party script?
(And I would guess that the issue is that it has not been updated to work with the newer versions of PVE.)

I hope this helps!
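
If it is indeed a leftover third-party hook, one way to find where it is wired in and disable it is to grep the usual LXC/PVE config locations for the hook path — a sketch; the directories below are the common ones and may differ per setup:

Code:
# Find which config references the third-party autodev hook
grep -rn 'openvpn-auto-tun' /etc/pve/lxc/ /etc/lxc/ /usr/share/lxc/config/ 2>/dev/null
# Then comment out the matching lxc.hook.autodev line in that config
# (e.g. /etc/pve/lxc/110.conf) and try starting the container again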
 
Since /usr/share/lxc/hooks/openvpn-auto-tun is not shipped with PVE, I assume this is a third-party script?
(And I would guess that the issue is that it has not been updated to work with the newer versions of PVE.)

I hope this helps!
You're right, I didn't notice that problem in the log.

I ended up doing a clean install of Proxmox and reimporting everything from backup.
I can confirm that now everything works correctly, including creating new unprivileged containers.

Thanks for your hint.
 
OK ;)
I just formatted my machine with Proxmox v7.1-4 to solve all the problems caused by the latest v7.1-8 version.
The server is no longer randomly rebooting, and all LXC containers and VMs are working with no problems.
I'm having the same issue. It looks like Proxmox 7.1-7 is affected as well; Proxmox might have to patch it.
 
Not sure if some of your administrators were mad and messed up all of the Proxmox PVE images on your website; I would recommend you check it out. I have reinstalled Proxmox 7.0-11 and created a simple virtual machine just to test it out, and that version also has problems. I created a virtual machine with GParted, a simple program, and the VM displayed the following error. I would recommend that all of the images you have on your website be checked, because you are distributing a hypervisor with problems to the public. I will test it out tomorrow on a different host machine and let you know.
 

Attachments

  • GPARTED.PNG
Not sure if some of your administrators were mad and messed up all of the Proxmox PVE images on your website; I would recommend you check it out. I have reinstalled Proxmox 7.0-11 and created a simple virtual machine just to test it out, and that version also has problems. I created a
The ISO images on our sources (download.proxmox.com and www.proxmox.com) have not changed.

* the screenshot you shared would indicate that the machine (not sure if you're speaking about a VM, or if this install is on bare metal) does not support KVM
** If it's a VM, make sure to enable nested virtualization: https://pve.proxmox.com/wiki/Nested_Virtualization
** If it's hardware, check the BIOS settings for anything related to virtualization

Also, this thread is on quite a different topic (LXC containers), so if the tips above do not solve your issue please open a new thread.
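
For completeness, a quick way to check whether KVM hardware virtualization is actually available on the machine in question — a sketch using standard tools:

Code:
# CPU must advertise Intel VT-x (vmx) or AMD-V (svm); 0 means no support
grep -Ec '(vmx|svm)' /proc/cpuinfo
# On a working PVE host the kvm modules should be loaded
lsmod | grep kvm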
 
The ISO images on our sources (download.proxmox.com and www.proxmox.com) have not changed.

* the screenshot you shared would indicate that the machine (not sure if you're speaking about a VM, or if this install is on bare metal) does not support KVM
** If it's a VM, make sure to enable nested virtualization: https://pve.proxmox.com/wiki/Nested_Virtualization
** If it's hardware, check the BIOS settings for anything related to virtualization

Also, this thread is on quite a different topic (LXC containers), so if the tips above do not solve your issue please open a new thread.
Thanks, I will. But it's not related to bare metal; what is installed on bare metal is Proxmox. Any fresh VM I create with fresh images has the same result and error message. I believe this is not supposed to happen if that version of Proxmox was tested before being released to the public. Let's see if I can do some recording later so you guys can see.
 
