Can't start a restored LXC / CT on new host. "error occurred in another process (expected sequence number 4)"

jaxjexjox

Member
Mar 25, 2022
Proxmox VE 8.1.3 on host 1.
Proxmox VE 8.1.3 on host 2.
Proxmox Backup Server on another system.


I backed up a VM from host 1 to PBS and restored it to host 2. It worked perfectly, very slick! Very happy with this.


Tried the same thing with a container, but it refuses to start, which is a tad frustrating.

explicitly configured lxc.apparmor.profile overrides the following settings: features:nesting
run_buffer: 322 Script exited with status 127
lxc_setup: 4445 Failed to run autodev hooks
do_start: 1272 Failed to setup container "103"
sync_wait: 34 An error occurred in another process (expected sequence number 4)
__lxc_start: 2107 Failed to spawn container "103"
TASK ERROR: startup for container '103' failed

Host 2 has been rebooted (one suggestion I found).
Host 2 hasn't been 'fiddled with'; it's fairly stock.

nesting=1 is set for the container on both host 1 and host 2; the restore carried that flag over, and I believe I need it. (The container works fine on host 1.)
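
For reference, a minimal way to double-check how the flag landed on host 2 (the paths and pct syntax are standard Proxmox VE; the grep pattern is just illustrative):

# What the restored config actually contains on host 2
grep -E 'features|apparmor|hook' /etc/pve/lxc/103.conf

# Re-apply nesting explicitly if it went missing (pct is the PVE CLI)
pct set 103 --features nesting=1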

103.conf (identical on both systems, thanks to the backup / restore):

lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.autodev: 1
lxc.hook.autodev: /var/lib/lxc/103/mount_hook.sh
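
Before blaming AppArmor, one check worth running on host 2: exit status 127 from run_buffer usually means the hook command could not be found or executed, and the autodev hook path above lives outside the container rootfs, so it may not have been carried over by the PBS backup/restore. A quick sketch (path taken from the config above):

# Does the script referenced by lxc.hook.autodev exist on the new host?
ls -l /var/lib/lxc/103/mount_hook.sh

# If it exists, check that it is executable and that its interpreter line is valid
head -n 1 /var/lib/lxc/103/mount_hook.sh
chmod +x /var/lib/lxc/103/mount_hook.sh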




I ran aa-status, as suggested in other threads:

root@proxmox8500:/etc/pve/lxc# aa-status
apparmor module is loaded.
18 profiles are loaded.
18 profiles are in enforce mode.
/usr/bin/lxc-start
/usr/bin/man
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/chronyd
/{,usr/}sbin/dhclient
lsb_release
lxc-container-default
lxc-container-default-cgns
lxc-container-default-with-mounting
lxc-container-default-with-nesting
man_filter
man_groff
nvidia_modprobe
nvidia_modprobe//kmod
swtpm
tcpdump
0 profiles are in complain mode.
0 profiles are in kill mode.
0 profiles are in unconfined mode.
2 processes have profiles defined.
2 processes are in enforce mode.
/usr/sbin/chronyd (857)
/usr/sbin/chronyd (858)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
0 processes are in mixed mode.
0 processes are in kill mode.
root@proxmox8500:/etc/pve/lxc#


Not sure where to go from here.
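
If it helps, a fuller foreground debug run can be captured like this (the log file path is arbitrary; -n/-F/-l/-o are standard lxc-start options):

# Start CT 103 in the foreground with debug logging to a file
lxc-start -n 103 -F -l DEBUG -o /tmp/lxc-103-debug.log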
 
More information:

Bad system:
pct start 103 --debug 1
explicitly configured lxc.apparmor.profile overrides the following settings: features:nesting
run_buffer: 322 Script exited with status 127


Good system:
pct start 103 --debug 1

explicitly configured lxc.apparmor.profile overrides the following settings: features:nesting
INFO lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "103", config section "lxc"
(The log continues and the container starts fine.)


The problem seems to be AppArmor?
I notice one entry is missing from aa-status on the new system compared to the old one.


Good system:
aa-status
apparmor module is loaded.
19 profiles are loaded.
19 profiles are in enforce mode.
/usr/bin/lxc-start
/usr/bin/man
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/chronyd
/{,usr/}sbin/dhclient
docker-default
lsb_release
lxc-container-default
lxc-container-default-cgns
lxc-container-default-with-mounting
lxc-container-default-with-nesting
man_filter
man_groff
nvidia_modprobe
nvidia_modprobe//kmod
swtpm
tcpdump
0 profiles are in complain mode.
0 profiles are in kill mode.
0 profiles are in unconfined mode.
3 processes have profiles defined.
3 processes are in enforce mode.
/usr/bin/lxc-start (2520603)
/usr/sbin/chronyd (923)
/usr/sbin/chronyd (930)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
0 processes are in mixed mode.
0 processes are in kill mode.


Bad system:
aa-status
apparmor module is loaded.
18 profiles are loaded.
18 profiles are in enforce mode.
/usr/bin/lxc-start
/usr/bin/man
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/lib/NetworkManager/nm-dhcp-helper
/usr/lib/connman/scripts/dhclient-script
/usr/sbin/chronyd
/{,usr/}sbin/dhclient
lsb_release
lxc-container-default
lxc-container-default-cgns
lxc-container-default-with-mounting
lxc-container-default-with-nesting
man_filter
man_groff
nvidia_modprobe
nvidia_modprobe//kmod
swtpm
tcpdump
0 profiles are in complain mode.
0 profiles are in kill mode.
0 profiles are in unconfined mode.
2 processes have profiles defined.
2 processes are in enforce mode.
/usr/sbin/chronyd (857)
/usr/sbin/chronyd (858)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
0 processes are in mixed mode.
0 processes are in kill mode.


19 profiles vs 18. How do I add docker-default to AppArmor?
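
From what I can tell (and I could be wrong), docker-default isn't a static profile shipped by Proxmox; it seems to be generated and loaded by the Docker daemon when it starts, so it only appears on hosts where Docker has run. Static profiles, by contrast, can be (re)loaded with apparmor_parser, though I'm not sure that applies here:

# Reload a profile file into the kernel (the path is just an example of an existing profile file)
apparmor_parser -r /etc/apparmor.d/usr.bin.lxc-start

# Or reload everything under /etc/apparmor.d by restarting the service
systemctl restart apparmor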
 
I still have this issue; does anyone have any ideas?
It looks like I need to add docker-default to AppArmor, and I'm unsure how to do so.

The conf files for the container are identical, so I'm guessing it's something small on the Proxmox host?
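
In case it's useful, this is the kind of side-by-side comparison I could run between the two hosts (hostnames are placeholders):

# Dump the CT config and the host-side hook directory from each host, then diff
ssh root@host1 'cat /etc/pve/lxc/103.conf; ls -l /var/lib/lxc/103/' > /tmp/ct103-host1.txt
ssh root@host2 'cat /etc/pve/lxc/103.conf; ls -l /var/lib/lxc/103/' > /tmp/ct103-host2.txt
diff /tmp/ct103-host1.txt /tmp/ct103-host2.txt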
 
