pve-container 6.1.0 startup for scratch container failed

zhouu

Active Member
Aug 12, 2020
oci: docker.io/kanidm/server:1.8.5

Code:
pct start 1003 --debug
run_buffer: 571 Script exited with status 255
lxc_init: 845 Failed to run lxc.hook.pre-start for container "1003"
__lxc_start: 2046 Failed to initialize container "1003"
ook" for container "1003", config section "lxc"
DEBUG    utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 1003 lxc pre-start produced output: Can't locate object method "check_systemd_nesting" via package "PVE::LXC::Setup::Unmanaged" at /usr/share/perl5/PVE/LXC/Setup.pm line 304.

ERROR    utils - ../src/lxc/utils.c:run_buffer:571 - Script exited with status 255
ERROR    start - ../src/lxc/start.c:lxc_init:845 - Failed to run lxc.hook.pre-start for container "1003"
ERROR    start - ../src/lxc/start.c:__lxc_start:2046 - Failed to initialize container "1003"
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "1003", config section "lxc"
startup for container '1003' failed

Code:
arch: amd64
cmode: console
cores: 4
entrypoint: /sbin/kanidmd server
features: nesting=1
hostname: kanidmd
memory: 4096
mp0: /srv/kanidmd,mp=/data
net0: name=eth0,bridge=vmbr2,gw=10.0.0.1,host-managed=1,hwaddr=BC:24:11:59:2D:F7,ip=10.0.0.6/24,type=veth
onboot: 1
ostype: unmanaged
rootfs: local-lvm:vm-1003-disk-0,size=1G
swap: 0
unprivileged: 0
lxc.environment.runtime: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
lxc.environment.runtime: LD_LIBRARY_PATH=/lib
lxc.environment.runtime: RUST_BACKTRACE=1
lxc.init.cwd: /data
lxc.signal.halt: SIGTERM

Code:
proxmox-ve: 9.1.0 (running kernel: 6.17.4-2-pve)
pve-manager: 9.1.5 (running version: 9.1.5/80cf92a64bef6889)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.4-2-pve-signed: 6.17.4-2
proxmox-kernel-6.17: 6.17.4-2
proxmox-kernel-6.14: 6.14.11-5
proxmox-kernel-6.14.11-5-pve: 6.14.11-5
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx11
intel-microcode: 3.20251111.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.2
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.5
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.7
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.5
libpve-rs-perl: 0.11.4
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-4
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
openvswitch-switch: 3.5.0-1+b1
proxmox-backup-client: 4.1.2-1
proxmox-backup-file-restore: 4.1.2-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.5
pve-cluster: 9.0.7
pve-container: 6.1.0
pve-docs: 9.1.2
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.1.0
pve-i18n: 3.6.6
pve-qemu-kvm: 10.1.2-5
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.4
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1

The container starts again after downgrading pve-container to version 6.0.18.
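For reference, the downgrade uses apt's version-pinning syntax (assuming the 6.0.18 package is still available in the configured repository or the local apt cache):

Code:
# install the older package version explicitly
apt install pve-container=6.0.18
# optionally keep apt from upgrading it again until a fix lands
apt-mark hold pve-container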
 
I had the same issue just now, but managed to start the container while keeping pve-container 6.1.0.

Edit /etc/pve/lxc/1003.conf

On the 'ostype:' line, replace 'unmanaged' with the name of your distribution, as shown below.
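For example (debian here is only an illustration; a scratch-built image has no real distribution, so pick whatever best matches the container's userland):

Code:
# /etc/pve/lxc/1003.conf
# before: ostype: unmanaged
ostype: debian

Presumably this helps because the pre-start hook then loads that distribution's setup plugin, which implements check_systemd_nesting, instead of PVE::LXC::Setup::Unmanaged.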
 
I have a similar issue, but I don't think that patch will fix it: mine is related to propagating the UID and GID to a mount point. I had the problem on 6.1.0, and downgrading to 6.0.18 fixed it. Logs:

Code:
run_buffer: 571 Script exited with status 1
lxc_init: 845 Failed to run lxc.hook.pre-start for container "115"
__lxc_start: 2046 Failed to initialize container "115"
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2295 - Read uid map: type g nsid 0 hostid 100000 range 10000
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2295 - Read uid map: type u nsid 10000 hostid 10000 range 1
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2295 - Read uid map: type g nsid 10000 hostid 10000 range 1
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2295 - Read uid map: type u nsid 10001 hostid 110001 range 55535
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2295 - Read uid map: type g nsid 10001 hostid 110001 range 55535
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "115", config section "lxc"
DEBUG    utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 115 lxc pre-start produced output:
failed to propagate uid and gid to mountpoint: Operation not permitted
ERROR    utils - ../src/lxc/utils.c:run_buffer:571 - Script exited with status 1
ERROR    start - ../src/lxc/start.c:lxc_init:845 - Failed to run lxc.hook.pre-start for container "115"
ERROR    start - ../src/lxc/start.c:__lxc_start:2046 - Failed to initialize container "115"
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "115", config section "lxc"
startup for container '115' failed

My lxc conf:
Code:
arch: amd64
cores: 2
hostname: jellyfin
memory: 2048
mp0: /mnt/media,mp=/opt/media
onboot: 1
ostype: debian
rootfs: local-btrfs:115/vm-115-disk-0.raw,size=25G
startup: order=3
swap: 512
unprivileged: 1
lxc.idmap: u 0 100000 10000
lxc.idmap: g 0 100000 10000
lxc.idmap: u 10000 10000 1
lxc.idmap: g 10000 10000 1
lxc.idmap: u 10001 110001 55535
lxc.idmap: g 10001 110001 55535

I have /etc/subuid and /etc/subgid set up so that this mapping normally works, as mentioned. I also have nesting disabled, and the LXC is unprivileged, in case that changes anything. I recently fixed some issues with systemd services failing to start in an unnested, unprivileged LXC, and I now see warnings when starting those containers, but they work fine.
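For reference, subordinate ID entries covering the idmap above would look roughly like this (my exact files may differ, but root needs host ID 10000 delegated in addition to the usual 100000+ range):

Code:
# /etc/subuid and /etc/subgid (same entries in both)
root:100000:65536
root:10000:1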
 