pve-container 6.1.0 startup for scratch container failed

zhouu

oci: docker.io/kanidm/server:1.8.5

Code:
pct start 1003 --debug
run_buffer: 571 Script exited with status 255
lxc_init: 845 Failed to run lxc.hook.pre-start for container "1003"
__lxc_start: 2046 Failed to initialize container "1003"
ook" for container "1003", config section "lxc"
DEBUG    utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 1003 lxc pre-start produced output: Can't locate object method "check_systemd_nesting" via package "PVE::LXC::Setup::Unmanaged" at /usr/share/perl5/PVE/LXC/Setup.pm line 304.

ERROR    utils - ../src/lxc/utils.c:run_buffer:571 - Script exited with status 255
ERROR    start - ../src/lxc/start.c:lxc_init:845 - Failed to run lxc.hook.pre-start for container "1003"
ERROR    start - ../src/lxc/start.c:__lxc_start:2046 - Failed to initialize container "1003"
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "1003", config section "lxc"
startup for container '1003' failed

Code:
arch: amd64
cmode: console
cores: 4
entrypoint: /sbin/kanidmd server
features: nesting=1
hostname: kanidmd
memory: 4096
mp0: /srv/kanidmd,mp=/data
net0: name=eth0,bridge=vmbr2,gw=10.0.0.1,host-managed=1,hwaddr=BC:24:11:59:2D:F7,ip=10.0.0.6/24,type=veth
onboot: 1
ostype: unmanaged
rootfs: local-lvm:vm-1003-disk-0,size=1G
swap: 0
unprivileged: 0
lxc.environment.runtime: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
lxc.environment.runtime: LD_LIBRARY_PATH=/lib
lxc.environment.runtime: RUST_BACKTRACE=1
lxc.init.cwd: /data
lxc.signal.halt: SIGTERM

pveversion -v:
Code:
proxmox-ve: 9.1.0 (running kernel: 6.17.4-2-pve)
pve-manager: 9.1.5 (running version: 9.1.5/80cf92a64bef6889)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.4-2-pve-signed: 6.17.4-2
proxmox-kernel-6.17: 6.17.4-2
proxmox-kernel-6.14: 6.14.11-5
proxmox-kernel-6.14.11-5-pve: 6.14.11-5
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx11
intel-microcode: 3.20251111.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.2
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.5
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.7
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.5
libpve-rs-perl: 0.11.4
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-4
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
openvswitch-switch: 3.5.0-1+b1
proxmox-backup-client: 4.1.2-1
proxmox-backup-file-restore: 4.1.2-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.5
pve-cluster: 9.0.7
pve-container: 6.1.0
pve-docs: 9.1.2
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.1.0
pve-i18n: 3.6.6
pve-qemu-kvm: 10.1.2-5
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.4
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1

The container can start again after pve-container is downgraded to version 6.0.18.
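For reference, downgrading is just an apt version pin (assuming 6.0.18 is still available in the configured repository):

Code:
# pin pve-container back to the last version that worked here
apt install pve-container=6.0.18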
 
I had the same issue just now, but managed to start the container while staying on pve-container 6.1.0.

Edit /etc/pve/lxc/1003.conf

In the 'ostype:' line, replace 'unmanaged' with the name of the distribution inside the container.
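If you prefer the CLI, the same change can be made with pct; a sketch assuming a Debian-based image (use the matching value for other distributions):

Code:
# set ostype in /etc/pve/lxc/1003.conf without editing the file by hand
pct set 1003 --ostype debian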
 
I have a similar issue, but I don't think the patch will fix it; it's related to propagating the UID and GID to a mountpoint. I had issues on 6.1.0, and downgrading to 6.0.18 fixed it. Logs:

Code:
run_buffer: 571 Script exited with status 1
lxc_init: 845 Failed to run lxc.hook.pre-start for container "115"
__lxc_start: 2046 Failed to initialize container "115"
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2295 - Read uid map: type g nsid 0 hostid 100000 range 10000
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2295 - Read uid map: type u nsid 10000 hostid 10000 range 1
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2295 - Read uid map: type g nsid 10000 hostid 10000 range 1
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2295 - Read uid map: type u nsid 10001 hostid 110001 range 55535
INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2295 - Read uid map: type g nsid 10001 hostid 110001 range 55535
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "115", config section "lxc"
DEBUG    utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 115 lxc pre-start produced output:
failed to propagate uid and gid to mountpoint: Operation not permitted
ERROR    utils - ../src/lxc/utils.c:run_buffer:571 - Script exited with status 1
ERROR    start - ../src/lxc/start.c:lxc_init:845 - Failed to run lxc.hook.pre-start for container "115"
ERROR    start - ../src/lxc/start.c:__lxc_start:2046 - Failed to initialize container "115"
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "115", config section "lxc"
startup for container '115' failed

My lxc conf:
Code:
arch: amd64
cores: 2
hostname: jellyfin
memory: 2048
mp0: /mnt/media,mp=/opt/media
onboot: 1
ostype: debian
rootfs: local-btrfs:115/vm-115-disk-0.raw,size=25G
startup: order=3
swap: 512
unprivileged: 1
lxc.idmap: u 0 100000 10000
lxc.idmap: g 0 100000 10000
lxc.idmap: u 10000 10000 1
lxc.idmap: g 10000 10000 1
lxc.idmap: u 10001 110001 55535
lxc.idmap: g 10001 110001 55535

I have /etc/subuid and /etc/subgid set up so that this mapping normally works, as I mentioned. I also have nesting disabled, and the LXC is unprivileged, if that changes anything. I recently fixed some issues with systemd services failing to start in an unnested, unprivileged LXC; I now see warnings when starting those containers, but they work fine.
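For anyone comparing setups, custom lxc.idmap entries like the ones above only work if the host delegates the mapped ranges to root; mine look roughly like this (exact entries assumed here, adjust to your own ranges):

Code:
# default PVE delegation plus the extra 1:1 mapping for uid/gid 10000
$ grep root /etc/subuid /etc/subgid
/etc/subuid:root:100000:65536
/etc/subuid:root:10000:1
/etc/subgid:root:100000:65536
/etc/subgid:root:10000:1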
 
I have a similar issue, but I don't think the patch will fix it; it's related to propagating the UID and GID to a mountpoint. I had issues on 6.1.0, and downgrading to 6.0.18 fixed it. Logs:
Yes, the problem from your log is different from the one that is fixed by the linked patches, but there is already a Bugzilla report for this [0] and this will be fixed by a separate patch set.

[0] https://bugzilla.proxmox.com/show_bug.cgi?id=7271
 
I too have issues with pve-container=6.1.0 related to host mounts. In my case, it is not related to NFS or read-only mounts. I wonder if this is the same issue or something else entirely.

The issue is that the ownership of the mp root directories is reset to 100000 every time the LXC is restarted.

LXC.conf:

Code:
arch: amd64
features: nesting=1,fuse=1
hostname: gw
memory: 10240
mp0: /tank/grossweber/harbor/app,mp=/data/harbor/app,mountoptions=discard;noatime
mp1: /tank/grossweber/harbor/postgres,mp=/data/harbor/app/database,mountoptions=discard;noatime
mp2: /tank/grossweber/harbor/redis,mp=/data/harbor/app/redis,mountoptions=discard;noatime
net0: name=eth0
onboot: 1
ostype: fedora
protection: 1
rootfs: local-zfs:subvol-4000-disk-0,mountoptions=discard;noatime,size=60G
startup: order=300
swap: 0
tags: gw
timezone: host
unprivileged: 1
lxc.cap.drop: sys_rawio

Shell log; we are starting from pve-container 6.0.8:

Code:
$ sudo apt install pve-container=6.1.0
Upgrading:
  pve-container

Summary:
  Upgrading: 1, Installing: 0, Removing: 0, Not Upgrading: 0
  Download size: 156 kB
  Space needed: 5,120 B / 596 GB available

Get:1 http://download.proxmox.com/debian/pve trixie/pve-no-subscription amd64 pve-container all 6.1.0 [156 kB]
...
Processing triggers for pve-manager (9.1.5) ...

$ ls /tank/grossweber/harbor/
total 19K
drwxr-xr-x 5 root   root   5 Feb  5 09:20 .
drwxr-xr-x 6 root   root   6 Mar 29  2025 ..
drwxr-xr-x 9 110000 110000 9 Mar 27  2025 app
drwxr-xr-x 3 100999 100999 3 Jun 18  2024 postgres
drwxr-xr-x 2 100999 100999 3 Feb  5 13:32 redis

# reboot lxc, now every mp root is reset to owner 100000

$ ls /tank/grossweber/harbor/
total 19K
drwxr-xr-x 5 root   root   5 Feb  5 09:20 .
drwxr-xr-x 6 root   root   6 Mar 29  2025 ..
drwxr-xr-x 9 100000 100000 9 Mar 27  2025 app
drwxr-xr-x 3 100000 100000 3 Jun 18  2024 postgres
drwxr-xr-x 2 100000 100000 3 Feb  5 13:37 redis

# reset ownership using an external script

$ ls /tank/grossweber/harbor/
total 19K
drwxr-xr-x 5 root   root   5 Feb  5 09:20 .
drwxr-xr-x 6 root   root   6 Mar 29  2025 ..
drwxr-xr-x 9 110000 110000 9 Mar 27  2025 app
drwxr-xr-x 3 100999 100999 3 Jun 18  2024 postgres
drwxr-xr-x 2 100999 100999 3 Feb  5 13:37 redis

# reboot lxc, ownership again was reset to 100000

$ ls /tank/grossweber/harbor/
total 19K
drwxr-xr-x 5 root   root   5 Feb  5 09:20 .
drwxr-xr-x 6 root   root   6 Mar 29  2025 ..
drwxr-xr-x 9 100000 100000 9 Mar 27  2025 app
drwxr-xr-x 3 100000 100000 3 Jun 18  2024 postgres
drwxr-xr-x 2 100000 100000 3 Feb  5 13:38 redis
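
The external script is not shown here; it essentially boils down to restoring the owners seen in the first listing (a sketch, the uid/gid values are specific to this setup):

Code:
#!/bin/sh
# restore the mountpoint owners that get reset to 100000 on container start
chown 110000:110000 /tank/grossweber/harbor/app
chown 100999:100999 /tank/grossweber/harbor/postgres
chown 100999:100999 /tank/grossweber/harbor/redis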
 
Quite the same here: unprivileged LXC containers on LVM with additional mountpoints are affected.
The workaround suggested in other threads (just downgrading the pve-container package) alone did not solve the issue.
I also had to run chown on each mountpoint in every LXC.