LXC container failing to start (322, 844, 2027)

LeeSL

New Member
Mar 3, 2023
Hi everyone, my LXC container won't start. Could you help me take a look? Thanks a lot.

lxc-start -n 103 -F -lDEBUG -o lxc-103.log
explicitly configured lxc.apparmor.profile overrides the following settings: features:fuse, features:nesting, features:mount
run_buffer: 322 Script exited with status 1
lxc_init: 844 Failed to run lxc.hook.pre-start for container "103"
__lxc_start: 2027 Failed to initialize container "103"
TASK ERROR: startup for container '103' failed
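A quick way to narrow this down is to pull the hook- and script-related lines out of the debug log. A sketch, assuming lxc-103.log is the file written by the lxc-start command above:

```shell
# Sketch: surface hook/script failures from the LXC debug log.
# Assumes lxc-103.log is the log produced by the -o option above.
grep -n -E 'hook|Script|ERROR' lxc-103.log 2>/dev/null | head -n 20
```

The lines immediately before the "Script exited with status 1" message usually name the hook that failed.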

pve_103.config
arch: amd64
cores: 4
features: fuse=1,mount=nfs;cifs,nesting=1
hostname: docker
memory: 4096
mp0: /mnt/pve/nas,mp=/mnt/nas
net0: name=eth0,bridge=vmbr0,hwaddr=9E:8F:1B:5F:2C:18,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-103-disk-0,size=20G
swap: 2048
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:

pve_render_device_hook.sh
mkdir -p 103/dev/dri
mknod -m 666 103/dev/dri/card0 c 226 0
mknod -m 666 103/dev/dri/renderD128 c 226 128

pct config 103
arch: amd64
cores: 4
features: fuse=1,mount=nfs;cifs,nesting=1
hostname: docker
memory: 4096
mp0: /mnt/pve/nas,mp=/mnt/nas
net0: name=eth0,bridge=vmbr0,hwaddr=9E:8F:1B:5F:2C:18,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-103-disk-0,size=20G
swap: 2048
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:

df -h
Filesystem                 Size  Used Avail Use% Mounted on
udev                       7.7G     0  7.7G   0% /dev
tmpfs                      1.6G  1.6M  1.6G   1% /run
/dev/mapper/pve-root        59G   20G   36G  36% /
tmpfs                      7.8G   46M  7.7G   1% /dev/shm
tmpfs                      5.0M     0  5.0M   0% /run/lock
/dev/nvme1n1p2             511M  328K  511M   1% /boot/efi
/dev/fuse                  128M   20K  128M   1% /etc/pve
10.0.0.232:/volume1/video  454G  402G   53G  89% /mnt/pve/nas
tmpfs                      1.6G     0  1.6G   0% /run/user/0

vgdisplay
--- Volume group ---
VG Name               pve
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  296
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                8
Open LV               6
Max PV                0
Cur PV                1
Act PV                1
VG Size               237.97 GiB
PE Size               4.00 MiB
Total PE              60921
Alloc PE / Size       56827 / 221.98 GiB
Free PE / Size        4094 / 15.99 GiB
VG UUID               7eaRjL-lDBk-lTQs-EWvJ-OUh0-VAtZ-QAEKcd

--- Volume group ---
VG Name               vg1
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  4
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               <472.33 GiB
PE Size               4.00 MiB
Total PE              120916
Alloc PE / Size       120835 / 472.01 GiB
Free PE / Size        81 / 324.00 MiB
VG UUID               YeJZc5-rtdo-8Ro4-cda0-RUtn-N4AS-SfgtqU

lvs

LV                    VG  Attr       LSize    Pool Origin Data%  Meta% Move Log Cpy%Sync Convert
data                  pve twi-aotz-- <151.63g             43.62  2.51
root                  pve -wi-ao----   59.25g
swap                  pve -wi-ao----    8.00g
vm-100-disk-0         pve Vwi-aotz--  548.00m data        99.33
vm-101-disk-0         pve Vwi-a-tz--   60.00g data        27.84
vm-102-disk-0         pve Vwi-aotz--  128.00m data        51.95
vm-102-disk-1         pve Vwi-aotz--   30.00g data        99.90
vm-103-disk-0         pve Vwi-a-tz--   20.00g data        94.32
syno_vg_reserved_area vg1 -wi-a-----   12.00m
volume_1              vg1 -wi-a-----  472.00g
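Several thin volumes in this output are nearly full (vm-100-disk-0 at 99.33%, vm-103-disk-0 at 94.32%), which is worth keeping an eye on. A sketch for flagging them; the 90% threshold is an arbitrary assumption, and the awk expression does the actual filtering on any lvs name/data-percent listing:

```shell
# Sketch: list thin LVs whose Data% exceeds 90 (threshold is an assumption).
# Reads from lvs when available; awk filters on the second column.
lvs --noheadings -o lv_name,data_percent 2>/dev/null | awk '$2 + 0 > 90 { print $1, $2 "%" }'
```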
 
You are mixing lxc.cgroup and lxc.cgroup2 settings. Why are you doing that? Maybe that is the problem?
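One quick way to see which hierarchy the host actually runs (a sketch; on a pure cgroup-v2 host only the lxc.cgroup2.* keys take effect, so the legacy lxc.cgroup.devices.allow line could simply be dropped):

```shell
# Sketch: report the filesystem type mounted at /sys/fs/cgroup.
# 'cgroup2fs' indicates the unified v2 hierarchy; 'tmpfs' the legacy v1 layout.
stat -fc %T /sys/fs/cgroup 2>/dev/null || echo unknown
```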
The N5105's integrated GPU is passed through to this LXC container for Emby. Although the iGPU transcoding never actually worked, the container ran stably for half a year. I haven't changed anything on the LXC or PVE side; it only stopped starting a few days ago.
 
