[SOLVED] LXC failed to start - "Failed to run lxc.hook.pre-start for container"

20charlie02

I am having trouble getting LXC containers to start on a newly created Proxmox node. This is the third node in my small homelab cluster; I have set it up to run Proxmox Backup Server alongside PVE (bare metal, not virtualized). I started by installing PVE using the ISO installer, then added the PBS repositories to my sources.list file as instructed here. I then configured a ZFS pool from the backup server GUI and configured PVE to use that same storage for VM/CT disks. VMs run perfectly while using the zpool as the storage for their disks; it is only containers that have issues.

PVE and PBS are both fully up to date: PVE version 8.0, PBS version 3.0.
(I still had the same issues with PVE 7.4 and PBS 2.4 and was hoping updates would fix it.)
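For completeness, the zfspool storage (full storage.cfg further down) can be added from the CLI roughly like this; this is just a sketch of the equivalent of what I set up through the GUI, so the exact options may differ:

pvesm add zfspool zfs --pool zfs --content rootdir,images --sparse 1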

lxc-start -n 116 -F -l DEBUG -o /tmp/lxc-116.log
lxc-start 116 20230628005116.170 INFO lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
lxc-start 116 20230628005116.170 INFO conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "116", config section "lxc"
lxc-start 116 20230628005116.601 DEBUG conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 116 lxc pre-start produced output: cannot open directory //zfs: No such file or directory
lxc-start 116 20230628005116.610 ERROR conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 2
lxc-start 116 20230628005116.610 ERROR start - ../src/lxc/start.c:lxc_init:844 - Failed to run lxc.hook.pre-start for container "116"
lxc-start 116 20230628005116.610 ERROR start - ../src/lxc/start.c:__lxc_start:2027 - Failed to initialize container "116"
lxc-start 116 20230628005116.610 INFO conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "116", config section "lxc"
lxc-start 116 20230628005117.112 INFO conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "116", config section "lxc"
lxc-start 116 20230628005117.513 DEBUG conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 116 lxc post-stop produced output: umount: /var/lib/lxc/116/rootfs: not mounted
lxc-start 116 20230628005117.513 DEBUG conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 116 lxc post-stop produced output: command 'umount --recursive -- /var/lib/lxc/116/rootfs' failed: exit code 1
lxc-start 116 20230628005117.522 ERROR conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 1
lxc-start 116 20230628005117.523 ERROR start - ../src/lxc/start.c:lxc_end:985 - Failed to run lxc.hook.post-stop for container "116"
lxc-start 116 20230628005117.523 ERROR lxc_start - ../src/lxc/tools/lxc_start.c:main:306 - The container failed to start
lxc-start 116 20230628005117.523 ERROR lxc_start - ../src/lxc/tools/lxc_start.c:main:311 - Additional information can be obtained by setting the --logfile and --logpriority options

pveversion -v
proxmox-ve: 8.0.1 (running kernel: 6.2.16-3-pve)
pve-manager: 8.0.3 (running version: 8.0.3/bbf3993334bfa916)
pve-kernel-6.2: 8.0.2
pve-kernel-5.15: 7.4-4
pve-kernel-6.2.16-3-pve: 6.2.16-3
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.5
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.1-1
proxmox-backup-file-restore: 3.0.1-1
proxmox-kernel-helper: 8.0.2
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.1
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.1
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.2
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.4
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

/etc/pve/storage.cfg
dir: local
path /var/lib/vz
content vztmpl,iso
shared 0

zfspool: zfs
pool zfs
content rootdir,images
mountpoint /zfs
sparse 1

lvmthin: data
thinpool data
vgname pve
content rootdir,images
nodes pbs

pbs: pbs
datastore zfs
server (removed for privacy)
content backup
encryption-key (removed for privacy)
fingerprint (removed for privacy)
prune-backups keep-all=1
username root@pam
 
Hi,
the container startup script complains that it cannot open directory //zfs: No such file or directory. Please share your container config by running pct config <VMID>
 
Hi,
I then configured a ZFS pool from the backup server GUI, and configured PVE to use that same storage for VM/CT disks.
Likely not the cause of the issue, but I wouldn't use the same top-level directory for both; at the very least create dedicated subdirectories (or, even better, ZFS sub-filesystems with zfs create).
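For example, something along these lines (the dataset names are just placeholders, adapt them to your layout):

zfs create zfs/pve-guests     # dedicated child dataset for PVE VM/CT volumes
zfs create zfs/pbs-datastore  # dedicated child dataset for the PBS datastore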
 
Likely not the cause of the issue, but I wouldn't use the same top-level directory for both; at the very least create dedicated subdirectories (or, even better, ZFS sub-filesystems with zfs create).
Thank you both for your replies. I will look into restructuring my zpool if there is a more recommended configuration. Here is the output of pct config 112:
root@pbs:~# pct config 112
arch: amd64
cores: 1
features: nesting=1
hostname: test
memory: 512
mp0: zfs:subvol-112-disk-0,mp=/mnt/zfs,backup=1,size=8G
net0: name=eth0,bridge=vmbr0,hwaddr=3E:49:95:60:73:AA,ip=dhcp,ip6=dhcp,type=veth
ostype: debian
rootfs: data:vm-112-disk-0,size=8G
swap: 512
unprivileged: 1

I have tried using the zpool for storing the root disk, as well as for a mount point at /mnt/zfs; both produce the same errors.
 
Thank you both for your replies. I will look into restructuring my zpool if there is a more recommended configuration. Here is the output of pct config 112:

I have tried using the zpool for storing the root disk, as well as for a mount point at /mnt/zfs; both produce the same errors.
Strange, I got the same versions and a container with essentially the same configuration and don't get the error. Can you share the output of zfs get all zfs and zfs get all zfs/subvol-112-disk-0 (or if you changed the storage configuration, whatever the current path for the disk is)?

EDIT: Also, do you actually have an IPv6-capable DHCP server on your network? Otherwise, you should set IPv6 to static to avoid having Debian wait very long until it hits a timeout during boot (that is, if we can get the ZFS issue fixed ;)):
net0: name=eth0,bridge=vmbr0,hwaddr=3E:49:95:60:73:AA,ip=dhcp,ip6=dhcp,type=veth
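If there is no IPv6 on your LAN anyway, one option is to not request an IPv6 address at all; a sketch for container 112, keeping the other settings unchanged:

pct set 112 --net0 name=eth0,bridge=vmbr0,hwaddr=3E:49:95:60:73:AA,ip=dhcp,type=veth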
 
Strange, I got the same versions and a container with essentially the same configuration and don't get the error. Can you share the output of zfs get all zfs and zfs get all zfs/subvol-112-disk-0 (or if you changed the storage configuration, whatever the current path for the disk is)?

zfs get all zfs
NAME PROPERTY VALUE SOURCE
zfs type filesystem -
zfs creation Fri Jun 23 20:11 2023 -
zfs used 1.55T -
zfs available 12.9T -
zfs referenced 1.41T -
zfs compressratio 1.03x -
zfs mounted yes -
zfs quota none default
zfs reservation none default
zfs recordsize 128K default
zfs mountpoint /mnt/datastore/zfs local
zfs sharenfs off default
zfs checksum on default
zfs compression on local
zfs atime on default
zfs devices on default
zfs exec on default
zfs setuid on default
zfs readonly off default
zfs zoned off default
zfs snapdir hidden default
zfs aclmode discard default
zfs aclinherit restricted default
zfs createtxg 1 -
zfs canmount on default
zfs xattr on default
zfs copies 1 default
zfs version 5 -
zfs utf8only off -
zfs normalization none -
zfs casesensitivity sensitive -
zfs vscan off default
zfs nbmand off default
zfs sharesmb off default
zfs refquota none default
zfs refreservation none default
zfs guid 4400060276748506668 -
zfs primarycache all default
zfs secondarycache all default
zfs usedbysnapshots 0B -
zfs usedbydataset 1.41T -
zfs usedbychildren 147G -
zfs usedbyrefreservation 0B -
zfs logbias latency default
zfs objsetid 54 -
zfs dedup off default
zfs mlslabel none default
zfs sync standard default
zfs dnodesize legacy default
zfs refcompressratio 1.02x -
zfs written 1.41T -
zfs logicalused 1.60T -
zfs logicalreferenced 1.44T -
zfs volmode default default
zfs filesystem_limit none default
zfs snapshot_limit none default
zfs filesystem_count none default
zfs snapshot_count none default
zfs snapdev hidden default
zfs acltype off default
zfs context none default
zfs fscontext none default
zfs defcontext none default
zfs rootcontext none default
zfs relatime on local
zfs redundant_metadata all default
zfs overlay on default
zfs encryption off default
zfs keylocation none default
zfs keyformat none default
zfs pbkdf2iters 0 default
zfs special_small_blocks 0 default

zfs get all zfs/subvol-112-disk-0
NAME PROPERTY VALUE SOURCE
zfs/subvol-112-disk-0 type filesystem -
zfs/subvol-112-disk-0 creation Fri Jun 30 8:56 2023 -
zfs/subvol-112-disk-0 used 96K -
zfs/subvol-112-disk-0 available 8.00G -
zfs/subvol-112-disk-0 referenced 96K -
zfs/subvol-112-disk-0 compressratio 1.00x -
zfs/subvol-112-disk-0 mounted yes -
zfs/subvol-112-disk-0 quota none default
zfs/subvol-112-disk-0 reservation none default
zfs/subvol-112-disk-0 recordsize 128K default
zfs/subvol-112-disk-0 mountpoint /mnt/datastore/zfs/subvol-112-disk-0 inherited from zfs
zfs/subvol-112-disk-0 sharenfs off default
zfs/subvol-112-disk-0 checksum on default
zfs/subvol-112-disk-0 compression on inherited from zfs
zfs/subvol-112-disk-0 atime on default
zfs/subvol-112-disk-0 devices on default
zfs/subvol-112-disk-0 exec on default
zfs/subvol-112-disk-0 setuid on default
zfs/subvol-112-disk-0 readonly off default
zfs/subvol-112-disk-0 zoned off default
zfs/subvol-112-disk-0 snapdir hidden default
zfs/subvol-112-disk-0 aclmode discard default
zfs/subvol-112-disk-0 aclinherit restricted default
zfs/subvol-112-disk-0 createtxg 113282 -
zfs/subvol-112-disk-0 canmount on default
zfs/subvol-112-disk-0 xattr sa local
zfs/subvol-112-disk-0 copies 1 default
zfs/subvol-112-disk-0 version 5 -
zfs/subvol-112-disk-0 utf8only off -
zfs/subvol-112-disk-0 normalization none -
zfs/subvol-112-disk-0 casesensitivity sensitive -
zfs/subvol-112-disk-0 vscan off default
zfs/subvol-112-disk-0 nbmand off default
zfs/subvol-112-disk-0 sharesmb off default
zfs/subvol-112-disk-0 refquota 8G local
zfs/subvol-112-disk-0 refreservation none default
zfs/subvol-112-disk-0 guid 638879370248034234 -
zfs/subvol-112-disk-0 primarycache all default
zfs/subvol-112-disk-0 secondarycache all default
zfs/subvol-112-disk-0 usedbysnapshots 0B -
zfs/subvol-112-disk-0 usedbydataset 96K -
zfs/subvol-112-disk-0 usedbychildren 0B -
zfs/subvol-112-disk-0 usedbyrefreservation 0B -
zfs/subvol-112-disk-0 logbias latency default
zfs/subvol-112-disk-0 objsetid 5657 -
zfs/subvol-112-disk-0 dedup off default
zfs/subvol-112-disk-0 mlslabel none default
zfs/subvol-112-disk-0 sync standard default
zfs/subvol-112-disk-0 dnodesize legacy default
zfs/subvol-112-disk-0 refcompressratio 1.00x -
zfs/subvol-112-disk-0 written 96K -
zfs/subvol-112-disk-0 logicalused 42K -
zfs/subvol-112-disk-0 logicalreferenced 42K -
zfs/subvol-112-disk-0 volmode default default
zfs/subvol-112-disk-0 filesystem_limit none default
zfs/subvol-112-disk-0 snapshot_limit none default
zfs/subvol-112-disk-0 filesystem_count none default
zfs/subvol-112-disk-0 snapshot_count none default
zfs/subvol-112-disk-0 snapdev hidden default
zfs/subvol-112-disk-0 acltype posix local
zfs/subvol-112-disk-0 context none default
zfs/subvol-112-disk-0 fscontext none default
zfs/subvol-112-disk-0 defcontext none default
zfs/subvol-112-disk-0 rootcontext none default
zfs/subvol-112-disk-0 relatime on inherited from zfs
zfs/subvol-112-disk-0 redundant_metadata all default
zfs/subvol-112-disk-0 overlay on default
zfs/subvol-112-disk-0 encryption off default
zfs/subvol-112-disk-0 keylocation none default
zfs/subvol-112-disk-0 keyformat none default
zfs/subvol-112-disk-0 pbkdf2iters 0 default
zfs/subvol-112-disk-0 special_small_blocks 0 default

I have IPv6 disabled for now, thanks for the suggestion. It was working for months, then one day I started getting sync/timing issues in my modem logs, and my IPv6 disappears from my router after a few minutes... Xfinity will be sending a tech out... One thing at a time :)
 
Your mountpoint is wrong.
For the subvol it is /mnt/datastore/zfs, while in your storage configuration it is /zfs. So, as @fiona suggested, it is best to set up a dedicated ZFS dataset on the pool just for PVE.
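In other words, the pre-start hook looks for the container volume under the storage's configured mountpoint (/zfs), which doesn't exist on your host. If you just want to make the existing storage entry consistent, something like the following should do it (or edit the mountpoint line in /etc/pve/storage.cfg directly); the cleaner long-term fix is still a dedicated dataset:

pvesm set zfs --mountpoint /mnt/datastore/zfs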
 
Thank you, Chris. That definitely seems like the better way to set things up for my use case. I will look into restructuring the zfs pool this weekend.
 
Updating this to solved! I split my pool into two datasets, updated the mountpoint locations in /etc/pve/storage.cfg, and everything started working. Thank you both for the help.
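For anyone who finds this later, the working setup now looks roughly like this (the dataset names here are only illustrative, not necessarily the ones I used):

zfs create zfs/pve      # PVE guest volumes
zfs create zfs/backups  # PBS datastore

and the zfspool entry in /etc/pve/storage.cfg points at the child dataset and its real mountpoint:

zfspool: zfs
pool zfs/pve
content rootdir,images
mountpoint /mnt/datastore/zfs/pve
sparse 1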
 
