How to delay an LXC container until NAS is ready?

SGr33n

New Member
Nov 23, 2023
Hi people,
I just followed the tutorial at https://forum.proxmox.com/threads/tutorial-unprivileged-lxcs-mount-cifs-shares.101795/ on mounting a NAS share in an unprivileged container. Everything works well until I have to restart the host. The NAS is a TrueNAS VM running on the same machine as the LXC containers, so when the host boots, the containers using the mount point fail to start because TrueNAS is not ready yet. I had the same kind of issue with Docker a few years ago and solved it by customizing the docker service, but I can't figure out how to handle this on Proxmox.
What's the best practice here? I suppose I could use lxc.start.delay = 60, but maybe there is a better way? What if the NAS, for some reason, starts slowly and is not ready within 60 seconds? Is there a way, as with Docker, to start the LXC only once the NAS is ready?

Thanks :)

Code:
proxmox-ve: 8.2.0 (running kernel: 6.8.12-4-pve)
pve-manager: 8.2.9 (running version: 8.2.9/98c7f34632fee424)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-4
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
ceph-fuse: 17.2.7-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
intel-microcode: 3.20240910.1~deb12u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.12
libpve-storage-perl: 8.2.8
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.2.9-1
proxmox-backup-file-restore: 3.2.9-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.3.1
pve-cluster: 8.0.10
pve-container: 5.2.2
pve-docs: 8.2.4
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.0.7
pve-firmware: 3.14-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.4
pve-qemu-kvm: 9.0.2-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.7
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1

TrueNAS VM:
Code:
agent: 1,fstrim_cloned_disks=1
balloon: 0
boot: order=scsi0;net0
cores: 4
cpu: x86-64-v2-AES
hostpci0: 0000:00:17.0
machine: q35
memory: 8192
meta: creation-qemu=8.0.2,ctime=1691483063
name: TrueNAS
net0: virtio=8A:0A:F9:B1:71:53,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-100-disk-0,discard=on,iothread=1,size=32G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=979af2d1-e4b8-4c4e-8d38-a854bae8be68
sockets: 1
startup: order=1
vmgenid: d067e4d4-fdbc-46ca-8557-c6fa1281f524

LXC Container:
Code:
arch: amd64
cores: 2
description: <div align='center'><a href='https://Helper-Scripts.com' target='_blank' rel='noopener noreferrer'><img src='https://raw.githubusercontent.com/tteck/Proxmox/main/misc/images/logo-81x112.png'/></a>%0A%0A  # Alpine-Docker LXC%0A%0A  <a href='https://ko-fi.com/proxmoxhelperscripts'><img src='https://img.shields.io/badge/&#x2615;-Buy me a coffee-blue' /></a>%0A  </div>%0A
features: keyctl=1,nesting=1
hostname: docker
memory: 4096
mp0: local-lvm:vm-200-disk-1,mp=/docker,backup=1,size=60G
mp1: /mnt/NAS/share/,mp=/mnt/nas
nameserver: 8.8.8.8
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=BC:24:11:52:56:83,ip=192.168.1.40/24,type=veth
onboot: 1
ostype: alpine
rootfs: local-lvm:vm-200-disk-0,size=8G
swap: 512
tags: proxmox-helper-scripts
unprivileged: 1
 
Use boot order and start delay in the GUI. This is exactly what I do: I have a VM hosting an SMB share that an LXC needs for a mount point, and the LXC won't boot unless the VM is up first.
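For reference, the same GUI settings can be applied from the host shell. A sketch using the VMIDs visible in the configs above (100 for the TrueNAS VM, 200 for the container; adjust to your own IDs):

```shell
# Start the TrueNAS VM first and make PVE wait 180 s before starting the next guest
qm set 100 --startup order=1,up=180

# Give the NAS-dependent container a later slot in the boot order
pct set 200 --startup order=2
```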
 
Hi, thanks for your reply!
So a fixed number of delay seconds is the only solution?
 
I think you might be able to do something with scripts, maybe an @reboot cron job, but I'm no expert on this and the simple delay works fine for me.
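One way to sketch the cron idea: a small script that polls until the share is visible, then starts the container. The path, VMID, and timeout below are placeholders taken from the configs in this thread, not a tested recipe:

```shell
#!/bin/sh
# Wait for a directory to appear, with a timeout.
# wait_for_dir <dir> <timeout-seconds>: returns 0 once <dir> exists, 1 on timeout
wait_for_dir() {
    dir=$1
    timeout=$2
    waited=0
    while [ ! -d "$dir" ]; do
        [ "$waited" -ge "$timeout" ] && return 1
        sleep 1
        waited=$((waited + 1))
    done
    return 0
}

# Example use (commented out; run from an @reboot cron entry):
# wait_for_dir /mnt/NAS/share 300 && pct start 200
```

Installed via `crontab -e` with a line like `@reboot /root/wait-for-nas.sh`, with autostart (onboot) disabled on the container so only the script starts it.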
 
I recently set up something similar, as I wanted my NAS to start first and shut down last. On the TrueNAS VM select Options, then edit the Start/Shutdown order. I set order = 1 and startup delay = 180, and left shutdown at the default. You do not need to enter anything on the other LXCs or VMs for this setting. With order 1, TrueNAS will be the first VM to start, and since shutdown uses the reverse priority, it will be the last one to shut down.
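With those settings, the VM's config (visible in /etc/pve/qemu-server/100.conf on the host) ends up with a startup line like this, instead of the plain `startup: order=1` shown earlier in the thread:

```
startup: order=1,up=180
```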
 
On the TrueNAS VM select Options, then edit the Start/Shutdown order. I set order = 1 and startup delay = 180, and left shutdown at the default.
To be clear: this only works if you don't use a cluster with an HA config for the VM + LXC, since these menu options are ignored in that case. When the host boots, it will start the TrueNAS VM first and then wait 3 minutes. I don't know how long PVE waits before issuing the next VM/LXC start commands, but the LXC could still be up faster than the TrueNAS VM.
 
Alternatively, you could disable auto-start for these containers and instead run a script via cron/@reboot that monitors your NAS availability, and only then does pct start on the containers that need the NAS.
Right, sometimes there are requirements that can't be met the built-in way, and you have to be a little innovative and find your own way with scripts that run mostly automatically, leaving the implemented PVE mechanisms unused.
 
E.g. I prefer using AutoFS (it recovers nicely), so I don't have to care about mounting. Hence something like this:

Bash:
#!/bin/bash

# Check interval in seconds
CHECK_INTERVAL=5
# Maximum wait time in seconds (5 minutes)
MAX_WAIT=300

# Find containers with /cifs in their config
mapfile -t CONTAINERS < <(
    grep -l "/cifs" /etc/pve/lxc/*.conf 2>/dev/null | \
    xargs -I{} basename {} .conf
)

# Verify we found containers
if [[ ${#CONTAINERS[@]} -eq 0 ]]; then
    echo "Error: No containers with '/cifs' found in configuration files"
    exit 1
fi

echo "Found containers to start: ${CONTAINERS[*]}"

start_time=$(date +%s)
error_sent=0

echo "Monitoring /cifs/syn.bruc availability..."

while true; do
    current_time=$(date +%s)
    elapsed_time=$((current_time - start_time))

    # Send error notification if timeout reached and not already sent
    if [[ $elapsed_time -ge $MAX_WAIT ]] && [[ $error_sent -eq 0 ]]; then
        echo "Timeout reached! Sending error notification..."
        mosquitto_pub -h mqtt.bruc -t bruc/living/server/spino/error -m "can't connect to NAS"
        error_sent=1
    fi

    # touch the path so AutoFS attempts the mount (suppress output)
    ls -l /cifs/syn.bruc >/dev/null 2>&1
    # now check if the directory has appeared
    if [[ -d "/cifs/syn.bruc" ]]; then
        echo "Directory found! Starting containers..."
    
        # Start all containers in sequence
        for container in "${CONTAINERS[@]}"; do
            echo "Starting container $container"
            pct start "$container"
            sleep 1  # Add small delay between starts
        done
    
        echo "All containers started successfully."
        exit 0
    fi
 
    sleep $CHECK_INTERVAL
done
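If you go this route, the pieces to wire it together would look something like the following (the script path is a placeholder, and 200 is the container ID from the config earlier in the thread):

```shell
# Let the script, not PVE, start the NAS-dependent container
pct set 200 --onboot 0

# Run the watcher once at boot from root's crontab
( crontab -l 2>/dev/null; echo "@reboot /root/start-after-nas.sh" ) | crontab -
```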
 
Yeah, shell scripting takes knowledge and is lots of fun when it's working, too. Go on and have fun! :)
 