Proxmox Helper Scripts - Do you all just spin up many LXCs or try to combine them??

HeneryH

Member
Jan 20, 2023
I know that using LXC containers is more resource-friendly than using full VMs because all the containers share the host kernel and some other things. Cool.

Now I have run into many different apps that are easily deployable using the Helper Scripts, and it is super cool that with one click they are up and running.

I'm sure many of you have experienced the same.

But... is there any worry about wasting resources by having so many LXC containers running for some pretty simple apps? Do you spend any time trying to consolidate them onto a single LXC running Portainer, for instance?

I started with Home Assistant and saw that it spun up Portainer. Great. But then I add another app and it spins up a separate Portainer instance for that app.

Am I worrying about nothing?
 
Containers are really very lightweight. While a typical (virtualized) Linux system might have a few dozen processes running, most of them are idle the majority of the time and all of that memory can be evicted. So, as a first approximation, the marginal cost in terms of RAM is relatively minor. And in that case, having separate containers instead of one big one results in much easier maintenance.
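If you want to put a number on it, a rough way is to ask LXC for each running container's memory use right on the PVE host. This is just a sketch; it assumes the lxc-info utility is installed (it normally is on PVE) and that your containers are the ones pct manages:

Bash:
# Rough per-container memory usage, run on the Proxmox host.
# Output depends on lxc-info being able to read the container's cgroup.
for id in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
  printf '%-6s %s\n' "${id}" \
    "$(lxc-info -n "${id}" 2>/dev/null | sed -n 's/^Memory use: *//p')"
done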

Maybe, if you extensively used Docker, the argument would be a little different. Docker is its own virtualization solution that comes with its own set of maintenance tools. If that abstraction works for you, having a single (virtualized) Docker host might or might not fit your use case. That's ultimately a personal decision.

This leaves the question of disk usage. Having multiple (almost identical) containers can increase disk usage. Most containers are relatively small, and disk is cheap, but this could still turn out to be a problem over time. Even if you have a linked container on a filesystem that supports shared storage (e.g. ZFS), deduplication is usually turned off. That means that as you keep upgrading your containers, they accumulate more and more disk blocks that are identical but stored separately.
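If you are curious how far your containers have actually drifted from their template, ZFS can show you: the origin column names the template snapshot a linked clone is based on, and used is the space unique to the clone. Something along these lines, assuming your containers live under the usual rpool/data dataset (adjust to your pool layout):

Bash:
# "origin" is empty for stand-alone datasets and shows the template snapshot
# for linked clones; "used" is the space that has diverged since cloning.
zfs list -t filesystem -o name,origin,used,refer -r rpool/data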

If that's a problem, you have to manually deduplicate by repairing the reference to the shared template. This is a little tedious but can be done with standard userland tools. I don't really need this as I have enough disk storage, but I did create a script that can do the job:

Bash:
#!/bin/bash -e

# This script allows for space-efficient linking of containers to their
# template. Once the template has diverged significantly, the script can
# be re-run to rebase the container and recover any wasted space.
#
# By passing the "--full" command line option, the linking with the
# template is broken and a stand-alone copy is created.
#
# Snapshots can either be "--preserve"d or "--prune"d from the new copy.

export LC_ALL=C
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Command line arguments.
linked=t
preserve=t
verbose=
while [[ "${1}" =~ ^- ]]; do
  case "${1}" in
    --) shift; break;;
    --linked) linked=t;;
    --full) linked=;;
    --preserve) preserve=t;;
    --prune) preserve=;;
    --verbose) verbose=-v;;
    *) set -- ;;
  esac
  shift
done

desc="/etc/pve/nodes/$(uname -n)/lxc/${1//\//}.conf"
templ="/etc/pve/nodes/$(uname -n)/lxc/${2//\//}.conf"
[ $# -eq 2 -o \( $# -eq 1 -a -z "${linked}" \) ] || {
  cat >&2 <<EOF
Usage: ${0##*/} [--linked] [--full] [--preserve] [--prune] [--verbose] <container id> <template id>
  --linked:   Create a linked clone from a ZFS snapshot of the template.
  --full:     Create a full copy that doesn't require the snapshot.
  --preserve: Preserve snapshots if present.
  --prune:    Discard any existing snapshots from container.
  --verbose:  Print verbose progress messages.
EOF
  exit 1
}

# Find the storage location for this container. This assumes that Proxmox
# uses ZFS. The rest of the script needs ZFS features to manage snapshots.
[ -r "${desc}" ] || {
  echo "Cannot find configuration file for container $1" >&2; exit 1; }
host="$(sed 's/^hostname: //;t1;d;:1;q' "${desc}")"
rootfs="$(sed 's/^rootfs: //;t1;d;:1;q' "${desc}")"
pool="${rootfs%%:*}"
rootfs="${rootfs#*:}"
opt="${rootfs#*,}"
rootfs="${rootfs%%,*}"
vol="$(sed -n "/^zfspool: ${pool}$/,/^$/{s/.*pool //;t1;d;:1;p;q}" \
              /etc/pve/storage.cfg)"
rootdir="$(zfs list "${vol}" 2>/dev/null|awk 'END { print $5 }')/${rootfs##*/}"
zfspath="${vol}/${rootfs##*/}"

# If we couldn't find the ZFS directory where the container is mounted, abort
# now before things go wrong later.
[ -n "${rootdir}" -a "${rootdir}" != '/' -a -d "${rootdir}" ] &&
  mountpoint -q "${rootdir}" || {
  echo "Cannot find base disk for container ${host} [${1}]" >&2
  exit 1
}

if [ $# -eq 1 ]; then
  templ=
else
  # Find the storage location for this template. This assumes that Proxmox
  # uses ZFS. The rest of the script needs ZFS features to manage snapshots.
  [ -r "${templ}" ] || {
    echo "Cannot find configuration file for template $2" >&2; exit 1; }
  thost="$(sed 's/^hostname: //;t1;d;:1;q' "${templ}")"
  trootfs="$(sed 's/^rootfs: //;t1;d;:1;q' "${templ}")"
  tpool="${trootfs%%:*}"
  trootfs="${trootfs#*:}"; trootfs="${trootfs%%,*}"
  tvol="$(sed -n "/^zfspool: ${tpool}$/,/^$/{s/.*pool //;t1;d;:1;p;q}" \
                 /etc/pve/storage.cfg)"
  trootdir="$(zfs list "${tvol}" 2>/dev/null |
              awk 'END { print $5 }')/${trootfs##*/}"
  tzfspath="${tvol}/${trootfs##*/}"

  # If we couldn't find the ZFS directory where the template is mounted, abort
  # now before things go wrong later.
  [ -n "${trootdir}" -a "${trootdir}" != '/' -a -d "${trootdir}" ] &&
    mountpoint -q "${trootdir}" || {
    echo "Cannot find base disk for template ${thost} [${2}]" >&2
    exit 1
  }

  # We only allow rebasing onto a template.
  grep -q '^template: 1' "${templ}" &&
  [[ "$(zfs list -t snapshot "${tzfspath}@__base__" 2>&1||:)" =~ @__base__ ]]||{
    echo "Container ${thost} [${2}] does not appear to be a template" >&2
    exit 1
  }
fi

# The container cannot be running while we do this.
[[ "$(pct status "${1}")" =~ "stopped" ]] || {
  echo "Container ${host} [${1}] must be stopped before rebasing" >&2
  exit 1
}

# Move the old container storage out of the way. Keep track of all operations
# that we have done so far. That allows us to undo changes, if anything fails
# unexpectedly.
olddir="${rootdir}.$$.old"
oldpath="${zfspath}.$$.old"
undo=( 'zfs rename "${oldpath}" "${zfspath}"' )
trap 'trap "" INT TERM QUIT HUP EXIT ERR
      echo "Rebasing failed; undoing all changes..."
      for i in "${undo[@]}"; do eval "${i}" || :; done
      exit 1' INT TERM QUIT HUP EXIT ERR
zfs rename "${zfspath}" "${oldpath}"

# Create a new linked clone of the template in place of the old container.
# If linking wasn't requested, create a new empty zfs filesystem.
# We understand a limited number of filesystem options that we retrieved from
# the Proxmox configuration file.
undo=( 'zfs destroy "${zfspath}"' "${undo[@]}" )
zfsopt="${opt//size=/refquota=}"
zfsopt="-o acltype=posix -o xattr=sa ${zfsopt:+-o }${zfsopt//,/ -o }"
if [ -n "${linked}" ]; then
  [ -z "${verbose}" ] || echo "Cloning ${host} [$1] from ${thost} [$2]" >&2
  zfs clone ${zfsopt} "${tzfspath}@__base__" "${zfspath}"
else
  [ -z "${verbose}" ] || echo "Creating stand-alone copy of ${host} [$1]" >&2
  zfs create ${zfsopt} "${zfspath}"
fi

# Copy snapshots if "--preserve" option was in effect. Otherwise, only copy
# the most recent version of the filesystem.
for snapshot in $([ -z "${preserve}" ] || zfs list -t snapshot "${oldpath}" |
                  sed 's/^[^@]*@\(\S*\)\s.*/\1/;t;d') ""; do
  # Rsync is a great way to create a complete and accurate copy of the old data.
  # It only writes data, if necessary (i.e. if changed).
  rsync ${verbose} -HAXaxyS --inplace --delete-after --no-whole-file \
        "${olddir}/${snapshot:+.zfs/snapshot/${snapshot}/}" "${rootdir}/"
  if [ -n "${snapshot}" ]; then
    [ -z "${verbose}" ] || echo "Preserving snapshot \"${snapshot}\"" >&2
    zfs snapshot "${zfspath}@${snapshot}"
  fi
done

# Remove snapshots from configuration file, if the new copy was created without
# them.
if [ -z "${preserve}" ]; then
  [ -z "${verbose}" ] || echo "Deleting snapshots (if any)" >&2
  sed -i '/^parent:/d;/^\[[^]]\+\]$/,//d' "${desc}"
  sed -i '${/^$/d}' "${desc}"
fi

# Adjust "rootfs" depending on whether the new copy is a linked clone.
sed -i 's`^rootfs:.*`rootfs: '"${pool}:${linked:+${trootfs##*/}/}${rootfs##*/},${opt}"'`' \
    "${desc}"

# Done successfully
trap '' EXIT
[ -z "${verbose}" ] || echo "Deleting backup of previous ${host} [$1]" >&2
zfs destroy -r "${oldpath}"
sync

Of course, this only makes sense if your template is actually kept up to date, so you need a way to occasionally open it interactively, make all the changes that you need (e.g. apt update && apt -y dist-upgrade), and then seal it again. I have another script that helps with this task:

Bash:
#!/bin/bash -e

# This script can be invoked as "edit-template" to make interactive changes
# to a PVE template, or as "commit-template" to commit changes to ZFS that
# have been made by external scripts.
#
# An optional command line argument specifies the id of the container.
#
# This script makes potentially invasive changes to the underlying ZFS
# storage side-stepping PVE's management of templates and containers. It is
# possible to seriously damage your cluster and you should only run this
# script if you have backups and feel comfortable with repairing any damage
# that could be caused.

export LC_ALL=C
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# If no arguments are given, assume that the default template has the id 100
templ="${1:-100}"

# Read a keypress without echo'ing nor requiring a RETURN
getkey() {
  (
    trap 'stty echo -iuclc icanon 2>/dev/null' EXIT INT TERM QUIT HUP
    stty -echo iuclc -icanon 2>/dev/null
    dd count=1 bs=1 2>/dev/null
  )
}

# Find the local configuration file for this container
desc="/etc/pve/nodes/$(uname -n)/lxc/${templ}.conf"
[ -r "${desc}" ] || {
  echo "Container ${templ} not found on $(uname -n)" >&2
  exit 1
}

# Check that this is in fact an LXC template
host="$(sed 's/^hostname: //;t1;d;:1;q' "${desc}")"
grep -q '^template: 1' "${desc}" || {
  echo "Container ${host} [${templ}] does not appear to be a template" >&2
  exit 1
}

# Find the storage location for this template. This assumes that Proxmox
# uses ZFS. The rest of the script needs ZFS features to manage snapshots.
rootfs="$(sed 's/^rootfs: //;t1;d;:1;q' "${desc}")"
pool="${rootfs%%:*}"
rootfs="${rootfs#*:}"; rootfs="${rootfs%%,*}"
vol="$(sed -n "/^zfspool: ${pool}$/,/^$/{s/.*pool //;t1;d;:1;p;q}" \
              /etc/pve/storage.cfg)"
rootdir="$(zfs list "${vol}" 2>/dev/null|awk 'END { print $5 }')/${rootfs##*/}"
zfspath="${vol}/${rootfs##*/}"

# If we couldn't find the ZFS directory where the template is mounted, abort
# now before things go wrong later.
[ -n "${rootdir}" -a "${rootdir}" != '/' -a -d "${rootdir}" ] &&
  mountpoint -q "${rootdir}" || {
  echo "Cannot find base disk for template ${host} [${templ}]" >&2
  exit 1
}

# Another sanity check. Templates have a "__base__" snapshot. If that doesn't
# exist, we don't know how to commit any of our changes.
[[ "$(zfs list -t snapshot "${zfspath}@__base__" 2>&1 || :)" =~ @__base__ ]] ||{
  echo "There is no \"__base__\" snapshot for template ${host} [${templ}]" >&2
  echo "This doesn't look like a well-formed Proxmox template" >&2
  exit 1
}

# If the script is invoked as "commit-template" instead of "edit-template", don't
# bother with making any changes. Just move the ZFS snapshot and assume that
# the user made changes outside of this script.
if ! [[ "${0##*/}" =~ commit-template ]]; then
  # Undo any system-wide changes, if we terminate unexpectedly.
  trap 'trap "" INT TERM QUIT HUP EXIT ERR
        pct stop "${templ}" || :
        sed -i "s/template: 0/template: 1/" "${desc}"
        exit 1' INT TERM QUIT HUP EXIT ERR

  # Temporarily turn the template into a full container, start it, then
  # after the user interactively made changes, turn it back into a template.
  sed -i 's/template: 1/template: 0/' "${desc}"
  echo "Entering container ${host} [${templ}]..."
  pct start "${templ}"
  pct enter "${templ}" || :
  pct stop "${templ}"
  sed -i "s/template: 0/template: 1/" "${desc}"

  # Since we are using ZFS snapshots, we might as well give the user one
  # last chance to abandon their changes.
  echo -n "Commit changes to container" \
          "$(sed 's/^hostname: //;t1;d;:1;q' "${desc}") [${templ}] (Y/n)"
  while :; do
    c="$(getkey | tr a-z A-Z)"
    case "${c^^}" in
      ''|Y) echo " yes"
            break
            ;;
      N)    echo " no"
            zfs rollback "${zfspath}@__base__"
            trap '' EXIT
            exit 0
            ;;
      *)    tput bel
            ;;
    esac
  done
fi

# Clear out the log files and SSH host keys. Then leave a marker that this is
# a template. That information can be very useful in scripts that are invoked
# from /usr/share/lxc/hooks
find "${rootdir}/"{tmp,run,var/cache,var/log} -type f -print0 |
  xargs -0 rm >&/dev/null || :
rm -rf "${rootdir}/"{etc/machine-id,var/log/journal/*,var/tmp/*}
find "${rootdir}/etc/ssh" -type f -name ssh_host\*key\* -print0 |
  xargs -0 rm >&/dev/null || :
rm -f "${rootdir}/"root/.bash_history
touch "${rootdir}/.is-template"

# Move the "__base__" snapshot forward to the current state, thus committing all
# changes. In the easiest case, we delete the old snapshot and then create a new
# one.
#
# But this gets more complex, if there are linked clones referenced from other
# containers. Deletion isn't possible, but in that case, we can instead rename
# the snapshot to a new and unique name.
#
# This can leave orphaned snapshots, if the linked container is later deleted.
# Now, would be a good time to garbage collect.
zfs list -t snapshot "${zfspath}" |
  sed 's/^[^@]*@\(__clone_[^_]*__\)\s.*/\1/;t;d' |
  while read -r clone; do
    # Any orphaned snapshot that matches the pattern __clone_XXX__ is destroyed.
    zfs list -o origin | grep -qF "${clone}" ||
      zfs destroy "${zfspath}@${clone}"
  done
if zfs list -o origin -r "${vol}" | grep -qF "${zfspath}@__base__"; then
  i=0
  while zfs list -t snapshot "${zfspath}" |
        sed '1d;s/^[^@]*@\(\S\+\).*/\1/' |
        grep -qF "__clone_${i}__"; do
    # Keep increasing the serial number, until we find a unique __clone_XXX__
    i=$((i+1))
  done
  # Rename the current snapshot, so that we reuse the __base__ label. Future
  # containers that are cloned from this template will be based on the newly
  # edited state of the template. Older linked containers are unaffected.
  zfs rename "${zfspath}@__base__" "${zfspath}@__clone_${i}__"
else
  # If there aren't any linked containers, simply delete and recreate the
  # snapshot.
  zfs destroy "${zfspath}@__base__" >&/dev/null || :
fi
# This is now the new state of the template.
zfs snapshot "${zfspath}@__base__"

trap '' EXIT
exit

Please note that I assume you are using ZFS, as that's what I use. If you are using something else, I unfortunately have no idea what to do. Both of these scripts side-step a lot of the abstractions that PVE provides and directly manipulate the underlying ZFS storage, so that might or might not be what you want.
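If you aren't sure what your containers are stored on, a quick sanity check before running either script could look like this (container id 101 is only an example):

Bash:
# The rootfs line names the storage; for these scripts it must refer to a
# "zfspool" entry in /etc/pve/storage.cfg.
grep '^rootfs:' /etc/pve/lxc/101.conf
grep -A3 '^zfspool:' /etc/pve/storage.cfg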
 
You hit the root of my question. Sounds like LXC containers are lightweight enough that having separate container stacks for each "Proxmox Application" is not that big of a deal.
 
Yes, in most practical scenarios the per-container overhead shouldn't really matter much. If you have a really tiny host, then every byte counts and this assumption might not hold true. We do occasionally see people trying to run Proxmox on a single Raspberry Pi or a spare left-over laptop with only 4GB of RAM and 128GB of disk, but that's the exception rather than the rule. On anything even moderately more capable, I'd recommend using dedicated containers until that actually becomes a problem, and only then investigate alternatives. The maintenance benefits of one container per application are significant. You can pack a ton of containers on a single node. VMs are a little more resource-intensive, but even they pack more densely than I would have naively assumed. I still prefer containers, though, when all I care about virtualizing is a single well-defined app.
 
I'm new to both Proxmox and LXCs, and while I get that when it comes to resource usage we can draw some parallels to Docker containers in being lightweight, LXCs feel way different to me. I would have multiple containers for my apps, but with tools like docker compose I would have a centralized place where they are defined and grouped together. Also, SSH-ing into them and keeping the base system updated just isn't a concern in the same way. And here is where my issue is: while they are containers, compared to Docker they feel like VMs in how I interact with them, and having so many of them just doesn't "feel right". Am I missing something here, at least from a maintenance perspective?
 
I stopped using LXC containers. Docker containers are just easier all around: easier to spin up, keep updated, and manage. It is also much easier to persist data with Docker containers using an NFS (or SMB) server and the Docker NFS volume driver. I spin up a VM to act as my Docker host and off I go. I think it probably uses fewer resources as well. I am running 21 different Docker containers, and all told they consume about 5 GB of memory and 16 GB of virtual disk for the OS and Docker. All of the Docker volumes are stored separately on my NAS.
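For what it's worth, those NFS-backed volumes are just Docker's built-in local driver with NFS options; roughly like this (the server address and export path are made up, adjust them to your NAS):

Bash:
# Create a Docker volume that mounts an NFS export from the NAS at container start.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,rw,nfsvers=4 \
  --opt device=:/export/appdata \
  appdata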
 
Yeah, this is a very intriguing problem for me, and right now my whole home lab is down while I try to decide which route I want to take. Creating a VM and dumping all my apps into it with Docker is definitely the simplest option, but I just can't settle on my security level: having things like Jellyfin, which I share with my family, and Immich, which only I use, on the same host feels a bit wrong. Separating them into different VMs again is not an option, as I have to pass through my GPU, which to my understanding can only be passed to a single VM. But on the other hand, running multiple LXCs is the same thing, multiple containers on the same host; they just feel different. So I may very well throw in the towel, go with the simpler option, and stop overthinking this. It's only local anyway? :D
 
Well, that's why it is a home lab. Experiment and do it all three ways. I have multiple Proxmox hosts primarily for this reason. I run a couple of services that really can't/shouldn't go down, or I will bear the wrath of my family, so I experiment on a second host. I started out running most everything in its own VM, but have gradually moved most services to Docker. There is a definite learning curve with Docker, but once you get the hang of it, it ends up being a LOT easier than VMs. I don't use GPU passthrough nor any of the -ARR suite, so YMMV in that regard.
 
I don't use helper scripts at all, since I consider them not very helpful in the end: people tend to use them to set up their services instead of learning how to actually do it, so if they run into problems they don't know where to start. It's also not good practice to download something from the Internet and run it without checking what it's actually doing (supply-chain attack). I think that for most use cases it's more practical to just use Docker or something similar (like Podman) to spin up a service if you don't want to go through the hassle of setting everything up by hand.
But with Docker it's not so practical to use LXCs anymore: the Proxmox developers recommend against running Docker inside LXCs, since such setups tend to break after updates:
https://pve.proxmox.com/wiki/Linux_Container

Instead you would use a lightweight Linux VM (e.g. Debian or Alpine for Docker, and some Red Hat derivative for Podman). If you install everything in one VM, this is still suitable for limited resources. Another benefit: instead of maintaining (e.g. installing system updates and doing further housekeeping on) several LXCs, you have just one or two VMs (two if you want to separate internal services, accessible only inside your LAN, from external ones, accessible from the Internet). And of course you can also install something like Nginx Proxy Manager or Portainer inside the VM to ease management, certificate deployment, and so on.
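Getting the reverse proxy going inside that VM is essentially a one-liner with Docker; an illustrative example with placeholder host paths (81 is Nginx Proxy Manager's admin UI, 80/443 are the proxied ports):

Bash:
# Nginx Proxy Manager inside the Docker VM.
docker run -d --name npm \
  -p 80:80 -p 81:81 -p 443:443 \
  -v /srv/npm/data:/data \
  -v /srv/npm/letsencrypt:/etc/letsencrypt \
  --restart unless-stopped \
  jc21/nginx-proxy-manager:latest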

Another problem with LXCs is that they can't directly mount network shares (except via bind mounts) if they are unprivileged (which is the recommended setup and the only one I would consider for any service reachable from outside your LAN). Instead you need to set up bind mounts and user id mapping, which can be quite a hassle, especially for beginners. This is also trouble you don't have with a VM.
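For anyone who hasn't run into it yet, this is roughly what the bind mount plus id mapping looks like: the example keeps container uid/gid 1000 aligned with host uid/gid 1000 and leaves everything else shifted. All ids, paths, and the container id are made-up examples, so adapt them to your setup and back up the config first:

Bash:
CTID=105                                    # example container id
cat >> /etc/pve/lxc/${CTID}.conf <<'EOF'
mp0: /tank/media,mp=/mnt/media
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
EOF
# The host must also be allowed to delegate uid/gid 1000 to the container:
grep -q '^root:1000:1' /etc/subuid || echo 'root:1000:1' >> /etc/subuid
grep -q '^root:1000:1' /etc/subgid || echo 'root:1000:1' >> /etc/subgid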
This doesn't mean that I don't use LXCs at all: they are fine for stuff that needs neither Docker nor network shares (e.g. Pi-hole or AdGuard, or trying out a Linux distribution in a kind of playground environment). I also use an LXC container as a kind of graphical terminal server ( https://forum.proxmox.com/threads/lxc-template-ubuntu-business-desktop-converted-to-lxc.27998/ ). It runs as a privileged container, but that's not much of a problem since it's only reachable inside my home network, and for graphical applications I don't want the performance hit of a VM. Another use case for LXCs is when I need the host's hardware but can't or don't want to pass it through to a VM, e.g. using the iGPU for hardware transcoding with Plex or Jellyfin: passing the iGPU through to a VM would mean that I couldn't use the console of the host anymore, while with an LXC the host and the LXCs can share the iGPU.
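The iGPU sharing mostly comes down to exposing the host's /dev/dri nodes to the container; a minimal sketch (device major 226 is the DRM subsystem, the container id is again just an example, and for unprivileged containers you still have to sort out group permissions on the render node inside the container):

Bash:
CTID=110                                    # example container id
cat >> /etc/pve/lxc/${CTID}.conf <<'EOF'
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
EOF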

So they have their use cases, but the popularity of LXCs and helper scripts on Reddit's /r/proxmox and /r/homelab subs just feels wrong to me: just because many people do something doesn't mean it's a good idea.
 