All Helper Scripts Fail

JakeBr

New Member
Feb 24, 2026
Hey there, I'm new to Proxmox and LXC so I've been trying to stand up apps using helper scripts. Every attempt, regardless of the application, fails with exit code 255 approximately 2 minutes after creation.

The Symptoms:
  1. The script successfully creates the container.
  2. I manually start the container.
  3. The script hangs for about 2 minutes.
  4. After the timeout, it fails with: in line 3814: exit code 255 (DPKG: Fatal internal error): while executing command pct start "$CTID".
The Environment:
  • PVE Version: 9.1.5 (pve-manager: 9.1.5)
  • Kernel: 6.17.9-1-pve
  • Storage: LVM-Thin (pve/data)
  • Template: debian-13-standard_13.1-2_amd64.tar.zst
What I have tried:
  1. AppArmor: Currently disabled at the kernel level (apparmor=0 in GRUB). aa-status confirms it is not running.
  2. Max Release: Modified /usr/share/perl5/PVE/LXC/Setup/Debian.pm to allow versions up to 15.
  3. Tested both Privileged and Unprivileged modes; same result.
  4. Nesting and Keyctl enabled or disabled; same result.

Any input or suggestions would be greatly appreciated.


I've had the same error installing the smokeping script; it's a different line number but the same effect. The error isn't generated by the install script as such, but by dpkg raising a fatal internal error. The script is trying to start a container using its container ID.

My environment:
  • PVE Version 8.4.11 (Kernel: 6.8.12-13-pve) pve-manager/8.4.11
  • Template debian-13-standard_13.1-2_amd64.tar.zst [local]
  • Storage: nas mounted share
So there's something else going on; a swift look in dpkg.log shows [in my case]:

Code:
status triggers-pending libc-bin:amd64 2.36-9+deb12u10
2026-04-21 11:09:28 status half-configured lxc-pve:amd64 6.0.0-1
2026-04-21 11:09:28 status unpacked lxc-pve:amd64 6.0.0-1
2026-04-21 11:09:28 status half-installed lxc-pve:amd64 6.0.0-1
2026-04-21 11:09:28 status triggers-pending man-db:amd64 2.11.2-2
2026-04-21 11:09:28 status unpacked lxc-pve:amd64 6.0.0-2
2026-04-21 11:09:28 upgrade pve-container:all 5.3.0 5.3.3
2026-04-21 11:09:28 status triggers-pending pve-ha-manager:amd64 4.0.7
2026-04-21 11:09:28 status triggers-pending pve-manager:all 8.4.11
2026-04-21 11:09:28 status half-configured pve-container:all 5.3.0
2026-04-21 11:09:28 status unpacked pve-container:all 5.3.0
2026-04-21 11:09:28 status half-installed pve-container:all 5.3.0
2026-04-21 11:09:28 status unpacked pve-container:all 5.3.3

This is because the smokeping install script prompted me with
 An update for the Proxmox LXC stack is available
pve-container: installed=5.3.3 candidate=5.3.4
lxc-pve : installed=6.0.0-2 candidate=6.0.0-2

[1] Upgrade LXC stack now (recommended)
[2] Use an older debian template instead (may not work with all scripts)
[3] Cancel

I chose option 1, which means I have broken the dpkg installation on this node [I have a two-node cluster (yeah, I know); it will be three]. Running the install script on the second node generates the update prompt as well, so I am going to fix dpkg first.


Tried fixing dpkg, ran apt update, ran apt upgrade, and the node is now completely borked. It will not boot using any existing boot image nor into recovery mode; it stalls at about 5 seconds in. It is version 8.x. Putting debug onto the GRUB boot showed a very slow disk mount, so more patience is required.
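For reference, the usual first aid for a half-configured dpkg on Debian is something along these lines (nothing Proxmox-specific, and it won't help if the repositories or the boot loader are the actual problem):

Code:
# finish configuring anything dpkg left half-installed
dpkg --configure -a
# resolve broken or missing dependencies
apt --fix-broken install
# list packages dpkg still considers incomplete
dpkg --audit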

What I should have noticed at the end of the upgrade were two messages:

Run this:

Code:
echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' | debconf-set-selections -v -u

Then this:

Code:
apt install --reinstall grub-efi-amd64

Before rebooting.
My second node was fine. The first node is still very slow to boot; without a console attached it looks frozen, but it's taking two minutes to mount a disk. Still figuring out what is happening.
 
Hey there, I'm new to Proxmox and LXC so I've been trying to stand up apps using helper scripts. Every attempt, regardless of the application, fails with exit code 255 approximately 2 minutes after creation.

The Symptoms:
  1. The script successfully creates the container.
  2. I manually start the container.
  3. The script hangs for about 2 minutes.
  4. After the timeout, it fails with: in line 3814: exit code 255 (DPKG: Fatal internal error): while executing command pct start "$CTID".
The Environment:
  • PVE Version: 9.1.5 (pve-manager: 9.1.5)
  • Kernel: 6.17.9-1-pve
  • Storage: LVM-Thin (pve/data)
  • Template: debian-13-standard_13.1-2_amd64.tar.zst
What I have tried:
  1. AppArmor: Currently disabled at the kernel level (apparmor=0 in GRUB). aa-status confirms it is not running.
  2. Max Release: Modified /usr/share/perl5/PVE/LXC/Setup/Debian.pm to allow versions up to 15.
  3. Tested both Privileged and Unprivileged modes; same result.
  4. Nesting and Keyctl enabled or disabled; same result.

Any input or suggestions would be greatly appreciated.



I would run apt update then apt dist-upgrade and reboot the node, then try to install again [N.B. post edited after it was quite rightly pointed out that not using apt dist-upgrade can break PVE; I don't want to propagate bad advice into search results].
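In other words, the recommended sequence boils down to:

Code:
apt update
apt dist-upgrade   # apt full-upgrade is equivalent on current apt versions
# then reboot, especially if a new kernel was pulled in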


AND check the status of the dpkg log files in /var/log - expect to see each package update appear at least three times, for example

Code:
2026-04-23 08:41:23 trigproc man-db:amd64 2.11.2-2 <none>
2026-04-23 08:41:23 status half-configured man-db:amd64 2.11.2-2
2026-04-23 08:41:25 status installed man-db:amd64 2.11.2-2
I'm no expert, but as long as you haven't got any incomplete packages your updates should be good.
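If you'd rather not read the log by eye, dpkg can also report anything it considers incomplete directly:

Code:
# list packages left unconfigured or half-installed
dpkg --audit
# anything not in the normal 'ii' state deserves a look
dpkg -l | grep -v '^ii'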
 
OK, could you explain why not? It would be much appreciated. If it's not best practice, it would be good to know what is - thank you.

N.B. I have changed my forum settings to dark mode; the higher contrast helps readability, so links become clear. The default orange/grey on white isn't quite so good for my vision.
 
OK, could you explain why not? It would be much appreciated. If it's not best practice, it would be good to know what is - thank you.
It's literally in the link posted by impact where Proxmox developer Fiona Ebner explains it: because of Proxmox-specific differences to vanilla Debian, things might break otherwise. She also links some relevant forum and reddit threads with examples of breakage.
The PVE manual also clearly states to use dist-upgrade: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#system_software_updates
 
It's literally in the link posted by impact where Proxmox developer Fiona Ebner explains it: because of Proxmox-specific differences to vanilla Debian, things might break otherwise. She also links some relevant forum and reddit threads with examples of breakage.
The PVE manual also clearly states to use dist-upgrade: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#system_software_updates
OK, so the problem wasn't apt update itself, but NOT performing an apt dist-upgrade.

As per https://pve.proxmox.com/wiki/Downlo...Proxmox_Virtual_Environment_8.x_to_latest_8.4.

That now makes sense, and I appreciate the guidance [note to self, Proxmox on Debian != Debian].
To get from 8.4 to 9.1, this is the correct path: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#Prerequisites.

What I am going to do is add another node to my cluster [resolving the cluster quorum challenge], built from an ISO using the latest 9.1-1 and go from there. But everything will be backed up first, and there will be no workload on the third node.
 
It's literally in the link posted by impact where Proxmox developer Fiona Ebner explains it: because of Proxmox-specific differences to vanilla Debian, things might break otherwise. She also links some relevant forum and reddit threads with examples of breakage.
The PVE manual also clearly states to use dist-upgrade: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#system_software_updates
My bad.

The almost-hidden link is what I dislike most about modern browsers and usability; having to hover the mouse over a two-word response to figure out it's a hyperlink to something important is intensely frustrating.

I have indifferent eyesight, owing to spending 40+ years in front of a monitor. If it's a link, it should look like one, not merely text in an alternate colour. I shall have to look more closely; thank you for your patience.
 
What I am going to do is add another node to my cluster [resolving the cluster quorum challenge], built from an ISO using the latest 9.1-1 and go from there. But everything will be backed up first, and there will be no workload on the third node.
I would strongly recommend NOT adding a third node if it's just for quorum. First, a real node also needs a dedicated corosync network adapter, which a qdevice doesn't need: https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
This device doesn't need to do much; it just has to be able to run Debian (or any other Linux which ships the package), making things like old PCs or Raspberry Pis ideal for it.
Second, it's possible to use Proxmox Backup Server as a qdevice, which also gives you the benefits of a dedicated backup device and a qdevice on the same physical machine. There is one caveat though: by default, after adding the qdevice, each member of the cluster can also connect via ssh to the qdevice, so any bad actor who takes over your cluster can also wreak havoc on your backups. To avoid this you can remove the ssh access. Fellow community member @UdoB describes the process in the following post:
https://forum.proxmox.com/threads/f...nd-secure-it-by-disabling-root-access.181525/
To have real separation it might make sense to install Proxmox Backup Server and PVE bare metal together on the node and afterwards set up a small Debian LXC or VM just for the qdevice. Note that you don't install the PBS as a VM; the idea is to have working restores even if PVE breaks for whatever reason. Such a setup was described by Proxmox staff member aaron in the following thread:
https://forum.proxmox.com/threads/2-node-ha-cluster.102781/post-442601
You also wouldn't add this single-node install to the cluster, so everything stays isolated.
Personally I would do both: set up a combined qdevice/PBS as explained by Aaron and afterwards secure it as explained by Udo.
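For the qdevice itself, the steps from the linked Cluster Manager chapter are roughly the following; the IP is a placeholder for whatever box runs the qnetd daemon:

Code:
# on the external device (any Debian machine)
apt install corosync-qnetd
# on every cluster node
apt install corosync-qdevice
# then, from one of the cluster nodes
pvecm qdevice setup <QDEVICE-IP>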
 
The link looks pretty orange and obvious to me :confused:

If you KNOW a link is orange, fine, but I didn't, and the significance was lost because a terse "You shouldn't" isn't clear to a novice who hasn't spent years using this fantastic toolset, nor had the benefit of having seen the excellent email which explains WHY.

I should have spotted that it was a link, but I didn't. The original point was that the OP had asked a question and the first answer said to look in the install scripts; I did, and the problem isn't in the install script. I was just having a bad day when I missed that your reply contained a link. It's not like the days of Firefox 2.0, when an unvisited link was blue and a visited link was purple - that's almost 30 years ago, BTW.

I now know this forum has orange links and I will try not to make a mistake like that again. I tripped over your GitHub hints; they are great... I run a DokuWiki site for the same reason; I have a memory like a sieve these days.
 
I would strongly recommend NOT adding a third node if it's just for quorum. First, a real node also needs a dedicated corosync network adapter, which a qdevice doesn't need: https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
This device doesn't need to do much; it just has to be able to run Debian (or any other Linux which ships the package), making things like old PCs or Raspberry Pis ideal for it.
Second, it's possible to use Proxmox Backup Server as a qdevice, which also gives you the benefits of a dedicated backup device and a qdevice on the same physical machine. There is one caveat though: by default, after adding the qdevice, each member of the cluster can also connect via ssh to the qdevice, so any bad actor who takes over your cluster can also wreak havoc on your backups. To avoid this you can remove the ssh access. Fellow community member @UdoB describes the process in the following post:
https://forum.proxmox.com/threads/f...nd-secure-it-by-disabling-root-access.181525/
To have real separation it might make sense to install Proxmox Backup Server and PVE bare metal together on the node and afterwards set up a small Debian LXC or VM just for the qdevice. Note that you don't install the PBS as a VM; the idea is to have working restores even if PVE breaks for whatever reason. Such a setup was described by Proxmox staff member aaron in the following thread:
https://forum.proxmox.com/threads/2-node-ha-cluster.102781/post-442601
You also wouldn't add this single-node install to the cluster, so everything stays isolated.
Personally I would do both: set up a combined qdevice/PBS as explained by Aaron and afterwards secure it as explained by Udo.

Johannes, thank you. I already have PBS on a separate machine, with a hardware RAID backup disk and an iSCSI mount to a QNAP NAS.

My entire cluster runs on Intel Mac Minis: on eBay they're around 50-60 GBP, take up to 16GB RAM and any size SSD you can afford, i7 quad-core/8-thread if you can find them, low power and quiet. A four-drive NAS and four Mac Minis use less than 250W and take up less space than a single old-style tower PC.
Older ones have FireWire 400/800, but the disks are not recognised and I haven't figured out why, so they run over USB3 - not mega fast, but fast enough.

The NAS offers regular CIFS, which works fine, but iSCSI from the PBS has never worked. The initial proxmox-backup-manager datastore create nasbackup /mnt/datastore/nas2_pmoxbackup command seemed to work OK, creating chunks, but it's ext4 and backups to it fail, even though it's writable from the CLI.
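For my own notes, the next things to check, assuming PBS's usual requirement that the datastore path (including .chunks) is owned by the backup user:

Code:
# confirm the datastore is registered and points at the right path
proxmox-backup-manager datastore list
# PBS writes as the 'backup' user, so ownership on the mount must allow that
ls -ld /mnt/datastore/nas2_pmoxbackup /mnt/datastore/nas2_pmoxbackup/.chunks
chown -R backup:backup /mnt/datastore/nas2_pmoxbackup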


Newer minis have Thunderbolt; mine have Thunderbolt-to-Gigabit-Ethernet adapters and a completely isolated gigabit network over which to run corosync.
My two-node cluster broke due to a hardware problem, easily fixed, but that set me down the three-node cluster path and I bought another mini. I had wondered if a PBS could become a component of the cluster and whether a qdevice would solve that.
 
My two-node cluster broke due to a hardware problem, easily fixed, but that set me down the three-node cluster path and I bought another mini. I had wondered if a PBS could become a component of the cluster and whether a qdevice would solve that.
I'm not sure I understand what you want to achieve. You could install Proxmox VE and Proxmox Backup Server bare metal on the node and afterwards add it to the cluster, but I wouldn't do this: since every cluster member can log in via ssh to the other cluster members, everybody who can access your cluster will also be able to access your backups. This is usually not what you want in case some bad actor manages to take over your cluster.
So if you want to have a backup server and a qdevice on the same node, a setup like the one by Aaron makes more sense: install Proxmox VE and PBS bare metal on the same node but don't add it to the cluster. Instead, set up a small VM/container just for the qdevice.

In your case however (with three Mac Minis, a separate PBS and a sufficient network setup) a three-node cluster has the advantage that you will have more compute power in the cluster.
In theory you could even build a small Ceph cluster, but due to your network constraints (for Ceph you will want at least 10 Gbit/s, see https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/ ) I wouldn't do this. Stick with ZFS storage replication for HA.
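If it helps, a replication job can be set up in the GUI (Datacenter -> Replication) or on the CLI with pvesr; the VM ID, target node and schedule below are only placeholders:

Code:
# replicate guest 100 to node 'pve2' every 15 minutes (ZFS-backed storage needed on both sides)
pvesr create-local-job 100-0 pve2 --schedule '*/15'
# check the state of all replication jobs
pvesr status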

Did I understand you correctly that your current hardware consists of a NAS, three Mac Minis and another device for PBS?
 
I totally get the concern about security with PBS being part of the cluster. Keeping backups separate and secure makes a lot of sense, and Aaron's setup of running PBS and qdevice on separate VMs/containers seems like a smart move.
One important point though: Aaron doesn't run PBS as a VM or container. Instead it's installed on an existing PVE host, so it will still work if Proxmox VE fails for whatever reason: https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-server-on-proxmox-ve

The only guest of Proxmox VE is the qdevice. Such a setup has one caveat though: if (like with the upgrades of PVE 8 to PVE 9 or PBS 3 to PBS 4) the Debian base has a major upgrade, you need to wait until an upgrade for both PVE and PBS is available. But this isn't much of a deal; you just need to wait one or two months after the first release (usually PVE).
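For reference, installing PBS on an existing PVE host boils down to adding the PBS package repository (see the linked installation docs for the repository line matching your Debian release) and then:

Code:
apt update
apt install proxmox-backup-server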
 
I'm not sure I understand what you want to achieve. You could install Proxmox VE and Proxmox Backup Server bare metal on the node and afterwards add it to the cluster, but I wouldn't do this: since every cluster member can log in via ssh to the other cluster members, everybody who can access your cluster will also be able to access your backups. This is usually not what you want in case some bad actor manages to take over your cluster.
So if you want to have a backup server and a qdevice on the same node, a setup like the one by Aaron makes more sense: install Proxmox VE and PBS bare metal on the same node but don't add it to the cluster. Instead, set up a small VM/container just for the qdevice.

In your case however (with three Mac Minis, a separate PBS and a sufficient network setup) a three-node cluster has the advantage that you will have more compute power in the cluster.
In theory you could even build a small Ceph cluster, but due to your network constraints (for Ceph you will want at least 10 Gbit/s, see https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/ ) I wouldn't do this. Stick with ZFS storage replication for HA.

Did I understand you correctly that your current hardware consists of a NAS, three Mac Minis and another device for PBS?

The fourth device is another Mac Mini, but older and with less memory. ZFS is something I am starting to explore - the replication feature is 'desirable' given the possibility of needing to maintain some services.

If I were to go 10 gig I'd need expensive, power-hungry switches and a level of complexity I just don't need. Whilst I was still working I experimented with OpenStack/Ceph; the update/refresh rate meant I spent more time maintaining it than using it. I just want something that's reliable and offers some redundancy for services. I could do everything on Raspberry Pis, one for each service, but where's the fun in that? It's just messy, more expensive to do properly and harder work than running a mini PC - on that I totally agree with @Impact.
 
I totally get the concern about security with PBS being part of the cluster. Keeping backups separate and secure makes a lot of sense, and Aaron's setup of running PBS and qdevice on separate VMs/containers seems like a smart move.

As for the hardware, yeah, a NAS and the Mac Minis should give you enough resources for a stable cluster. I agree, Ceph might be a bit overkill given the network limitations. ZFS replication for HA sounds like a solid choice for now. How’s the network setup looking on your end? Do you have a 10G connection, or are you sticking with 1G for now?

1 gig switches for the primary network, and a completely separate 1 gig switch for corosync. I binned all my Cisco switches after a house move: big, heavy, noisy, power hungry. Two Ciscos use more power than my entire setup. 1 gig is fine. ZFS I just need to figure out, because as I built the systems I didn't appreciate the possible benefits, so there will be a lot of VM storage migration happening over time to make the drives available to rebuild as ZFS.
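The move itself is straightforward once a ZFS-backed storage exists; the IDs and the storage name below are placeholders:

Code:
# move a VM disk onto the ZFS-backed storage and drop the old copy
qm disk move 100 scsi0 local-zfs --delete 1
# the container equivalent
pct move-volume 200 rootfs local-zfs --delete 1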
 