Problems starting LXC with PBS

xmftech

New Member
Jan 11, 2025
Hello,

I'm writing about a problem I'm having with my LXC container running PBS.
Some time ago I found a video explaining how to mount an NFS share on my Proxmox VE host as storage (I later switched it to SMB), and that storage was added to the LXC container options as a mount point. Once PBS starts, the datastore is created on this mount point. With the datastore created, a new storage is defined in Proxmox VE, which the backup service then uses for scheduled copies.
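
If it helps, the mount point line in the container config looks something like this (illustrative only; I'm writing the volume name from memory, so it may differ):

Code:
# /etc/pve/lxc/112.conf (excerpt; volume name approximate)
mp0: nas-synology-copies:112/vm-112-disk-0.raw,mp=/mnt/datastore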

When the host boots, everything works correctly: the container starts, the mount points and storages work, and so on.

The problem started when I decided I didn't want Proxmox constantly accessing my NAS: with the share mounted, the NAS can never go to sleep and wastes energy, and I only run backups between 4 and 5 in the morning.

The issue is that I set up cron jobs to disable the storages and stop the container, and when it is time to start the container again, it fails. I have checked everything I can think of, but I don't know what else to look at.

The error when I start the container is the following:

Code:
run_buffer: 571 Script exited with status 255
lxc_init: 845 Failed to run lxc.hook.pre-start for container "112"
__lxc_start: 2034 Failed to initialize container "112"
TASK ERROR: startup for container '112' failed

And the output of the pct start command with debugging enabled is this:

Code:
run_buffer: 571 Script exited with status 255
lxc_init: 845 Failed to run lxc.hook.pre-start for container "112"
__lxc_start: 2034 Failed to initialize container "112"
0 hostid 100000 range 65536
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "112", config section "lxc"
DEBUG    utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 112 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/mp0: can't read superblock on /dev/loop0.
dmesg(1) may have more information after failed mount system call.

DEBUG    utils - ../src/lxc/utils.c:run_buffer:560 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 112 lxc pre-start produced output: command 'mount /dev/loop0 /var/lib/lxc/.pve-staged-mounts/mp0' failed: exit code 32

ERROR    utils - ../src/lxc/utils.c:run_buffer:571 - Script exited with status 255
ERROR    start - ../src/lxc/start.c:lxc_init:845 - Failed to run lxc.hook.pre-start for container "112"
ERROR    start - ../src/lxc/start.c:__lxc_start:2034 - Failed to initialize container "112"
INFO     utils - ../src/lxc/utils.c:run_script_argv:587 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "112", config section "lxc"

TASK ERROR: startup for container '112' failed

Can someone tell me what I can check?

Thank you
 
Hi,
Some time ago I found a video explaining how to mount an NFS share on my Proxmox VE host as storage (I later switched it to SMB), and that storage was added to the LXC container options as a mount point. Once PBS starts, the datastore is created on this mount point. With the datastore created, a new storage is defined in Proxmox VE, which the backup service then uses for scheduled copies.
please allow me to note right away that this is not a fault-tolerant setup at all: if your PVE storage is not available, you cannot restore any of your backups. Please reconsider your setup; a dedicated host with fast local storage for the PBS is recommended, see https://pbs.proxmox.com/docs/installation.html#system-requirements.

The issue is that I set up cron jobs to disable the storages and stop the container, and when it is time to start the container again, it fails. I have checked everything I can think of, but I don't know what else to look at.
Did you verify that the storage on which the mount point disk resides is actually online and seen as available by PVE at the moment you try to start the container? Check the output of pvesm status at that point.
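
For example (a quick sketch, using the container ID from your log):

Code:
# Is the underlying storage active right now?
pvesm status

# Which storage does the container's mount point disk live on?
pct config 112 | grep ^mp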
 
Hi,

please allow me to note right away that this is not a fault-tolerant setup at all: if your PVE storage is not available, you cannot restore any of your backups. Please reconsider your setup; a dedicated host with fast local storage for the PBS is recommended, see https://pbs.proxmox.com/docs/installation.html#system-requirements.

I know it's probably not the best setup, but at the moment I only have one MiniPC with Proxmox. I set up the PBS container to try PBS out and to have one more backup. This PBS stores the entire datastore as a disk image on a Synology NAS.

(Attached screenshot: Captura de pantalla de 2025-04-10 22-24-01.png)

In the near future I'm looking into setting up a second MiniPC with OPNsense or pfSense, and maybe I could run PBS on a new node on that machine. I haven't set up OPNsense/pfSense yet because I thought the best approach was to dedicate an entire machine to it, to make sure the Internet connection always keeps working, but virtualizing it and having a new host is an option on the table.

Initially I set up my backup scenario following this video (it's in Spanish) using NFS. After that I switched to SMB because I'm more comfortable with user/password permissions than with IP-based authorization like NFS uses.

I don't know if there's a way to avoid mounting a storage in PVE and instead make the mount point inside the PBS container point directly at the NAS.
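
What I imagine is the host mounting the share and the container getting a bind mount, something like this (hypothetical share name and paths; I haven't tried it):

Code:
# /etc/fstab on the PVE host (share name is made up):
//192.168.1.238/pbs-datastore /mnt/nas-pbs cifs credentials=/root/.smb-credentials,_netdev 0 0

# /etc/pve/lxc/112.conf - bind-mount the host path into the container:
mp0: /mnt/nas-pbs,mp=/mnt/datastore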

I also have a Tuxis backup, so this isn't my only copy.

Did you verify that the storage on which the mount point disk resides is actually online and seen as available by PVE at the moment you try to start the container? Check the output of pvesm status at that point.

Here's the output of pvesm status:

Code:
root@host1:~# pvesm status
pbs-copies-nas-synology: error fetching datastores - 500 Can't connect to 192.168.1.238:8007 (No route to host)
Name                           Type     Status           Total            Used       Available        %
local                           dir     active       100597760        23617788        76979972   23.48%
local-lvm                   lvmthin     active       365760512        91366975       274393536   24.98%
nas-synology-copies            cifs     active      1911492164       329542540      1581949624   17.24%
pbs-copies-nas-synology         pbs   inactive               0               0               0    0.00%
pbs-copies-tuxis                pbs     active       157286400        42337408       114948992   26.92%

I think the problem is that when I stop the PBS container, something remains mounted or keeps trying to connect to some location, but I don't know how to check it.
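
If anyone can confirm: would something like this reveal a leftover loop device or mount (just my guess at the right commands)?

Code:
# Any stale loop device left over from the previous run?
losetup -a

# Anything still mounted via loop or CIFS?
findmnt | grep -Ei 'loop|cifs'

# Kernel messages from the failed mount attempt
dmesg | tail -n 20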

I also have these cron jobs in /var/spool/cron/crontabs/root, but the problem also happens when I stop/restart the PBS container via the GUI, so I assume cron behaves the same way:

Code:
03 4 * * * /usr/sbin/pvesm set nas-synology-copies -disable 0
05 4 * * * /usr/sbin/pct start 112
08 4 * * * /usr/sbin/pvesm set pbs-copies-nas-synology -disable 0

55 4 * * * /usr/sbin/pvesm set pbs-copies-nas-synology -disable 1
56 4 * * * /usr/sbin/pct stop 112
59 4 * * * /usr/sbin/pvesm set nas-synology-copies -disable 1
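
Maybe I should replace the fixed 04:05 start with a wrapper that waits until PVE reports the storage active? A rough, untested sketch:

Code:
#!/bin/sh
# Untested sketch: enable the CIFS storage, wait until pvesm reports it
# active, then start the PBS container. Names/IDs taken from this thread.
/usr/sbin/pvesm set nas-synology-copies -disable 0
for i in $(seq 1 30); do
    # pvesm status prints one line per storage: name, type, status, ...
    if /usr/sbin/pvesm status 2>/dev/null | grep -q '^nas-synology-copies .*active'; then
        /usr/sbin/pct start 112
        exit 0
    fi
    sleep 10
done
echo "nas-synology-copies never became active; not starting CT 112" >&2
exit 1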
 