Backup hook script can't umount USB disk

Jan 23, 2020
I've been struggling with this for hours now.

I want to mount an external disk before the backup and unmount it afterwards. Mounting works as expected, but unmounting does not. The hook script reports that it was unmounted: INFO: umount: /mnt/seagate (/dev/sdc2) unmounted. But it still appears in /proc/mounts, and if I click on the dataset in PVE, the disk spins up and all information is displayed. When I run the umount manually, everything works. My shell script that does the same umount -> hd-idle sequence (copied from the hook script) also works, just not inside the hook script.

Hopefully someone knows what is going wrong here.

I have the following hook script for the backup job:

Bash:
#!/bin/bash

set -o errexit

case "${1}" in
  job-end)
    sleep 20s
    umount /mnt/seagate
    echo "Partition /dev/sdc2 unmounted."
    sleep 10s # required, otherwise hd-idle does not work
    hd-idle -t /dev/sdc
    echo "Disk /dev/sdc put into standby."
    ;;
  job-start)
    mount -t ntfs-3g /dev/sdc2 /mnt/seagate
    echo "Partition /dev/sdc2 mounted at /mnt/seagate."
    ;;
esac
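
For the record, this is how I can see that it is still mounted after job-end:

Bash:
# still lists the ntfs-3g mount even after the hook reported success
grep /mnt/seagate /proc/mounts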
 
Hi,

new member here.

I'm trying to achieve pretty much the same thing here. I'm using the supplied Perl script as a template. All the start-up routines (powering up, verifying USB IDs, mounting...) work fine.

I'm facing the exact same issue in the "job-end" phase. I call umount /mnt/USBBACKUP and get a return code of 0. After that I do a sanity check via mountpoint /mnt/USBBACKUP, which tells me that the directory is no longer a mountpoint.
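
In shell terms, what the script does at that point boils down to this (simplified):

Bash:
umount /mnt/USBBACKUP      # returns 0
mountpoint /mnt/USBBACKUP  # reports "is not a mountpoint"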

This works great when calling the script from a shell with the appropriate parameters.

But still: after the real backup job has finished, the mountpoint is still there, even though the debug statements in my script say that the unmounting and the checks delivered the expected results.

Did you find a solution for the issue on your end?

Regards,
Stefan
 
Is "/mnt/USBBACKUP" added to PVE as a directory storage? Because if you don't disable a storage, PVE might mount it again.
If I for example do a "zpool export MyPool" and don'T disable my ZFS storage first, within a second the ZFS pool will be auto-mounted again.
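
For example (sketch; using "MyPool" from the ZFS case above):

Bash:
# disable the storage first, so PVE stops re-activating it
pvesm set MyPool --disable 1
# now the export sticks
zpool export MyPool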
 
Yes it is, type "directory". Thank you for that hint, and for the quick reply!

Is it considered "ok" to enable/disable the target storage in the hook script?

Following your explanation, I would enable the storage as the last step in the job-start phase, after mounting the target in the OS itself. And in the job-end phase I would first disable the storage, then sync, unmount, and power off.
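
Roughly like this (untested sketch; "USBBACKUP" is my storage ID):

Bash:
case "${1}" in
  job-start)
    mount /dev/disk/by-uuid/XXXX /mnt/USBBACKUP  # XXXX: placeholder UUID
    pvesm set USBBACKUP --disable 0              # enable the storage last
    ;;
  job-end)
    pvesm set USBBACKUP --disable 1              # disable the storage first
    sync
    umount /mnt/USBBACKUP
    # ...power off the disk...
    ;;
esac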

Stefan
 
Yup. And don't forget to set the "is_mountpoint" option for your directory storage. Most people should set this but don't, because it can only be done via the CLI. See the "pvesm set" command: https://pve.proxmox.com/pve-docs/pvesm.1.html
Code:
pvesm set <storage> [OPTIONS]

    Update storage configuration.

    <storage>: <string>
        The storage identifier.

    --is_mountpoint <string> (default = no)
        Assume the given path is an externally managed mountpoint and consider the storage offline if it is not mounted. Using a boolean (yes/no) value serves as a shortcut to using the target path in this field.
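
For example, on the CLI (assuming the storage is called "USBBACKUP"):

Bash:
pvesm set USBBACKUP --is_mountpoint yes
# or, equivalently, point it at the path itself:
pvesm set USBBACKUP --is_mountpoint /mnt/USBBACKUP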
 
I guess I'm not quite there yet...

I inserted the appropriate /usr/sbin/pvesm calls (both "disable" and "is_mountpoint"). During the backup run I see the directory storage appear in the tree view, and after the run it disappears again (as expected).

The problem is: in a terminal I run

Bash:
watch -n 0.5 'mount | grep BACKUP'

and the mount stays there the whole time (during actual backup runs). If I call the hook script from a shell, everything works as expected.

Somehow systemd interferes with mounting (according to journalctl).

I'm calling mount directly, which may seem kind of old-school. systemd-mount seems to mount disks as well :) I'll have to investigate this a bit further.
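
One way I can watch what systemd does with the mount (sketch; the unit name can be derived with systemd-escape):

Bash:
# systemd names the transient mount unit after the escaped path
systemd-escape -p --suffix=mount /mnt/USBBACKUP  # prints: mnt-USBBACKUP.mount
journalctl -u mnt-USBBACKUP.mount -f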

Thanks for the input!

Stefan
 
You only need to run "pvesm set YourStorage --is_mountpoint yes" once; you don't need to include it in your hook script.
"is_mountpoint" should always be set if your directory storage points at a mountpoint (whether it is currently mounted or not).
 
Ok, I guess that is the same as putting it into /etc/pve/storage.cfg? Got that sorted.

And by replacing the old mount/umount calls with

Bash:
systemd-mount UUID=xxxxx /mnt/USBBACKUP
systemd-mount -u /mnt/USBBACKUP

it seems to work, according to all the logs I can get hold of.

The rationale behind all this coding is that I eventually want to use more than one USB disk (not at the same time), and they should all work with the same PVE storage "USBBACKUP", so I don't have to touch the backup jobs. So in my script I have an array of valid UUIDs, and as soon as one of them is detected it gets mounted to /mnt/USBBACKUP and the backup jobs are happy.
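
Simplified, the detection part looks roughly like this (the UUIDs are placeholders):

Bash:
VALID_UUIDS=("AAAA-AAAA" "BBBB-BBBB")  # placeholder UUIDs of the backup disks
for uuid in "${VALID_UUIDS[@]}"; do
    if [ -b "/dev/disk/by-uuid/${uuid}" ]; then
        systemd-mount "UUID=${uuid}" /mnt/USBBACKUP
        break
    fi
done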

Once again, thanks a lot for the support!

Stefan
 
Hi there,
If I find the time, I will also test the systemd-mount version. If it works, that would be nice.
Here is my current workaround:

Bash:
job-end)
    ...
    # defer the unmount: run disk.sh one minute from now
    at now + 1min -f /home/disk.sh
    ...

The disk.sh script is the same one I posted back then.
It starts a timer and unmounts the disk one minute later.
I am not sure whether the PVE task keeps some reference to the storage as long as the job is running, which would prevent it from unmounting.
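
If you want to check what is still holding the mountpoint while the job runs, something like this should show it (sketch):

Bash:
# list processes that still have files open below the mountpoint
fuser -vm /mnt/seagate
lsof +f -- /mnt/seagate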
 
