[SOLVED] Determine if backup is run interactively vs scheduled

keeka

Within a vzdump hook script, how can I differentiate scheduled backups from those run interactively via the web UI?
There doesn't appear to be anything in env I might use to do this.
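For reference, a minimal sketch of a hook script that just logs its phase argument and the full environment (any executable works as a hook script), which is how I checked that nothing there distinguishes the two:

Bash:
#!/bin/bash
# vzdump passes the phase (job-start, backup-end, job-end, ...) as the first argument
phase="$1"
echo "HOOK: phase=$phase"
# log every environment variable vzdump hands the script
env | sort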
 
There is an open feature request for this that you can CC yourself on: https://bugzilla.proxmox.com/show_bug.cgi?id=3730
Thanks. I look forward to it. So once implemented, the absence of a schedule will mean we can infer the backup was run from the web UI or command line.

This was my first experience with PVE hooks. I wanted to (un)mount remote storage on demand, but not unmount it if the script had been invoked by a non-scheduled backup.
 
You should be able to set the script just for the scheduled job with
Code:
pvesh set /cluster/backup/backup-f1bd3b15-737e --script /path/to/script
Of course the backup ID and path will be different in your case. You can use cat /etc/pve/jobs.cfg to find the ID.

You can still set a node-wide one for all jobs, which can do other things, but this job will then only use the specified script.
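For example, with the job ID that appears later in this thread (a sketch; your ID and script path will differ):

Code:
# job IDs are the tokens after "vzdump:" in /etc/pve/jobs.cfg
grep '^vzdump:' /etc/pve/jobs.cfg
# -> vzdump: backup-439d7383-7106
pvesh set /cluster/backup/backup-439d7383-7106 --script /usr/local/bin/vzdump-hook-script.pl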
 
Just what I need. I had no idea you could set a script on specific jobs. Many thanks!
 
I just realised that disabling storage does not umount the NFS volume. Now I am attempting something like this in my hookscript:

Perl:
    if (($phase eq 'job-end') || ($phase eq 'job-abort')) {
        my $mountpoint = system ("pvesh get storage/$storeid --noheader 1 --noborder 1 | grep ^path | awk '{print \$2}'");
        my $fstype = system ("pvesm status | grep ^$storeid | awk '{print \$2}'");
        system ("/usr/sbin/pvesm set $storeid --disable 1") == 0 ||
            die "disabling storage $storeid failed";
        ($fstype eq 'nfs' && system ("/usr/bin/umount $mountpoint") == 0) ||
            die "umounting $mountpoint ($storeid) failed";
    }

But rather than invoking shell commands, how do I leverage PVE's Perl storage modules to
1) get mountpoint and fstype by storeid
2) disable storage by storeid?

TIA for any pointers.
 
The thing is, the internals could change, and you would need to ensure you are using locking to avoid concurrent access, so I wouldn't recommend using those modules directly. The behavior of pvesm and pvesh is much less likely to change; we need to preserve backwards compatibility there. For the pvesh command, I'd suggest using --output-format json and then jq to parse it. You can also get the type from the result of the pvesh command, i.e.
Code:
pvesh get /storage/$storeid --output-format json | jq -r '.path, .type'
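In a shell-based hookscript, that could look like this sketch (assuming the STOREID environment variable, which the log excerpts further down show vzdump sets):

Code:
# capture mountpoint and filesystem type for later use; STOREID comes from vzdump
storeid="$STOREID"
mountpoint="$(pvesh get "/storage/$storeid" --output-format json | jq -r .path)"
fstype="$(pvesh get "/storage/$storeid" --output-format json | jq -r .type)"
echo "HOOK: storage: $fstype $mountpoint"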
 
It felt wrong invoking Perl tools via system from within a Perl script.
Incidentally, after posting, I found I could do this:
Code:
 my $cfg = PVE::Storage::config();
 my $fstype = $cfg->{'ids'}{$storeid}{'type'};
 my $mountpoint = $cfg->{'ids'}{$storeid}{'path'};
but haven't tested anything. However I'm going to do as you suggest. Many thanks for guidance and also making me aware of jq!
 
Yes, this should also be relatively stable; the basic structure hasn't changed in years. But if we're lucky, a few years from now, the storage backend might be written in Rust, so no guarantees ;)
 
It seems umount, called from the hook script, has no effect. That, or the PVE daemon immediately remounts it for some reason.
The system call to umount in my hookscript (job-end phase) succeeds insofar as it returns 0. However, checking mount and the directory listing after the job completes shows the NFS export is still mounted.

Running the same code block standalone (disable storage, then umount) works as expected, and I can confirm the code block is entered during the hook-script phase, since the print statement below is visible in the backup job log.

Perl:
        system ("/usr/sbin/pvesm set $storeid --disable 1") == 0 ||
            die "disabling storage $storeid failed";
        if ($fstype eq 'nfs') {
            system ("/usr/bin/umount $mountpoint") == 0 ||
                die "umounting $fstype $mountpoint ($storeid) failed";
            print "HOOK: $phase umounted $mountpoint\n";
        }
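One way to narrow this down: check the mount table as seen by the hook process itself, right after the umount. If the entry is gone there but mount on the host still shows it, the umount happened in a separate mount namespace (an assumption, not a confirmed cause); if it is still listed even there, umount did nothing despite returning 0. A bash sketch:

Bash:
umount "$mountpoint" || { echo "umount $mountpoint failed" >&2; exit 1; }
# compare the hook's own view of the mount table with the host's `mount` output
if grep -q " $mountpoint " /proc/self/mounts; then
    echo "HOOK: $phase: $mountpoint still mounted in this namespace"
else
    echo "HOOK: $phase: $mountpoint gone in this namespace"
fi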
 
That's strange. Disabling the storage should ensure that it is not re-activated/re-mounted by Proxmox VE services. I copied your code and tested it locally. For me, it works, even when the job is started via scheduler. It also correctly fails and logs the error if unmounting fails when the NFS is busy.

Does pvesm status indicate that the storage is disabled? Can you share your /var/log/syslog from around the time the backup ended?
 
pvesm status shows the store as disabled once the job completes, and I can successfully umount the NFS export afterwards (manually, using umount directly, or by running the relevant Perl snippet from the hookscript standalone).

Here are the relevant parts of the config and journal output, as well as the mounts before and after the job runs:

Bash:
# /etc/pve/storage.cfg
nfs: omv-backup
        disable
        export /proxmox
        path /mnt/pve/omv-backup
        server 192.168.0.5
        content backup
        options vers=4
        prune-backups keep-daily=1,keep-weekly=1

# /etc/pve/jobs.cfg
vzdump: backup-439d7383-7106
    schedule monthly
    compress zstd
    enabled 0
    mailnotification failure
    mode snapshot
    notes-template TEST {{vmid}} {{guestname}}
    script /usr/local/bin/vzdump-hook-script.pl
    storage omv-backup
    vmid 101
  
# Before running backup, NFS storage is not mounted:
root@pve1:~# ls /mnt/pve/omv-backup/
root@pve1:~# mount | grep nfs

# journal output for backup job:
Feb 27 09:19:37 pve1 pvedaemon[2782595]: <root@pam> starting task UPID:pve1:002FED14:028C8527:63FC75A9:vzdump:101:root@pam:
Feb 27 09:19:38 pve1 pvedaemon[3140884]: INFO: starting new backup job: vzdump 101 --compress zstd --mailnotification failure --notes-template 'TEST {{vmid}} {{guestname}}' --mode snapshot --node pve1 --script /usr/local/bin/vzdump-hook-script.pl --all 0 --storage omv-backup
Feb 27 09:19:39 pve1 pvedaemon[3140884]: INFO: Starting Backup of VM 101 (lxc)
Feb 27 09:19:40 pve1 kernel: EXT4-fs (dm-25): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 27 09:19:41 pve1 pvedaemon[3140884]: INFO: Finished Backup of VM 101 (00:00:02)
Feb 27 09:19:42 pve1 pvedaemon[3140884]: INFO: Backup job finished successfully
Feb 27 09:19:43 pve1 pvedaemon[2782595]: <root@pam> end task UPID:pve1:002FED14:028C8527:63FC75A9:vzdump:101:root@pam: OK


root@pve1:~# ls -l /mnt/pve/omv-backup/
total 32
drwxr-xr-x 2 root root 20480 Feb 27 09:19 dump
drwxr-xr-x 2 root root  4096 Feb  2  2020 images
drwxr-xr-x 2 root root  4096 Feb  2  2020 private
root@pve1:~# mount | grep nfs
192.168.0.5:/proxmox on /mnt/pve/omv-backup type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.3,local_lock=none,addr=192.168.0.5)

BTW my code in post #6 was wrong. I should have used backticks rather than system() to capture the output of a shell command. Still, even if I hardcode the necessary variables, it makes no difference.

Perl:
    if (($phase eq 'job-end') || ($phase eq 'job-abort')) {
        my $mountpoint = '/mnt/pve/omv-backup';
        my $fstype = 'nfs';
        print "HOOK: storage: $fstype $mountpoint\n";

        system ("/usr/sbin/pvesm set $storeid --disable 1") == 0 ||
            die "disabling storage $storeid failed";
        if ($fstype eq 'nfs') {
            system ("/usr/bin/umount $mountpoint") == 0 ||
                die "umounting $fstype $mountpoint ($storeid) failed";
            print "HOOK: $phase umounted $mountpoint\n";
        }
    }
 
Well, I don't think the directory ever gets unmounted. When I run the backup while another shell is sitting inside the mounted fs, no errors occur; ordinarily, attempting to umount it under those circumstances should fail. I'm going to keep plugging away, and may redo the hook script in bash rather than Perl. I bought 'the' Perl book over 20 years ago and never finished it!
 
But you do get the HOOK: $phase umounted $mountpoint output and no error in the backup log?
 
Yes, I do. No errors.
Code:
INFO: HOOK: job-end
INFO: HOOK-ENV: dumpdir=/mnt/pve/omv-backup/dump;storeid=omv-backup;
INFO: HOOK: storage: nfs /mnt/pve/omv-backup
INFO: HOOK: job-end umounted /mnt/pve/omv-backup
INFO: Backup job finished successfully
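One way to test whether the hook runs in a separate mount namespace (just a hypothesis at this point) is to compare namespace IDs:

Code:
# compare PID 1's mount namespace with the pvedaemon worker's
readlink /proc/1/ns/mnt
readlink /proc/$(pgrep -o pvedaemon)/ns/mnt
# differing mnt:[...] inodes would mean an umount inside the hook cannot affect the host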
 
