There is an open feature request for it you can CC yourself to: https://bugzilla.proxmox.com/show_bug.cgi?id=3730

You should be able to set the script just for the scheduled job with

pvesh set /cluster/backup/backup-f1bd3b15-737e --script /path/to/script

Use cat /etc/pve/jobs.cfg to find the ID.

Thanks. I look forward to it. So once implemented, the absence of a schedule will mean we can infer the backup was run from the web UI or the command line.

Just what I need. I had no idea you could set a script on specific jobs. Many thanks! This was my first experience with PVE hooks. I wanted to (un)mount remote storage on demand, but not unmount if the script had been invoked via a non-scheduled backup. Here is the relevant part of my hookscript:
if (($phase eq 'job-end') || ($phase eq 'job-abort')) {
    # capture command output with backticks; system() only returns the exit status
    my $mountpoint = `pvesh get storage/$storeid --noheader 1 --noborder 1 | grep ^path | awk '{print \$2}'`;
    chomp $mountpoint;
    my $fstype = `pvesm status | grep ^$storeid | awk '{print \$2}'`;
    chomp $fstype;
    system("/usr/sbin/pvesm set $storeid --disable 1") == 0
        || die "disabling storage $storeid failed";
    ($fstype eq 'nfs' && system("/usr/bin/umount $mountpoint") == 0)
        || die "umounting $mountpoint ($storeid) failed";
}
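For reference, the scaffolding around that follows the example hookscript shipped with pve-manager, and for the mount side I was planning something like this untested sketch (it assumes the job-init phase and that STOREID is set there, as it is at job-end):

#!/usr/bin/perl
use strict;
use warnings;

# vzdump calls the hook with the phase as its first argument; storage
# details arrive in environment variables such as STOREID and DUMPDIR.
my $phase = shift @ARGV // die "got no phase";
my $storeid = $ENV{STOREID} // '';

# untested: re-enable the storage at the start of the job so vzdump can
# activate and mount it again (mirrors the --disable 1 at job-end)
if ($phase eq 'job-init') {
    system("/usr/sbin/pvesm set $storeid --disable 0") == 0
        or die "enabling storage $storeid failed";
}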
Can I rely on the output format of pvesm and pvesh staying the same?

pvesh is much less likely to change; we need to preserve backwards-compatibility there. For the pvesh command, I'd suggest using --output-format json and then jq to parse it. You can also get the type in the result of the pvesh command, i.e.

pvesh get /storage/$storeid --output-format json | jq -r '.path, .type'
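If you'd rather stay in Perl than shell out to jq, an untested sketch along the same lines, using the JSON::PP module from Perl core:

use strict;
use warnings;
use JSON::PP qw(decode_json);

my $storeid = $ENV{STOREID};   # set by vzdump for the hook
# backticks capture stdout; system() would only return the exit status
my $json = `pvesh get /storage/$storeid --output-format json`;
die "pvesh get /storage/$storeid failed" if $? != 0;

my $info = decode_json($json);
my $mountpoint = $info->{path};
my $fstype     = $info->{type};
print "storage $storeid: type=$fstype path=$mountpoint\n";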
I felt wrong invoking system from within a perl script. Incidentally, after posting I found I could do this:

use PVE::Storage;

my $cfg = PVE::Storage::config();
my $fstype = $cfg->{'ids'}{$storeid}{'type'};
my $mountpoint = $cfg->{'ids'}{$storeid}{'path'};

but I haven't tested anything. However, I'm going to do as you suggest. Many thanks for the guidance, and also for making me aware of jq!

Yes, this should also be relatively stable; the basic structure hasn't been changed in years. But if we're lucky, a few years from now the storage backend might be written in Rust, so no guarantees.
Unfortunately, mount and a directory listing after the job completes show the NFS export is still mounted. The relevant part of the hookscript is:

system("/usr/sbin/pvesm set $storeid --disable 1") == 0
    || die "disabling storage $storeid failed";
if ($fstype eq 'nfs') {
    system("/usr/bin/umount $mountpoint") == 0
        || die "umounting $fstype $mountpoint ($storeid) failed";
    print "HOOK: $phase umounted $mountpoint\n";
}

Does pvesm status indicate that the storage is disabled? Can you share your /var/log/syslog from around the time the backup ended?

pvesm status shows the store as disabled once the job completes, and I can successfully umount the NFS export by hand (using umount directly, or by running the relevant perl snippet from the hookscript). Here are my configs and the output from a test run:

# /etc/pve/storage.cfg
nfs: omv-backup
    disable
    export /proxmox
    path /mnt/pve/omv-backup
    server 192.168.0.5
    content backup
    options vers=4
    prune-backups keep-daily=1,keep-weekly=1
# /etc/pve/jobs.cfg
vzdump: backup-439d7383-7106
    schedule monthly
    compress zstd
    enabled 0
    mailnotification failure
    mode snapshot
    notes-template TEST {{vmid}} {{guestname}}
    script /usr/local/bin/vzdump-hook-script.pl
    storage omv-backup
    vmid 101
# Before running backup, NFS storage is not mounted:
root@pve1:~# ls /mnt/pve/omv-backup/
root@pve1:~# mount | grep nfs
# journal output for backup job:
Feb 27 09:19:37 pve1 pvedaemon[2782595]: <root@pam> starting task UPID:pve1:002FED14:028C8527:63FC75A9:vzdump:101:root@pam:
Feb 27 09:19:38 pve1 pvedaemon[3140884]: INFO: starting new backup job: vzdump 101 --compress zstd --mailnotification failure --notes-template 'TEST {{vmid}} {{guestname}}' --mode snapshot --node pve1 --script /usr/local/bin/vzdump-hook-script.pl --all 0 --storage omv-backup
Feb 27 09:19:39 pve1 pvedaemon[3140884]: INFO: Starting Backup of VM 101 (lxc)
Feb 27 09:19:40 pve1 kernel: EXT4-fs (dm-25): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 27 09:19:41 pve1 pvedaemon[3140884]: INFO: Finished Backup of VM 101 (00:00:02)
Feb 27 09:19:42 pve1 pvedaemon[3140884]: INFO: Backup job finished successfully
Feb 27 09:19:43 pve1 pvedaemon[2782595]: <root@pam> end task UPID:pve1:002FED14:028C8527:63FC75A9:vzdump:101:root@pam: OK
# After the job, the export is still mounted:
root@pve1:~# ls -l /mnt/pve/omv-backup/
total 32
drwxr-xr-x 2 root root 20480 Feb 27 09:19 dump
drwxr-xr-x 2 root root 4096 Feb 2 2020 images
drwxr-xr-x 2 root root 4096 Feb 2 2020 private
root@pve1:~# mount | grep nfs
192.168.0.5:/proxmox on /mnt/pve/omv-backup type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.3,local_lock=none,addr=192.168.0.5)
For testing, I hard-coded the storage details in the hookscript:

if (($phase eq 'job-end') || ($phase eq 'job-abort')) {
    my $mountpoint = '/mnt/pve/omv-backup';
    my $fstype = 'nfs';
    print "HOOK: storage: $fstype $mountpoint\n";
    system("/usr/sbin/pvesm set $storeid --disable 1") == 0
        || die "disabling storage $storeid failed";
    if ($fstype eq 'nfs') {
        system("/usr/bin/umount $mountpoint") == 0
            || die "umounting $fstype $mountpoint ($storeid) failed";
        print "HOOK: $phase umounted $mountpoint\n";
    }
}
Well, I don't think the directory ever gets unmounted. When I run the backup whilst running another shell in the mounted fs, no errors occur. Ordinarily, attempting to umount it under those circumstances should fail. I'm going to keep plugging away and may redo the hook script in bash rather than perl. I bought 'the' Perl book over 20 years ago and never finished it!

But you do get the HOOK: $phase umounted $mountpoint output and no error in the backup log?

Yes, I do. No errors:
INFO: HOOK: job-end
INFO: HOOK-ENV: dumpdir=/mnt/pve/omv-backup/dump;storeid=omv-backup;
INFO: HOOK: storage: nfs /mnt/pve/omv-backup
INFO: HOOK: job-end umounted /mnt/pve/omv-backup
INFO: Backup job finished successfully
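One way to narrow it down: have the hook check /proc/mounts itself right after the umount call, so you can see whether the export is still in the mount table the script sees. An untested sketch:

# untested: report whether a path still appears as a mount target
# in /proc/mounts (second whitespace-separated field of each line)
sub is_mounted {
    my ($path) = @_;
    open(my $fh, '<', '/proc/mounts') or die "cannot open /proc/mounts: $!";
    while (my $line = <$fh>) {
        my (undef, $target) = split(' ', $line);
        return 1 if $target eq $path;
    }
    return 0;
}

print "HOOK: $mountpoint still mounted after umount!\n" if is_mounted($mountpoint);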
Just checking in to see if the situation has changed since this thread began. Are we in a position yet where Proxmox provides the required information to a hookscript to allow it to distinguish between a scheduled backup and a manually started one?

Hi,
not yet. The first step is to have that information available in the vzdump invocation, for which a patch was proposed: https://lists.proxmox.com/pipermail/pve-devel/2024-April/063537.html
Thanks Fiona. How long would you expect it to take for this to be released? Are we talking this year, would you say? Is the best workaround for now just to use pvesh set as you suggest above?

I mean, you can set your hook script for all backup jobs individually, but not as a node-wide default. Then you also know that the script is only called for backup jobs and not for manual invocations (to be clear, it will also be called for Run now of the job, but not for Backup now of individual guests).

I didn't realise this. How would I set my script on a single backup job and not as a node-wide default?