I have a Dell R730 with 2 x enterprise 800GB SSDs and 6 x 1.2TB 12k SAS drives.
I was planning on setting up the 2 x 800GB SSDs in a ZFS mirror, 2 x 1.2TB SAS drives in a ZFS mirror, and the remaining 4 x 1.2TB SAS drives in a ZFS RAID10 array. Which mirror should I install Proxmox on - the SSDs or the 1.2TB...
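A rough sketch of how that layout could be built from the shell, just to frame the question (pool names and the /dev/sd* device names are placeholders, and the SSD mirror would normally be created by the Proxmox installer itself; in practice /dev/disk/by-id paths are preferable):

# Assumption: sdb/sdc are two of the SAS drives, sdd-sdg the other four
zpool create -o ashift=12 sas-mirror mirror /dev/sdb /dev/sdc
# ZFS "RAID10" is just a stripe of two mirror vdevs
zpool create -o ashift=12 sas-raid10 mirror /dev/sdd /dev/sde mirror /dev/sdf /dev/sdg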
Timers aren't working in the changedetection.io LXC. The container installs fine and everything seems to work, but the scheduled checks never fire after the first one upon saving.
I got the shell script running by defining $PATH within the shell script:
#!/bin/bash
PVE_NODE="$(hostname)"
LOG_FILE="/mnt/ext_usb_01/$PVE_NODE.log"
export PATH=$PATH
export HOME="/root"
if [ "$1" == "job-start" ]; then
printf "$(date) PVE hook script start ($PVE_NODE) -...
Again - if I run the Perl script manually from the command line and pass in job-end, it runs fine with no PATH issues.
root@pve01:~# /etc/vzdump-hook.pl job-end
GUEST HOOK: job-end
Job is ending
Using config: /root/.autorestic.yml
Using lock: /root/.autorestic.lock.yml
Backing...
If I remove the hard-coded path passed to the --restic-bin option it fails... It seems /usr/local/bin/autorestic is found without hard-coding the path, but when autorestic calls 'restic' it fails on PATH as well.
tail: /mnt/ext_usb_01/pve01.log: file truncated
Fri May 10 01:38:43 UTC 2024 PVE hook script...
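For comparison, a fully hard-coded invocation along these lines is what works (the -c flag and the backup -a subcommand are assumptions about how autorestic is being run here; --restic-bin, the /usr/local/bin paths, and the config file come from the output above):

# Assumption: call both binaries by absolute path so the hook's limited PATH never matters
/usr/local/bin/autorestic -c /root/.autorestic.yml --restic-bin /usr/local/bin/restic backup -a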
I have re-written the script in Perl and the same thing happens:
Script:
root@pve01:~# cat /etc/vzdump-hook.pl
#!/usr/bin/perl
use strict;
use warnings;
print "GUEST HOOK: " . join(' ', @ARGV). "\n";
my $phase = shift;
my $mode = shift;
my $vmid = shift;
if ($phase eq 'job-start') {...
I have a hook script (bash shell script) that simply calls autorestic to perform an offsite backup of the locally dumped vzdump files.
If I run my hook script from the command line (and specify job-end) it runs fine and the autorestic job runs:
Thu May 9 11:45:47 PM UTC 2024 PVE hook script...
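For context, a vzdump hook like this is normally registered either globally in /etc/vzdump.conf or per backup job with vzdump's --script option; the path below is hypothetical and just matches the naming used here:

# /etc/vzdump.conf - register the hook for every backup job on this node
script: /etc/vzdump-hook.sh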
Good idea, but I've already rebuilt PVE and it's up and running again, sort of. This new version is causing a lot of weird issues for me. The previous install ran for over a year with no issues. I just had to go and upgrade it...
I mounted the OMV btrfs mirror with SystemRescue and copied all the Proxmox backups to an external USB drive. Looks like I am safe.
I did have all the backups in restic offsite; I'm just glad I didn't have to restore from there.
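A sketch of that kind of rescue copy from a SystemRescue shell, with all device names and directories as placeholders:

# Assumption: /dev/sda1 is one member of the btrfs mirror and /dev/sdz1 is the USB drive
btrfs device scan
mkdir -p /mnt/omv /mnt/usb
mount -o ro /dev/sda1 /mnt/omv
mount /dev/sdz1 /mnt/usb
rsync -avh --progress /mnt/omv/proxmox-backups/ /mnt/usb/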
I think my PVE issue is due to PCI passthrough, and I'd like to start up PVE to edit the conf file to stop the autostart so I can reconfigure. I (stupidly) had the GRUB timeout set to 0 and can't enter any sort of rescue mode to edit it. I have tried booting with a rescue USB but can't access any config...
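For what it's worth, the guest configs under /etc/pve only exist while pmxcfs is running, which is why a plain rescue boot can't see them; the GRUB timeout itself can be raised by chrooting into the installed root from the rescue USB. A sketch, assuming an LVM or ZFS root has already been activated/imported and mounted at /mnt:

# Assumption: the PVE root filesystem is already mounted at /mnt
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt /bin/bash
sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=5/' /etc/default/grub
update-grub
exit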
Turns out the size shown in PBS is accurate and reflects exactly what the filesystem reports. I removed and recreated the datastore once more, but this time as an ext4 file system. Now it only shows 270MB used.
The reason I was looking at 21GB usage after creating the xfs file system was...
This all started for me because I am looking at syncing PBS backups over a WAN VPN link and was concerned about the size of the backups. Obviously that is not an issue now because I have proven to myself that the encryption makes no difference. But I'm not sure why this disk size reporting is showing 3x...
I think that what I am seeing in the UI is not accurate...
PVE:
PBS:
Showing 14.01GB used
On PBS server:
root@pbs:~# du -hd1 /mnt/datastore/backup/
2.1M /mnt/datastore/backup/.chunks
2.1M /mnt/datastore/backup/
root@pbs:~#
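A quick way to cross-check what the PBS UI reports against what the chunk store actually holds, since (as noted above) the UI figure tracks the filesystem's own used/available numbers; the datastore path is taken from the du output above:

# Filesystem-level usage, which is what the datastore summary reflects
df -h /mnt/datastore/backup
# Actual on-disk size of the chunk store
du -sh /mnt/datastore/backup/.chunks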