Bash script that keeps track of zfs snapshots

OUR.INFRA.ROCKS

Oh boy, oh boy, oh boy. When I tried the daily/weekly/monthly script posted somewhere here, my backup server just died. It was using quadruple the data it did before. Dedup only made performance much worse and stayed at a 1.00 ratio, making it useless. I expected dedup to be the solution for alleviating some of the pain in that script. Well, no.

I sat for a few hours each day thinking about a simple solution, frustrated that I couldn't come up with one in ten minutes since the subject seems easy enough. Anyway, I had a micro-epiphany and wrote a bash script, which is something I've only done a few times. So please give suggestions, but don't be a dick.

I run pve-zsync like this on my local server:
  • */15 * * * * pve-zsync sync --source 100 --dest XXX.XXX.XXX.XXX:rpool/data --name default --maxsnap 4 --method ssh
You schedule the script on both the local and the backup server. This is what happens.

At every interval you choose, all ZFS snapshots are parsed per filesystem for their count and creation time.
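For example, an hourly crontab entry on each server could look like this (the interval and the script path are just placeholders, adjust them to your own setup):
  • 0 * * * * /root/zfs-snapshot-rotate.sh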

So for the daily snapshot:
if count == 0 or creation time < now() - 86400, make a daily snapshot
if there now exist two daily snapshots, delete the oldest
if a recent one exists, do nothing

So for the weekly snapshot:
if count == 0 or creation time < now() - 604800, make a weekly snapshot
if there now exist two weekly snapshots, delete the oldest
if a recent one exists, do nothing (a sketch of this logic as a single function follows below)
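Expressed as one parameterized function, that logic would look roughly like this sketch (the function name, the example dataset and the thresholds are just illustrations, not the actual script further down):

Code:
# Sketch: create or rotate a <dataset>@<label> snapshot when it is missing
# or older than <max_age> seconds.
rotate_snapshot() {
    local dataset="$1" label="$2" max_age="$3"
    local snap="${dataset}@${label}"

    if ! zfs list -H -t snapshot -o name "$snap" >/dev/null 2>&1; then
        # Count is 0: take the first snapshot with this label.
        zfs snapshot "$snap"
        return
    fi

    local created now age
    created=$(date -d "$(zfs get -H -o value creation "$snap")" +%s)
    now=$(date +%s)
    age=$(( now - created ))

    if [ "$age" -gt "$max_age" ]; then
        # Two snapshots exist briefly; the oldest is destroyed right after.
        zfs snapshot "${snap}2"
        zfs destroy "$snap"
        zfs rename "${snap}2" "$snap"
    fi
}

# Example calls: daily (86400 s) and weekly (604800 s) for one dataset.
rotate_snapshot rpool/data/subvol-100-disk-1 daily  86400
rotate_snapshot rpool/data/subvol-100-disk-1 weekly 604800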

I've been running this logic for a week and it has saved me lots of space while still giving good restore capabilities.
Curious what others think about this approach, and about pitfalls I might not have seen.

I see some people running pve-zsync with 100 maxsnaps. I don't understand why; combine that with daily, weekly and monthly snapshots and it becomes so expensive you need to find yourself a rich girlfriend.

The script has four parts:
daily snapshotting of everything under rpool/data
daily snapshotting of everything under ssd
weekly snapshotting of everything under rpool/data
weekly snapshotting of everything under ssd
So please edit the pool and dataset paths as needed before running.

I have attached the output of my zfs volumes and snapshots.

By doing it this way you only have one incremental sync running per filesystem to your backup; the rest is done locally. You end up with 1 weekly, 1 daily and 4x 15-minute snapshots, using a lot fewer resources.

Code:
mailbody="DAILY CHECK<br><br>"
for f in /rpool/data/sub*; do
    list=$(zfs list -r $f -t snap | grep daily)
    mailbody="$mailbody found $f<br>"
        if [ "$list" == "" ]; then
            mailbody="$mailbody no daily created yet for this dataset<br>"
            zfs snapshot ${f#?}@daily
            mailbody="$mailbody daily snapshot created<br>"
        else
            mailbody="$mailbody daily snapshot exists and is recent<br>"
            snap=$(zfs get -H -o value creation `zfs list -H -o name -t snapshot | grep ${f#?}@daily`)
            date1=$(date -d "$snap" +%s)
            date2=$(date +%s)
            COUNT=`expr $date2 - $date1`
        fi
        if [ -z "$1" ] && [ "$COUNT" -gt "25200" ]; then
            mailbody="$mailbody but is too old: $date1 versus $date2 = $COUNT<br>"
            mailbody="$mailbody snapshot too old<br>"
            zfs snapshot ${f#?}@daily2
            mailbody="$mailbody daily snapshot created<br>"
            zfs destroy ${f#?}@daily
            mailbody="$mailbody deleted old snapshot<br>"
            zfs rename ${f#?}@daily2 ${f#?}@daily
            mailbody="$mailbody renamed new to old<br>"
        fi
        mailbody="$mailbody<br>"
done
for f in /ssd/sub*; do
    COUNT=""                                   # reset age from the previous dataset
    list=$(zfs list -r "$f" -t snap | grep daily)
    mailbody="$mailbody found $f<br>"
    if [ -z "$list" ]; then
        mailbody="$mailbody no daily created yet for this dataset<br>"
        zfs snapshot "${f#?}@daily"
        mailbody="$mailbody daily snapshot created<br>"
    else
        mailbody="$mailbody daily snapshot exists, checking age<br>"
        snap=$(zfs get -H -o value creation "$(zfs list -H -o name -t snapshot | grep "^${f#?}@daily$")")
        date1=$(date -d "$snap" +%s)
        date2=$(date +%s)
        COUNT=$((date2 - date1))
    fi
    if [ -n "$COUNT" ] && [ "$COUNT" -gt 25200 ]; then
        mailbody="$mailbody but is too old: $date1 versus $date2 = $COUNT<br>"
        mailbody="$mailbody snapshot too old<br>"
        zfs snapshot "${f#?}@daily2"
        mailbody="$mailbody daily snapshot created<br>"
        zfs destroy "${f#?}@daily"
        mailbody="$mailbody deleted old snapshot<br>"
        zfs rename "${f#?}@daily2" "${f#?}@daily"
        mailbody="$mailbody renamed new to old<br>"
    fi
    mailbody="$mailbody<br>"
done
mailbody="$mailbody WEEKLY<br>"
    for f in /rpool/data/sub*; do
            list=$(zfs list -r $f -t snap | grep weekly)
            mailbody="$mailbody found $f<br>"
        if [ "$list" == "" ]; then
            mailbody="$mailbody no weekly created yet for this dataset<br>"
            zfs snapshot ${f#?}@weekly
            mailbody="$mailbody weekly snapshot created<br>"
        else
            mailbody="$mailbody weekly snapshot exists and is recent<br>"
            snap=$(zfs get -H -o value creation `zfs list -H -o name -t snapshot | grep ${f#?}@weekly`)
            date1=$(date -d "$snap" +%s)
            date2=$(date +%s)
            COUNT=`expr $date2 - $date1`
    fi
         if [ -z "$COUNT" ] || [ "$COUNT" -gt "604800" ]; then
            mailbody="$mailbody but is too old $date1 versus $date2 = $COUNT<br>"
            mailbody="$mailbody snapshot too old<br>"
            zfs snapshot ${f#?}@weekly2
            mailbody="$mailbody weekly snapshot created<br>"
            zfs destroy ${f#?}@weekly
            mailbody="$mailbody deleted old snapshot<br>"
            zfs rename ${f#?}@weekly2 ${f#?}@weekly
            mailbody="$mailbody renamed new to old<br>"
    fi
        mailbody="$mailbody <br>"
done
for f in /ssd/sub*; do
    COUNT=""                                   # reset age from the previous dataset
    list=$(zfs list -r "$f" -t snap | grep weekly)
    mailbody="$mailbody found $f<br>"
    if [ -z "$list" ]; then
        mailbody="$mailbody no weekly created yet for this dataset<br>"
        zfs snapshot "${f#?}@weekly"
        mailbody="$mailbody weekly snapshot created<br>"
    else
        mailbody="$mailbody weekly snapshot exists, checking age<br>"
        snap=$(zfs get -H -o value creation "$(zfs list -H -o name -t snapshot | grep "^${f#?}@weekly$")")
        date1=$(date -d "$snap" +%s)
        date2=$(date +%s)
        COUNT=$((date2 - date1))
    fi
    if [ -n "$COUNT" ] && [ "$COUNT" -gt 604800 ]; then
        mailbody="$mailbody but is too old: $date1 versus $date2 = $COUNT<br>"
        mailbody="$mailbody snapshot too old<br>"
        zfs snapshot "${f#?}@weekly2"
        mailbody="$mailbody weekly snapshot created<br>"
        zfs destroy "${f#?}@weekly"
        mailbody="$mailbody deleted old snapshot<br>"
        zfs rename "${f#?}@weekly2" "${f#?}@weekly"
        mailbody="$mailbody renamed new to old<br>"
    fi
    mailbody="$mailbody<br>"
done
echo "$mailbody" | mail -a 'Content-Type: text/html' -s "Backup Snapshot Script Ran" "root@localhost"
 

Todo
Check the age of the filesystem. We do not want weekly backups on recently created machines.
Before daily and weekly snapshotting on the backup machine, check if the filesystem has been synced at least once (a rough sketch of both checks follows below).
I will rewrite the script later to make it easier to maintain. It's something I had to rush.
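Here is a rough sketch of those two checks, assuming pve-zsync names its replication snapshots with a rep_ prefix (check what yours are actually called) and using an example dataset name:

Code:
# Hypothetical pre-checks, not yet part of the script above.
ds="rpool/data/subvol-100-disk-1"   # example dataset

# 1) Skip weekly snapshots on filesystems younger than a week.
fs_created=$(date -d "$(zfs get -H -o value creation "$ds")" +%s)
if [ $(( $(date +%s) - fs_created )) -lt 604800 ]; then
    echo "$ds is less than a week old, skipping weekly snapshot"
fi

# 2) On the backup machine, only snapshot datasets that have been synced
#    at least once (assumption: pve-zsync snapshots start with "rep_").
if ! zfs list -H -o name -t snapshot -r "$ds" | grep -q "^${ds}@rep_"; then
    echo "$ds has never been synced, skipping"
fi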
 
