Can I create two pve-zsync tasks?

ozgurerdogan

I want to take snapshots of ZFS weekly and daily. So can I create two cron tasks like:
/usr/sbin/pve-zsync sync --source 1.2.3.4:100 --dest D3 --name 101DAILY --maxsnap 10 --method ssh

and

/usr/sbin/pve-zsync sync --source 1.2.3.4:100 --dest D3 --name 101WEEK --maxsnap 10 --method ssh
 
Hi,
yes you can. But as far as I can see, you have to manually edit the intervals afterwards. The relevant file is '/etc/cron.d/pve-zsync'; see the cron documentation for the time format it uses.
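For example, the file uses the standard cron.d format (minute, hour, day of month, month, day of week, user, command), so a daily and a weekly schedule could look roughly like this (illustrative only, the exact command lines depend on how the jobs were created):
Code:
# /etc/cron.d/pve-zsync -- e.g. daily at 02:00, weekly on Sunday at 03:00
0 2 * * * root /usr/sbin/pve-zsync sync --source 1.2.3.4:100 --dest D3 --name 101DAILY --maxsnap 10 --method ssh
0 3 * * 0 root /usr/sbin/pve-zsync sync --source 1.2.3.4:100 --dest D3 --name 101WEEK --maxsnap 10 --method ssh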
 
I don't think it works with crons, because the daily job will erase the weekly one. So I meant to create two separate sync jobs. But then I will also (I think) need to create a separate dataset for each job?
 
They won't erase each other. Each snapshot contains the name of the pve-zsync job. You can't create multiple jobs with the same source, destination and name.
 
Hi,

I think you could create 2 different tasks with pve-zsync for the same source, but using 2 different datasets as destinations on the same destination host.
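For example (a sketch using the dataset and job names that come up later in this thread; 'create' also registers the cron entry for you):
Code:
# same source VM, two different destination datasets on the backup host
pve-zsync create --source 1.2.3.4:101 --dest D3 --name 101DAILY --maxsnap 10 --method ssh
pve-zsync create --source 1.2.3.4:101 --dest D4 --name 101WEEKLY --maxsnap 10 --method ssh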

Good luck!
 
I tested it and there is indeed a problem. When checking if older snapshots exist, pve-zsync ignores those which do not match the job name. So after
Code:
# pve-zsync create --source 192.168.20.130:105 --dest myzpool/backup --name test1 --maxsnap 2 --method ssh --source-user root --dest-user root

# pve-zsync create --source 192.168.20.130:105 --dest myzpool/backup --name test2 --maxsnap 2 --method ssh --source-user root --dest-user root
the second job will fail because it tries to do a full sync:
Code:
ERROR Message:
COMMAND:
        ssh -o 'BatchMode=yes' root@192.168.20.130 -- zfs send -- myzpool/vm-105-disk-0@rep_test2_2019-09-19_09:50:03 | zfs recv -F -- myzpool/backup/vm-105-disk-0
GET ERROR:
        cannot receive new filesystem stream: destination has snapshots (eg. myzpool/backup/vm-105-disk-0@rep_test1_2019-09-19_09:49:22)
must destroy them to overwrite it
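To see which snapshots from the other job are already present on the destination (and blocking the full receive), you can list them there, e.g.:
Code:
# on the destination host
zfs list -t snapshot -o name myzpool/backup/vm-105-disk-0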



The workaround @guletz mentioned is one way to do it and the cleanest way considering what pve-zsync is currently able to handle.

Another thing that might work would be manually adding a snapshot @rep_101WEEK_<TIME> on the receive side. Haven't tested it though.

WARNING!!: there is a third way, making two cron jobs with the same '--name' but different schedules, but it is very likely to produce some other conflict in pve-zsync, so use it at your own risk.
 
I was able to create two tasks like so:

Code:
root@backup:~# /usr/sbin/pve-zsync sync --source 1.2.3.4:101 --dest D3 --name 101DAILY --maxsnap 10 --method ssh
root@backup:~# /usr/sbin/pve-zsync sync --source 1.2.3.4:101 --dest D4 --name 101WEEKLY --maxsnap 10 --method ssh

Then I was able to see:
Code:
root@backup:~# zfs list -t snapshot -o name | grep vm-101
D3/vm-101-disk-1@rep_101DAILY_2019-09-19_10:28:24
D3/vm-101-disk-1@rep_101DAILY_2019-09-19_10:51:13
D3/vm-101-disk-1@rep_101DAILY_2019-09-19_11:23:27
D4/vm-101-disk-1@rep_101WEEKLY_2019-09-19_10:40:21
D4/vm-101-disk-1@rep_101WEEKLY_2019-09-19_10:51:20
D4/vm-101-disk-1@rep_101WEEKLY_2019-09-19_11:23:40

Thank you.
 
Hi,

As a side note, if you use any kind of backup, it is highly recommended that the backup MUST be initiated by the backup system and NOT by the source (for security reasons alone - think that if your source is hacked, the intruder can delete ALL your cron tasks and the remote backups)!
 
I am taking the backups from a different datacenter / country, and I initiate them from the remote backup server only. But it is a good point to keep in mind.

... maybe a time-based firewall (open the firewall for new SSH connections 5 minutes before your task starts, plus a stateful firewall) could also be nice to have for this task ;)
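An untested sketch of that idea, assuming the backup server sits at the made-up address 5.6.7.8, the node is called 's3' and the sync starts at 02:00: keep the SSH rule commented out in the node's host.fw and toggle it from /etc/cron.d/ around the window (the pve-firewall service should pick the change up on its own; otherwise restart it as in the script further down):
Code:
# /etc/cron.d/backup-window -- host.fw carries the line '#IN SSH(ACCEPT) -source 5.6.7.8' while closed
55 1 * * * root sed -i 's/^#IN SSH(ACCEPT) -source 5.6.7.8/IN SSH(ACCEPT) -source 5.6.7.8/' /etc/pve/nodes/s3/host.fw
30 2 * * * root sed -i 's/^IN SSH(ACCEPT) -source 5.6.7.8/#IN SSH(ACCEPT) -source 5.6.7.8/' /etc/pve/nodes/s3/host.fw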
 
Better than that, I only allow traffic between the nodes and from my PC. My PC has a dynamic IP, so I use the following bash script to update the IP in the firewall rules whenever it changes:

updateip.sh

Code:
#!/bin/sh
# crontab -e : */5 * * * * sh /updateip.sh
# /etc/init.d/cron restart

HOSTNAME=myoffice.ddnsfree.com
NODENAME=s3
CURRENTIP=/currentip.log

# resolve the current address of the dynamic DNS name
Current_IP=$(host $HOSTNAME | cut -f4 -d' ')

if [ ! -s "$CURRENTIP" ] ; then
  # first run: write the firewall rule and remember the IP
  sed -i '/# Dynu/c\IN ACCEPT -source '"$Current_IP"' # Dynu ' /etc/pve/nodes/$NODENAME/host.fw
  echo "$Current_IP" > "$CURRENTIP"
else
  Old_IP=$(cat "$CURRENTIP")

  if [ "$Current_IP" = "$Old_IP" ] ; then
    echo "IP address has not changed"
  else
    # rewrite the line tagged '# Dynu' in the node firewall config with the new IP
    sed -i '/# Dynu/c\IN ACCEPT -source '"$Current_IP"' # Dynu ' /etc/pve/nodes/$NODENAME/host.fw
    echo "$Current_IP" > "$CURRENTIP"
    echo "iptables have been updated"
    pve-firewall stop && pve-firewall start
  fi
fi
 