Real pve-zsync examples

Hello,

How is pve-zsync used in the real world? Can somebody show a real usage scheme between two hosts, including the crontab?
 
This is copied and pasted from some of my pve-zsync notes:

*on the target system, create the ZFS datasets
Code:
# create the target datasets; the parent dataset tank/pve-zsync must already exist (or use zfs create -p)
zfs create tank/pve-zsync/Daily
zfs create tank/pve-zsync/Weekly
zfs create tank/pve-zsync/Monthly
zfs create tank/pve-zsync/15Minutes

*set up backups of a remote VM
Code:
# imap 2016-03-27
pve-zsync create --source 10.2.2.42:105 --name imap-15min   --maxsnap 96 --dest tank/pve-zsync/15Minutes
pve-zsync create --source 10.2.2.42:105 --name imap-daily   --maxsnap 7  --dest tank/pve-zsync/Daily   --skip
pve-zsync create --source 10.2.2.42:105 --name imap-weekly  --maxsnap 4  --dest tank/pve-zsync/Weekly  --skip
pve-zsync create --source 10.2.2.42:105 --name imap-monthly --maxsnap 12 --dest tank/pve-zsync/Monthly --skip

*backups of a local VM
Code:
pve-zsync create --source 7596 --name bc-sys2-daily   --maxsnap 7  --dest tank/pve-zsync/Daily
pve-zsync create --source 7596 --name bc-sys2-weekly  --maxsnap 4  --dest tank/pve-zsync/Weekly
pve-zsync create --source 7596 --name bc-sys2-monthly --maxsnap 12 --dest tank/pve-zsync/Monthly

*example crontab. Note these jobs are for a different VM than the examples above, and you'll need to adjust the run times to suit your setup.
Code:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |  .- user and command to be executed
# |  |  |  |  |  |
*/15 7-20 * * * root pve-zsync sync --source 10.2.2.42:4444 --dest tank/pve-zsync/15Minutes --name pro4-15min  --maxsnap 96 --method ssh
15  23 * * * root pve-zsync sync --source 10.2.2.42:4444 --dest tank/pve-zsync/Daily  --name pro4-daily  --maxsnap 7  --method ssh
15  23 * * 6 root pve-zsync sync --source 10.2.2.42:4444 --dest tank/pve-zsync/Weekly  --name pro4-weekly  --maxsnap 4  --method ssh
15  23 1 * * root pve-zsync sync --source 10.2.2.42:4444 --dest tank/pve-zsync/Monthly  --name pro4-monthly --maxsnap 12 --method ssh
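Once jobs exist, it is worth checking what pve-zsync itself knows about. A quick sketch (the output and the exact cron path may differ between versions, so treat this as a reminder rather than gospel):
Code:
# list the jobs that were set up with "pve-zsync create"
pve-zsync list

# show the state of the last run for each job (e.g. ok, syncing, error)
pve-zsync status

# jobs created with "pve-zsync create" get their own cron file,
# so their schedule can be adjusted there instead of the root crontab
cat /etc/cron.d/pve-zsync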
 
Also, for controlling when the pve-zsync jobs run, so as to not overload the systems, I found it best to run everything from one crontab on one host. We had issues with the cluster when multiple jobs were running at the same time from different hosts.

pve-zsync seems to have some kind of queue and runs one job at a time.
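If you still want everything in one crontab but need a hard guarantee that jobs never overlap, wrapping each entry in flock works too. This is just my own sketch, not something pve-zsync requires:
Code:
# flock waits up to an hour for the lock, so only one pve-zsync job runs at a time
*/15 7-20 * * * root flock -w 3600 /run/lock/pve-zsync.lock pve-zsync sync --source 10.2.2.42:4444 --dest tank/pve-zsync/15Minutes --name pro4-15min --maxsnap 96 --method ssh
15   23   * * * root flock -w 3600 /run/lock/pve-zsync.lock pve-zsync sync --source 10.2.2.42:4444 --dest tank/pve-zsync/Daily --name pro4-daily --maxsnap 7 --method ssh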
 
pve-zsync is still under development, so there will probably be an official page when it gets close to being in the PVE web GUI. In the meantime, anyone can add to the wiki.

I used pve-zsync for about a year. After switching our KVM storage from local to napp-it/iSCSI, we are now using znapzend on OmniOS. I've only used it for a couple of weeks, and it is working great so far.

"Will pve-zsync create snapshot if it will not reach other host/fail to replicate?" I am not sure. I did have an issue when i restarted the target in the middle of a send. a snapshot would need to get deleted . That happened a few times.

I have a question of my own: what happens if you reboot the sending or receiving host in the middle of a zfs-auto-snapshot run? Does it auto-recover?

"Should I keep only one snapshotter?" I think that is a good idea. So if I screw up the configuration on one, the other would hopefully still be working.

PS:
Check out znapzend. http://www.znapzend.org/
https://github.com/oetiker/znapzend/blob/master/README.md

I like these:
"ZnapZend stores all its configuration in custom ZFS properties. So all configuration is hooked right into the ZFS fileset structure. When you modify the fileset, rename it, remove it, ZnapZend will happily chug along. ZnapZend comes with a special tool for setting and checking the backup configuration."
I have not tested it, but it seems like ZnapZend will just keep working if the ZFS fileset is migrated to another system running ZnapZend. TBD.

" You can configure any number of remote destinations for a fileset."

I think that is something pve-zsync could use, since: "ZnapZend uses the built-in snapshot functionality of ZFS for fully consistent backups. For each fileset, a pre- and post-snapshot command can be configured"
That way, a command that ships the Proxmox VM configuration file along with each snapshot could be handled.
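As an untested sketch of what I mean (the helper script and the retention plans are made up; only the general znapzendzetup syntax is from its README):
Code:
# configure znapzend for one fileset with a pre-snap hook that could copy the
# Proxmox VM config next to the data before each snapshot is taken
# (copy-vm-config.sh is a hypothetical helper script)
znapzendzetup create \
  --pre-snap-command="/usr/local/bin/copy-vm-config.sh 105" \
  --post-snap-command="/bin/true" \
  SRC '7d=>1h,30d=>1d' tank/vmdata \
  DST:offsite '30d=>1d,1y=>1w' backuphost:tank/backup/vmdata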
 

Hello,

I'm also interested in your solution of using napp-it with znapzend to take snapshots and replicate them. Is that your backup plan?
I'm thinking of building a ZFS storage box with OmniOS and napp-it, presenting it as an iSCSI target to my PVE nodes, and putting LVM on top of it. That way we get the flexibility of shared storage for live migration, plus snapshot replication to another ZFS box as an off-site backup plan. Do you think that is the way to go? Is there an impact on performance?

Thanks
 
We had issues running KVM on iSCSI: the performance would slowly degrade.

I do not know the cause of the slowdown. It could have been some less-than-ideal configuration choices on my part.

When I say performance was slow: apt upgrades would take at least 10 times longer on iSCSI compared to local ZFS storage. Note that after a reboot the upgrades ran fast again.

The author of napp-it suggested we take measurements right after an OmniOS reboot and then again when I noticed KVM slowdowns. The measurements were done from the CLI and with AJA on Windows, and they looked good even while KVM was sluggish.

The shared storage was nice to have, but it was still a single point of failure.

So we are back to using local ZFS storage.
 

If I understand correctly, you now use local ZFS storage on your Proxmox nodes? So you are using pve-zsync for backups? What is your recovery plan if a Proxmox node goes down?

"the shared storage was nice to have, however it was still a single point of failure."

How do you avoid having a single point of failure?

Best regards,
 

Yes, we are using pve-zsync.

Most of our virtual machines are appliances; the weekly vzdump backups work fine for those.

For the data VMs we run a few types of backups:
- pve-zsync every 15 minutes
- rsync for the data directories; the data is sent to an off-site host every 30 minutes
- obnam

So recovery is manual: usually just restore a backup and rsync the data back on top, or restore MySQL.
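For reference, a manual restore from the pve-zsync copy looks roughly like this. This is a sketch from memory: the names and paths are examples, and I believe pve-zsync keeps a copy of the guest config under /var/lib/pve-zsync on the backup host, but verify that on your own setup:
Code:
# on the backup host: find the latest replicated snapshot and send it back
# to the rebuilt PVE node (names below are examples only)
zfs list -t snapshot -r tank/pve-zsync/15Minutes
zfs send tank/pve-zsync/15Minutes/vm-105-disk-1@rep_imap-15min_2016-03-27_12:15:00 \
  | ssh root@pve-node zfs receive -F rpool/data/vm-105-disk-1

# copy the saved guest config back into place on the PVE node, then start the VM
# (check /var/lib/pve-zsync for the exact file name of the saved config)
scp /var/lib/pve-zsync/imap-15min/105.conf \
  root@pve-node:/etc/pve/qemu-server/105.conf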

For shared storage I might test Sheepdog next. Last I checked (a few years ago) it seemed easy to set up and maintain.
 
Thanks for your advice,

So if I understand correctly, you use pve-zsync for all your data-VM backups, and just to be sure you also keep a second recovery option, an rsync of the data, in case something goes wrong with the snapshot replication?

Best regards,
 

Yes that is correct.

PS: I need to practice more on pve-zsync restores.
 
Old but interesting topic...
One thing I don't get with your solution: if I create 4 datasets (15min / daily / weekly / monthly), the first backup in each dataset will be a full copy of the VM, and then I'll have a rotation of snapshots on top of it. Am I correct?
 