pve zfs syncing removes all other snapshots on destination

mailinglists

Is there any reason why syncing would have to remove all snapshots on the destination?
Should this be considered a bug?
This is done by pvesr (GUI synchronization).
Would pve-zsync behave differently when used in pull mode?

See:
Code:
root@p27:~# zfs list -t all | grep 100
rpool/data/vm-100-disk-1                                 2.41G  5.23T  2.41G  -
rpool/data/vm-100-disk-1@__replicate_100-0_1528282804__     0B      -  2.41G  -
root@p27:~# zfs snapshot rpool/data/vm-100-disk-1@test
root@p27:~# zfs list -t all | grep 100
rpool/data/vm-100-disk-1                                 2.41G  5.23T  2.41G  -
rpool/data/vm-100-disk-1@__replicate_100-0_1528282804__     0B      -  2.41G  -
rpool/data/vm-100-disk-1@test                               0B      -  2.41G  -
root@p27:~# #sync run from source node
root@p27:~# zfs list -t all | grep 100
rpool/data/vm-100-disk-1                                 2.41G  5.23T  2.41G  -
rpool/data/vm-100-disk-1@__replicate_100-0_1528283161__     0B      -  2.41G  -
 
Is there any reason why syncing would have to remove all snapshots on the destination?
Yes, we must use the -F (force rollback) flag on the destination side.
The -F flag removes all snapshots created after the last synced one on the destination side.
This is necessary because otherwise ZFS will not sync and you will get an error.
It is not possible to create snapshots on the receiver side.
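For illustration, each replication run boils down to an incremental send received with -F; a rough sketch using the snapshot names from the listing above (the exact invocation pvesr uses, and the SSH transport between the nodes, are omitted here):
Code:
# Incremental send from the last common replication snapshot to the new one,
# received with -F (force rollback) on the destination:
zfs send -i rpool/data/vm-100-disk-1@__replicate_100-0_1528282804__ \
         rpool/data/vm-100-disk-1@__replicate_100-0_1528283161__ \
  | zfs recv -F rpool/data/vm-100-disk-1
# Without -F the incremental receive fails because the destination has
# snapshots (e.g. @test) newer than the common one; with -F the destination
# is rolled back first, which destroys those extra snapshots.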
 
Thank you for your answer.
I will test pve-zsync with the pull method to see if it also removes the snapshots on the receiving end.

Is there any reason not to sync to the default rpool/data dataset? p27 is a member of a cluster.
(pve-zsync sync --source 10.31.1.26:100 --dest rpool/data --verbose --maxsnap 5 --name test --limit 30000)
 
Yes, it is very confusing and has no benefit.
You should use a separate dataset for logical separation, like rpool/sync.
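For example, a minimal setup along those lines (rpool/sync is just the suggested name; the pve-zsync options are the same ones already used in this thread):
Code:
# Create a dedicated dataset on the backup node and point the job at it:
zfs create rpool/sync
pve-zsync sync --source 10.31.1.26:100 --dest rpool/sync --verbose --maxsnap 5 --name test --limit 30000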
 
I see that pve-zsync also removes my snapshots.
Is this also necessary? It seems like it can keep its own snapshots, so in theory it could also leave mine alone.
Code:
root@p27:~# zfs snapshot rpool/data/vm-100-disk-1@testeeeeeee
root@p27:~# zfs list -t all | grep 100
rpool/data/vm-100-disk-1                               2.41G  5.23T  2.41G  -
rpool/data/vm-100-disk-1@rep_test_2018-06-06_15:26:11     0B      -  2.41G  -
rpool/data/vm-100-disk-1@rep_test_2018-06-06_15:28:18     0B      -  2.41G  -
rpool/data/vm-100-disk-1@rep_test_2018-06-06_15:29:12     0B      -  2.41G  -
rpool/data/vm-100-disk-1@testeeeeeee                      0B      -  2.41G  -
root@p27:~# pve-zsync sync --source 10.31.1.26:100 --dest rpool/data --verbose --maxsnap 5 --name test --limit 30000
send from @rep_test_2018-06-06_15:29:12 to rpool/data/vm-100-disk-1@rep_test_2018-06-06_15:30:39 estimated size is 624B
total estimated size is 624B
TIME        SENT   SNAPSHOT
root@p27:~# zfs list -t all | grep 100
rpool/data/vm-100-disk-1                               2.41G  5.23T  2.41G  -
rpool/data/vm-100-disk-1@rep_test_2018-06-06_15:26:11     0B      -  2.41G  -
rpool/data/vm-100-disk-1@rep_test_2018-06-06_15:28:18     0B      -  2.41G  -
rpool/data/vm-100-disk-1@rep_test_2018-06-06_15:29:12     0B      -  2.41G  -
rpool/data/vm-100-disk-1@rep_test_2018-06-06_15:30:39     0B      -  2.41G  -
 
Yes, it is very confusing and has no benefit.
You should use a separate dataset for logical separation, like rpool/sync.

Thanks, will do.
I think I can create a dataset for each backup interval, like weekly, monthly, ..., then set up pve-zsync jobs for them, and the backup is complete without any other scripts. :-)
 
Sadly, one cannot have multiple pve-zsync jobs from the same source to different destinations with different time frames. :-(
So this is not a full solution for backups after all. I guess I will create daily backups for up to XY days and then create the weekly or monthly copies from those directly on the backup server, manually or via a script. That could probably work.
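A minimal sketch of that idea, assuming the pve-zsync job writes into rpool/sync and that a separate rpool/archive dataset already exists for the long-term copies (all names here are placeholders):
Code:
# Pick the newest snapshot pve-zsync created on the backup server ...
LATEST=$(zfs list -H -t snapshot -d 1 -o name -s creation rpool/sync/vm-100-disk-1 | tail -n 1)
# ... and copy it into a dataset that no sync job will ever roll back.
# (A full send; simple, but each archived copy uses its own space.)
zfs send "$LATEST" | zfs recv "rpool/archive/vm-100-disk-1-$(date +%Y-%m)"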
 
