Hi there
We have been using your pve-zsync for ZFS snapshot backups, pulling from remote hosts via SSH. This worked wonderfully until your latest commit of pve-zsync 2.0-4 on the master branch, namely this change:
pve-zsync: Flip Source and Dest in functions to so jobs can share Dest
https://git.proxmox.com/?p=pve-zsync.git;a=commitdiff;h=5d3ff0f6e4989a8cd3e22c7c72fa751b33c76291
I know, we are probably not quite using pve-zsync as intended, as we have always had this small patch in place:
Diff:
--- pve-zsync.DIST-OLD 2020-03-23 18:37:20.000000000 +0100
+++ pve-zsync 2020-11-27 15:27:06.810249041 +0100
@@ -980,7 +980,7 @@
push @$cmd, \'|';
push @$cmd, 'ssh', '-o', 'BatchMode=yes', "$param->{dest_user}\@$dest->{ip}", '--' if $dest->{ip};
- push @$cmd, 'zfs', 'recv', '-F', '--';
+ push @$cmd, 'zfs', 'recv', '--'; # patched by Onlime
push @$cmd, "$target";
eval {
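For completeness, re-applying such a local patch after a package upgrade is a one-liner; the install path and patch file name below are just placeholders for illustration:
Bash:
# assuming the default install path of the script and a locally kept patch file
$ patch /usr/sbin/pve-zsync < /root/pve-zsync-recv-no-force.patch
# quick sanity check that our marker comment is in place
$ grep -n 'patched by Onlime' /usr/sbin/pve-zsync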
man zfs says:
zfs receive [-Fhnsuv] [-d|-e] [-o origin=snapshot] [-o property=value] [-x property] filesystem
-F Force a rollback of the file system to the most recent snapshot before performing the receive operation. If receiving an incremental replication stream (for example, one generated by zfs send -R [-i|-I]), destroy snapshots and file systems that do not exist on the sending side.
We had to turn off the -F switch because we rotate our backups (frequent/daily/weekly/monthly) with zfs-auto-snapshot, which creates intermediary snapshots with the @zfs-auto-snap_* prefix. Those snapshots would get destroyed by zfs recv -F on each pve-zsync run; see the sketch below.
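To make that concrete, here is a rough sketch on throwaway file-backed pools of what we see happening with the stock -F receive (all pool, dataset and snapshot names are made up for illustration):
Bash:
# throwaway pools, purely for illustration
$ truncate -s 256M /tmp/src.img /tmp/dst.img
$ zpool create srcpool /tmp/src.img
$ zpool create dstpool /tmp/dst.img
$ zfs create srcpool/data
$ zfs snapshot srcpool/data@rep_backup_1
$ zfs send srcpool/data@rep_backup_1 | zfs recv dstpool/data

# a destination-only snapshot, like the ones zfs-auto-snapshot creates
$ zfs snapshot dstpool/data@zfs-auto-snap_hourly-test

# next incremental sync, received with -F as in unpatched pve-zsync
$ zfs snapshot srcpool/data@rep_backup_2
$ zfs send -i @rep_backup_1 srcpool/data@rep_backup_2 | zfs recv -F dstpool/data

# the local auto-snapshot on the destination is gone
$ zfs list -rt snapshot -Ho name dstpool/data
dstpool/data@rep_backup_1
dstpool/data@rep_backup_2

# cleanup
$ zpool destroy srcpool; zpool destroy dstpool; rm /tmp/src.img /tmp/dst.img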
So how come you have decided to suddenly make such a breaking change to pve-zsync in the latest PVE 6.3? Why does the script suddenly need to search for the latest snapshot on the destination and verify that the same snapshot exists on the source? We get the following error; to reproduce:
Bash:
# first run works fine
$ ./pve-zsync sync --source x.x.x.x:rpool/zfsdisks/subvol-198-disk-1 --dest dpool/zfsdisks --maxsnap 6 --name backup
# we create a snapshot on destination dataset (which in my eyes should always be allowed...)
$ zfs snapshot dpool/zfsdisks/subvol-198-disk-1@testsnap
# second run of pve-zsync then fails (worked fine in previous pve-zsync 2.0-3)
$ pve-zsync sync --source x.x.x.x:rpool/zfsdisks/subvol-198-disk-1 --dest dpool/zfsdisks --maxsnap 6 --name backup
WARN: COMMAND:
ssh root@x.x.x.x -- zfs list -rt snapshot -Ho name rpool/zfsdisks/subvol-198-disk-1@testsnap
GET ERROR:
cannot open 'rpool/zfsdisks/subvol-198-disk-1@testsnap': dataset does not exist
Job --source x.x.x.x:rpool/zfsdisks/subvol-198-disk-1 --name backup got an ERROR!!!
ERROR Message:
COMMAND:
ssh -o 'BatchMode=yes' root@x.x.x.x -- zfs send -- rpool/zfsdisks/subvol-198-disk-1@rep_backup_2020-11-27_15:18:21 | zfs recv -- dpool/zfsdisks/subvol-198-disk-1
GET ERROR:
cannot receive new filesystem stream: destination 'dpool/zfsdisks/subvol-198-disk-1' exists
must specify -F to overwrite it
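From the log it looks like that failed lookup (@testsnap does not exist on the source) makes pve-zsync fall back to a full send instead of an incremental one, and a full stream can never be received into an existing dataset without -F, so with our patched receive the job is bound to fail at that point. Reduced to plain zfs, reusing the made-up names from the sketch above:
Bash:
# a full, non-incremental stream into an existing dataset is always refused without -F
$ zfs send srcpool/data@rep_backup_2 | zfs recv dstpool/data
cannot receive new filesystem stream: destination 'dstpool/data' exists
must specify -F to overwrite it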
Best regards,
Philip