pve-zsync vs. others like znapzend or sanoid

I'm using (and contributing to) znapzend since it can handle multiple plans* (Example). pve-zsync sure is a step in the right direction. As ZFS-related things are not really PVE-centric, it's probably a good idea to contribute to this tool instead of creating your own. Perl experience is already there in the PVE team ;) If not, then get some inspiration from this tool ;)

* I do something like copying things to a local spinning disk and also to a storage server using ssh.
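
A minimal sketch of what such a multi-destination setup can look like with znapzendzetup (dataset names, retention plans and the host name are made up for illustration):
Code:
znapzendzetup create --recursive --tsformat='%Y-%m-%d-%H%M%S' \
  SRC '7d=>1h,30d=>1d' rpool/data \
  DST:disk '30d=>1d,1y=>1w' backup/data \
  DST:remote '90d=>1d,1y=>1w' root@storageserver:backup/data
One destination is a local pool (e.g. on a spinning disk), the other goes over SSH, and each destination keeps its own retention plan.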
 
You can use zfs directly to send snapshots:
Send a snapshot to another local pool or a mounted remote filesystem
Code:
zfs send -v rpool/home@bla1  > /backup/home@bla1.snap
Do it with compression
Code:
zfs send -v rpool/home@bla1 | pigz > /backup/home@bla1.gz
Do it with compression and encryption (openssl)
Code:
zfs send -v rpool/home@bla1 | pigz | openssl enc -aes-256-cbc -a -salt > /backup/home@bla1.gz.ssl
Do it over SSH:
Code:
zfs send -v rpool/home@bla1 | ssh otherhost "/usr/sbin/zfs receive otherpool/home@bla1"
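
For completeness (not covered above), restoring from such files is just the reverse pipeline; the target dataset name below is only an example:
Code:
# plain file
zfs receive rpool/home_restored < /backup/home@bla1.snap
# compressed
pigz -dc /backup/home@bla1.gz | zfs receive rpool/home_restored
# compressed and encrypted
openssl enc -d -aes-256-cbc -a -in /backup/home@bla1.gz.ssl | pigz -dc | zfs receive rpool/home_restored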


You can also use pools and datasets directly; I do it this way. For example: you create a snapshot and copy only the changes to the target:
Code:
zfs snapshot rpool/home@Montag
zfs send -v rpool/home@Montag | zfs receive  backup/home
Then you make some changes during the day and copy only the changes from Monday to Tuesday:
Code:
zfs snapshot rpool/home@Dienstag
zfs send -v -i rpool/home@Montag rpool/home@Dienstag | zfs receive backup/home
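
One gotcha worth noting: if the target dataset has been mounted and modified since the last receive, the incremental receive will fail unless you force a rollback on the receiving side with -F:
Code:
# -F rolls the target back to its most recent snapshot before applying the incremental stream
zfs send -v -i rpool/home@Montag rpool/home@Dienstag | zfs receive -F backup/home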
I use this for half-yearly backups, sometimes also monthly. But yes, it is also possible for daily backups.
 
Yeah, most of these tools use these features. But I don't like doing things manually or creating error-prone scripts when there are tools which can handle this (like automatic cleanup).
 
Thanks for all the feedback, I'm still reading up and will start playing around with znapzend in a few days so @RobFantini any notes would be appreciated.

What was new to me was that I can send snapshots to any mounted remote FS:
You can use zfs directly to send snapshots:
Send a snapshot to another local pool or a mounted remote filesystem
 

Which operating system will you run the program on? I have notes for OmniOS and Linux.
 
@RobFantini: it's a Proxmox 4.2 installation using ZFS as root, so Linux it would be.

So I have been reading and seeing the advice to properly plan the dataset layout so one can take atomic snapshots of separate data, and I was wondering if this is feasible:


I'd like to have snapshots of
- the main Proxmox root, just in case some installation/update/tweak goes wrong
- the KVMs and LXCs
! the ISO images, container templates and backups do not need to be snapshotted

So looking at how the storages look by default:
[screenshot: zfsstorages.png – default storage layout]
I was wondering how to separate the "local" storage so it would not be included in a recursive snapshot of "/".
Do I just add a separate dataset like this? https://pve.proxmox.com/wiki/Storage:_ZFS#Adding_ZFS_root_file-system_as_storage_with_Plugin
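
If I read the linked wiki correctly, the idea is roughly this (storage and dataset names below are just examples):
Code:
# dedicated dataset for guest disks, separate from the root filesystem dataset
zfs create rpool/vmdata
# register it with PVE as a ZFS storage for VM images and container volumes
pvesm add zfspool vmdata -pool rpool/vmdata -content images,rootdir
Keep in mind that a recursive snapshot (zfs snapshot -r) of a parent still descends into child datasets, so the separation only helps if the backup plan targets the specific datasets you care about.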
 
morph027 - that link is very useful. Here are probably all the steps for a PVE system:

Code:
sudo apt-get install build-essential checkinstall mbuffer git
cd /tmp/
git clone https://github.com/oetiker/znapzend
# git checkout 0.xx.yy
cd znapzend
packaging/checkinstall/checkinstall.sh

install
Code:
dpkg -i znapzend_0.15.7-1_amd64.deb

set up service
Code:
cp init/znapzend.service /etc/systemd/system/
systemctl enable znapzend.service
systemctl start znapzend.service
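
A quick way to verify that the daemon is running and picks up any configured plans:
Code:
systemctl status znapzend.service
znapzendzetup list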
 
morph027 - after doing the above steps the service was not running.

from syslog:
Code:
Jun  4 07:22:53 dell1 systemd[36051]: Failed at step EXEC spawning /usr/local/bin/znapzend: No such file or directory
Jun  4 07:22:53 dell1 systemd[1]: znapzend.service: main process exited, code=exited, status=203/EXEC
Jun  4 07:22:53 dell1 systemd[1]: Unit znapzend.service entered failed state.
Jun  4 07:22:53 dell1 systemd[1]: znapzend.service start request repeated too quickly, refusing to start.
Jun  4 07:22:53 dell1 systemd[1]: Failed to start ZnapZend - ZFS Backup System.

/etc/systemd/system/znapzend.service has the wrong path:
Code:
ExecStart=/usr/local/bin/znapzend
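
One way to fix it, assuming the .deb installed the binary somewhere else (the example path below is a placeholder; check the real one first):
Code:
# find where the package actually installed the binary
dpkg -L znapzend | grep 'bin/znapzend$'
# point the unit at that path (placeholder shown), then reload systemd
sed -i 's|^ExecStart=.*|ExecStart=/opt/znapzend/bin/znapzend|' /etc/systemd/system/znapzend.service
systemctl daemon-reload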

Fixing the path worked:
Code:
systemctl enable znapzend.service
systemctl start znapzend.service

# /var/log/syslog:
Jun  4 07:37:43 dell1 znapzend[834]: znapzend (PID=834) starting up ...
Jun  4 07:37:43 dell1 znapzend[834]: refreshing backup plans...
Jun  4 07:37:44 dell1 znapzend[834]: found a valid backup plan for tank/lxc/subvol-100-disk-1...
Jun  4 07:37:44 dell1 znapzend[834]: znapzend (PID=834) initialized -- resuming normal operations.
 
Hi, just thought I'd contribute a little to the tutorial.

1. You need unzip, so add that to the prerequisites :p
2. I had some issues compiling, so: if you're running Proxmox as root, you need to install sudo and add root to the sudoers.

Now to my question:

- What happens with the consistency of snapshots when I move a VM from one PVE node to another? And then back again?
- I'm using znapzend on my "VM store" as SRC, so it looks something like this:

Code:
root@pveC2750:~# zfs list -t snapshot
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
vol2/VM@2016-08-17-170000                    0      -   140K  -
vol2/VM@2016-08-17-180000                    0      -   140K  -
vol2/VM@2016-08-17-190000                    0      -   140K  -
vol2/VM@2016-08-17-200000                    0      -   140K  -
vol2/VM@2016-08-17-210000                    0      -   140K  -
vol2/VM@2016-08-17-220000                    0      -   140K  -
vol2/VM@2016-08-17-230000                    0      -   140K  -
vol2/VM@2016-08-18-000000                    0      -   140K  -
vol2/VM@2016-08-18-010000                    0      -   140K  -
vol2/VM@2016-08-18-020000                    0      -   140K  -
vol2/VM@2016-08-18-030000                    0      -   140K  -
vol2/VM@2016-08-18-040000                    0      -   140K  -
vol2/VM@2016-08-18-050000                    0      -   140K  -
vol2/VM@2016-08-18-060000                    0      -   140K  -
vol2/VM/vm-100-disk-1@2016-08-17-170000  54.2M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-17-180000  17.3M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-17-190000  17.3M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-17-200000  16.8M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-17-210000  16.5M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-17-220000  16.3M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-17-230000  16.2M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-18-000000  16.1M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-18-010000  14.9M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-18-020000  14.4M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-18-030000  18.4M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-18-040000  17.5M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-18-050000  18.7M      -  7.75G  -
vol2/VM/vm-100-disk-1@2016-08-18-060000  16.4M      -  7.75G  -

But znapzendztatz only lists the below. Is that intended? Is there any way to get a number for how much data my snapshots are taking?

Code:
root@pveC2750:~# znapzendztatz
USED    LAST SNAPSHOT       DATASET
  93K   2016-08-18-070000   vol2/VM
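
Not znapzend-specific, but the per-dataset snapshot space can be read from the standard ZFS property usedbysnapshots, e.g.:
Code:
zfs list -r -o name,usedbysnapshots vol2/VM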
 
Hi! Can you post the output of

Code:
journalctl -lu zfs-znapzend

and

Code:
zfs list -r -t snapshot # on target and source, feel free to strip confidential data!
 
journalctl -lu zfs-znapzend is empty (just did a delete > create of rpool and restarted znapzend).

source: http://pastebin.com/cYvFTDJW
dst: http://pastebin.com/BPNxjiQc

plan:
Code:
root@pve-backup:~# znapzendzetup list
*** backup plan: rpool ***
dst_a           = pve:bds/zBackups/pve-backup/rpool
dst_a_plan      = 1week=>1day,1month=>1week
enabled         = on
mbuffer         = /usr/bin/mbuffer
mbuffer_size    = 1G
post_znap_cmd   = off
pre_znap_cmd    = off
recursive       = on
src             = rpool
src_plan        = 2days=>1hour
tsformat        = %d-%m-%Y-%H%M%S
zend_delay      = 0

Code:
root@pve-backup:~# znapzendztatz
USED    LAST SNAPSHOT       DATASET
    0   15-12-2016-090000   rpool
14.2K   17-11-2016-000000   pve:bds/zBackups/pve-backup/rpool
 
Ok. As the target pool is quite empty, it should be sufficient to destroy all snapshots there (not the dataset itself). If you want to preserve the data, you can rename the dataset and create a new one with the old name.

Code:
for snap in $(zfs list -H -o name -r -t snapshot bds/zBackups/pve-backup); do zfs destroy $snap; done
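
If you want to see first what that would remove, zfs destroy has a dry-run mode:
Code:
# -n = dry run, -v = print what would be destroyed
for snap in $(zfs list -H -o name -r -t snapshot bds/zBackups/pve-backup); do zfs destroy -nv $snap; done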

Then the next sync should start with a fresh copy of all snapshots.
 
