pve-zsync question.

ozgurerdogan

When I move a VM to a new node, pve-zsync starts a new complete sync instead of only sending the incremental snapshots to the backup server (not a Proxmox Backup Server, just a normal Proxmox node), and this generates a lot of traffic. Is it possible to avoid that?
 
Hi,
how exactly did you move the VM? Is the name of the pve-zsync job the same on the new node? Please also post the output of pveversion -v.
 
OK, I installed a new node and moved the KVM guest with this command:
pve-zsync sync --source 1.2.3.4:124 --dest D2 --verbose

After the move, on the backup server (not a PBS), I have a cron job with this content:
/usr/sbin/pve-zsync sync --source 1.2.3.4:124 --dest D2D3 --name 110 --maxsnap 20 --method ssh;
I then updated the --source IP to the new node's IP.


Backup node:
Bash:
root@backup:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-3-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-8
pve-kernel-5.13: 7.1-6
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.13.19-3-pve: 5.13.19-6
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.11.22-3-pve: 5.11.22-7
pve-kernel-5.11.22-2-pve: 5.11.22-4
pve-kernel-5.11.22-1-pve: 5.11.22-2
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-2
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-1
proxmox-backup-client: 2.1.3-1
proxmox-backup-file-restore: 2.1.3-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-5
pve-cluster: 7.1-3
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-4
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
pve-zsync: 2.2.1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1

New node:
Bash:
root@s7:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-3-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-8
pve-kernel-5.13: 7.1-6
pve-kernel-5.13.19-3-pve: 5.13.19-7
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-2
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-1
proxmox-backup-client: 2.1.3-1
proxmox-backup-file-restore: 2.1.3-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-5
pve-cluster: 7.1-3
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-4
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
pve-zsync: 2.2.1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1

So when I run the command, it seems like it copies the whole KVM disk again instead of only the snapshots.
 
OK, I installed a new node and moved the KVM guest with this command:
pve-zsync sync --source 1.2.3.4:124 --dest D2 --verbose

After the move, on the backup server (not a PBS), I have a cron job with this content:
/usr/sbin/pve-zsync sync --source 1.2.3.4:124 --dest D2D3 --name 110 --maxsnap 20 --method ssh;
I then updated the --source IP to the new node's IP.
The problem is that the first command does not copy the existing snapshots, so there won't be a common snapshot between the backup server and the new node. You'd need zfs send -R ... | zfs recv ... for that. It's currently not possible with pve-zsync; feel free to open a feature request for it on the bug tracker.
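For reference, a minimal sketch of what such a move with snapshots could look like using plain ZFS tools. The dataset path, snapshot name and host below are placeholders, not values taken from this thread:
Bash:
# take a snapshot marking the state to transfer (name is only an example)
zfs snapshot -r rpool/data/vm-124-disk-0@move

# -R sends the dataset together with all of its existing snapshots;
# -F on the receiving side rolls back/overwrites the target if it already exists
zfs send -R rpool/data/vm-124-disk-0@move | ssh root@NEW_NODE zfs recv -F rpool/data/vm-124-disk-0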
 
OK, how about ignoring the existing snapshots and letting it create a new snapshot and copy that over? It won't take long anyway.
Can I delete all existing snapshots with zfs list -t snapshot -o name | grep vm-124 | xargs -n 1 zfs destroy -vr
along with the files in /var/lib/pve-zsync/*?
 
OK, how about ignoring the existing snapshots and letting it create a new snapshot and copy that over? It won't take long anyway.
I don't think that's possible in ZFS. If the dataset already exists, you can only do incremental syncs, and for that you need a common snapshot. The easiest might be to delete (or rename first, and delete later) the copy on the backup server and make a full sync.
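A rough sketch of that rename-first approach on the backup server; it assumes the replicated disk ended up at D2D3/vm-124-disk-0, so adjust the paths to what your pool actually contains:
Bash:
# keep the old copy until the fresh full sync has been verified
zfs rename D2D3/vm-124-disk-0 D2D3/vm-124-disk-0-old

# run the job again; with no target dataset present it starts with a full sync
/usr/sbin/pve-zsync sync --source NEW_NODE_IP:124 --dest D2D3 --name 110 --maxsnap 20 --method ssh

# only after checking the new copy, remove the old one
zfs destroy -r D2D3/vm-124-disk-0-old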
Can I delete all existing snapshots with zfs list -t snapshot -o name | grep vm-124 | xargs -n 1 zfs destroy -vr
along with the files in /var/lib/pve-zsync/*?
 
I am a bit confused. You mean I HAVE TO have a common (at least one) snapshot after I move a VM to the new node to STOP the next pve-zsync run from doing a full sync?

Also, what about the files in the /var/lib/pve-zsync/ directory? Are they checked in any way for snapshot state before the command runs?
 
I am a bit confused. You mean I HAVE TO have a common (at least one) snapshot after I move a VM to the new node to STOP the next pve-zsync run from doing a full sync?
Yes, you either
  • move the disk together with its existing snapshots (using zfs send -R); then you can continue syncing incrementally (see the sketch below this list),
  • or start over with a full sync, because without a common snapshot no incremental stream can be generated.
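Once a common snapshot exists, the incremental sync boils down to something like the following; the snapshot and dataset names here are illustrative only, not the exact rep_... names pve-zsync generates:
Bash:
# both sides already have @common; only the delta up to @new crosses the wire
zfs send -i @common rpool/data/vm-124-disk-0@new \
  | ssh root@BACKUP_NODE zfs recv D2D3/vm-124-disk-0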

Also, what about the files in the /var/lib/pve-zsync/ directory? Are they checked in any way for snapshot state before the command runs?
The config files are just copied normally via scp. They are not related to ZFS snapshots.
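If you want to double-check what is actually on the backup side before cleaning anything up, something along these lines should do; the rep_ prefix is what pve-zsync normally uses for its own snapshots, but verify on your system:
Bash:
# job state and copied guest config files kept by pve-zsync
ls -l /var/lib/pve-zsync/

# snapshots on this guest's disks; the ones created by pve-zsync usually start with "rep_"
zfs list -t snapshot -o name,creation | grep vm-124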
 
