pve-zsync error: destination has snapshots and daylight saving

did-vmonroig

Hi.

I use pve-zsync as a backup solution between servers, sometimes with a periodicity of less than an hour. From time to time, this error arises:

Code:
cannot receive new filesystem stream: destination has snapshots (eg. rpool/data/vm-1106-disk-0@rep_mv06-servidor16_2023-10-25_17:00:01)
must destroy them to overwrite it

The only solution I've found is deleting all snapshots on the destination server, thereby losing all backups.

It seems to be related to duplicated snapshot names: it has happened again since this Sunday at 3:00 AM, so it looks related to the daylight-saving change, which produces new snapshots with exactly the same name. Could this be reported as a bug?
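pve-zsync builds snapshot names from the local wall-clock time (e.g. `rep_mv06-servidor16_2023-10-25_17:00:01`), so when clocks fall back the same timestamp can occur twice. A quick illustration with GNU `date` — the `Europe/Madrid` zone and the 2023-10-29 transition are assumptions about the locale, not taken from the post:

```shell
# During the autumn DST change, two instants one real hour apart format to
# the same local timestamp (Europe/Madrid fell back at 03:00 on 2023-10-29):
TZ=Europe/Madrid date -d '2023-10-29 00:30 UTC' '+%Y-%m-%d_%H:%M:%S'   # prints 2023-10-29_02:30:00 (still CEST)
TZ=Europe/Madrid date -d '2023-10-29 01:30 UTC' '+%Y-%m-%d_%H:%M:%S'   # prints 2023-10-29_02:30:00 (now CET)
```

A sync run in that repeated hour tries to create a snapshot whose name already exists, which would explain the "destination has snapshots" failure.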

Regards,
 
Hello everyone,

Have you found a solution to this problem? I'm seeing the same thing.

It's very strange: I have 3 VMs in sync, 2 work very well but not the 3rd.

When I run the first sync job:
Code:
WARN: COMMAND:
    zfs list -rt snapshot -Ho name zmarina/data/vm-110-disk-0@rep_srv-2019-station-new_2024-04-16_20:00:26
GET ERROR:
    cannot open 'zmarina/data/vm-110-disk-0@rep_srv-2019-station-new_2024-04-16_20:00:26': dataset does not exist
COMMAND:
    zfs send -- zmarina/data/vm-110-disk-0@rep_srv-2019-station-new_2024-04-17_15:40:02 | ssh -o 'BatchMode=yes' root@ip -- zfs recv -F -- zbacksaepp/pve-zsync01/vm-110-disk-0
GET ERROR:
    cannot receive new filesystem stream: destination has snapshots (eg. zbacksaepp/pve-zsync01/vm-110-disk-0@rep_srv-2019-station-new_2024-04-16_20:00:26)
must destroy them to overwrite it

Job --source 110 --name srv-2019-station-new got an ERROR!!!
ERROR Message:

Indeed, the snapshot was copied to the destination and then deleted from the source. The second snapshot is never created or synchronized, so no further incremental sends are possible, even though I should be keeping the snapshots on both sides.
If I delete the destination snapshot, I get the same error again!

Source:
Code:
zfs list -t snapshot zmarina/data/vm-110-disk-0
no datasets available

Dest:
Code:
zfs list -t snapshot zbacksaepp/pve-zsync01/vm-110-disk-0
NAME                                                                                USED  AVAIL     REFER  MOUNTPOINT
zbacksaepp/pve-zsync01/vm-110-disk-0@rep_srv-2019-station-new_2024-04-16_20:00:26     0B      -     42.5G  -

Code:
 pve-zsync list
SOURCE                   NAME                     STATE     LAST SYNC           TYPE  CON
100                      srv-2019-rds-ad          ok        2024-04-17_19:45:01 qemu  ssh
101                      srv-alize                ok        2024-04-17_19:45:21 qemu  ssh
110                      srv-2019-station-new     error     0                   qemu  ssh

Code:
*/15 * * * * root pve-zsync sync --source 101 --dest ip:zbacksaepp/pve-zsync --name srv-alize --maxsnap 24 --method ssh --source-user root --dest-user root
*/15 * * * * root pve-zsync sync --source 100 --dest ip:zbacksaepp/pve-zsync --name srv-2019-rds-ad --maxsnap 24 --method ssh --source-user root --dest-user root
*/20 * * * * root pve-zsync sync --source 110 --dest ip:zbacksaepp/pve-zsync01 --name srv-2019-station-new --maxsnap 24 --method ssh --source-user root --dest-user root

Do you have any idea about this situation?

Best Regards
 
Hi,
please share the VM configuration qm config 110 and output of pveversion -v. Does your VM maybe have multiple disks with the same name on different storages (i.e. both called vm-110-disk-0)? If, yes, there is a --prepend-storage-id flag you can use.
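For illustration, the flag would be appended to the sync job like any of the other options — a sketch only, reusing the placeholder `ip` and dataset names from the cron lines above; whether it is needed depends on the actual config:

```shell
# Hypothetical cron entry with --prepend-storage-id added, so that disks
# with identical names on different storages get distinct target datasets:
*/20 * * * * root pve-zsync sync --source 110 --dest ip:zbacksaepp/pve-zsync01 --name srv-2019-station-new --maxsnap 24 --method ssh --source-user root --dest-user root --prepend-storage-id
```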
 
Hi Fiona,

Code:
# qm config 110
agent: 1
boot: order=virtio0
cores: 4
cpu: host
machine: pc-i440fx-6.0
memory: 10240
name: srv-2019-station-new
net0: virtio=A6:27:DF:46:F9:2F,bridge=vmbr2,firewall=1
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-single
smbios1: uuid=c3c10596-6172-4c30-ac98-3f52fe884cfa
sockets: 2
startup: order=3,up=10
tags: srv-2019-station-new
vga: memory=128
ide0: zfs-vm:vm-110-disk-0,discard=on,size=80G
virtio0: zfs-vm:vm-110-disk-0,discard=on,size=80G
ide2: zfs-isos:iso/SERVER_EVAL_x64FR_fr_2019.iso,media=cdrom,size=4874748K
vmgenid: 79e7e3eb-e130-447b-8cd2-c8951d25bcd9

In fact, I had multiple disks with the same name because I had problems with Windows and added an IDE controller to run chkdsk, among other things.
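Those two config entries (`ide0` and `virtio0`) point at the very same volume, which is what pve-zsync stumbled over. As a hypothetical check (not part of pve-zsync), one can scan a `qm config` dump for volumes referenced by more than one controller entry — the sample lines are inlined here; in practice you would feed in `qm config 110` itself:

```shell
# Sample disk lines from a qm config dump:
cat <<'EOF' > /tmp/qm-config-110.txt
ide0: zfs-vm:vm-110-disk-0,discard=on,size=80G
virtio0: zfs-vm:vm-110-disk-0,discard=on,size=80G
scsi1: zfs-vm:vm-110-disk-1,size=32G
EOF
# Keep only disk lines, cut out the volume id (between ": " and the first
# comma), and print any volume that appears more than once:
grep -E '^(ide|sata|scsi|virtio)[0-9]+: ' /tmp/qm-config-110.txt \
  | sed -E 's/^[a-z]+[0-9]+: ([^,]+).*/\1/' \
  | sort | uniq -d
# prints: zfs-vm:vm-110-disk-0
```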

Code:
pve-zsync list
SOURCE                   NAME                     STATE     LAST SYNC           TYPE  CON 
100                      srv-2019-rds-ad          ok        2024-04-21_05:52:01 qemu  ssh 
101                      srv-alize                ok        2024-04-21_05:56:01 qemu  ssh 
105                      srv-2019-sage            ok        2024-04-21_05:45:01 qemu  ssh 
110                      srv-2019-station-new     ok        2024-04-21_05:48:01 qemu  ssh

By the way, I have a VM that no longer wants to boot from the virtio controller, only from IDE; I'll make a post about that once I've done all the tests.
Thank you very much for your support, which solved my problem and showed me something important!

Best Regards
 
Code:
ide0: zfs-vm:vm-110-disk-0,discard=on,size=80G
virtio0: zfs-vm:vm-110-disk-0,discard=on,size=80G

In fact, I had multiple disks with the same name because I had problems with Windows and added an IDE controller to run chkdsk, among other things.
Yes, pve-zsync does not expect this. Glad you were able to solve your issue :)
By the way, I have a VM that no longer wants to boot from the virtio controller, only from IDE; I'll make a post about that once I've done all the tests.
Do you have the latest version of the VirtIO guest drivers installed?
 
