Help starting with PVE-zsync

yena

Hello, I'm testing PVE-zsync on two servers.

Src server (test1):
192.168.75.10

Dst server (test2):
192.168.75.11

There is only one KVM VM on test1.

I have this error:
------------------------------------------------------------
root@test1:~/.ssh# qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
600 testDebian running 2048 20.00 2471

root@test1:~/.ssh# pve-zsync sync --source 600 --dest 192.168.75.11:rpool --verbose --maxsnap 2
disk is not on ZFS Storage

------------------------------------------------------------

On test2

root@test2:~/.ssh# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 16.9G 882G 96K /rpool
rpool/ROOT 942M 882G 96K /rpool/ROOT
rpool/ROOT/pve-1 942M 882G 942M /
rpool/STORAGE 96K 882G 96K /rpool/STORAGE
rpool/swap 15.9G 898G 64K -

------------------------------------------------------------

What does it mean: "disk is not on ZFS Storage"?


I have done this test:

pve-zsync sync --source rpool/ROOT/pve-1 /var/lib/vz/images/600/vm-600-disk-1.qcow2 --dest 192.168.75.11:rpool/STORAGE --verbose --maxsnap 2

but it copied the entire root dir /

------------------------------------------------
root@test2:/rpool/STORAGE# cd pve-1/
root@test2:/rpool/STORAGE/pve-1# ls -la
total 172
drwxr-xr-x 24 root root 24 Nov 20 18:21 .
drwxr-xr-x 3 root root 3 Nov 20 18:31 ..
drwxr-xr-x 2 root root 140 Nov 20 18:18 bin
drwxr-xr-x 4 root root 14 Nov 20 16:48 boot
drwxr-xr-x 11 root root 1695 Oct 6 09:44 dev
drwxr-xr-x 96 root root 188 Nov 20 18:18 etc
drwx------ 2 root root 7 Nov 20 18:20 .gnupg
drwxr-xr-x 2 root root 2 Aug 26 18:31 home
drwxr-xr-x 18 root root 46 Nov 20 16:45 lib
drwxr-xr-x 2 root root 3 Oct 6 09:43 lib64
drwxr-xr-x 2 root root 2 Oct 6 09:43 media
drwxr-xr-x 2 root root 2 Oct 6 09:43 mnt
drwxr-xr-x 2 root root 2 Oct 6 09:43 opt
drwxr-xr-x 2 root root 2 Aug 26 18:31 proc
drwx------ 5 root root 11 Nov 20 17:48 root
drwxr-xr-x 2 root root 2 Nov 20 18:25 rpool
drwxr-xr-x 6 root root 7 Nov 20 18:20 run
drwxr-xr-x 2 root root 216 Nov 20 16:48 sbin
drwxr-xr-x 2 root root 2 Oct 6 09:43 srv
drwxr-xr-x 3 root root 3 Nov 20 18:21 STORAGE
drwxr-xr-x 2 root root 2 Apr 6 2015 sys
drwxrwxrwt 9 root root 9 Nov 20 18:26 tmp
drwxr-xr-x 10 root root 10 Oct 6 09:43 usr
drwxr-xr-x 11 root root 13 Oct 6 09:43 var
-------------------------------------------------------------------------



Thanks!!
 
Please can you post the VM config?

---------------------------------------------------------
root@test1:/etc/pve/nodes/test1/qemu-server# cat 600.conf
bootdisk: virtio0
cores: 4
ide2: local:iso/debian-8.1.0-amd64-CD-1.iso,media=cdrom
memory: 2048
name: testDebian
net0: virtio=32:66:64:66:64:38,bridge=vmbr0
numa: 0
ostype: l26
smbios1: uuid=cba3688f-a02f-49ed-a1d9-052f32d94277
sockets: 1
virtio0: local:600/vm-600-disk-1.qcow2,cache=writeback,size=20G
-------------------------------------------------------------------------



I have also tried a new test:
I created a new dataset (rpool/VPS2)
and added it as a storage using the web panel.
On this new ZFS storage I restored a backup to create a new VPS with ID 700:

------------------------------------------------------------------------
root@test1:/etc/pve/nodes/test1/qemu-server# cat 700.conf
bootdisk: virtio0
cores: 4
ide2: local:iso/debian-8.1.0-amd64-CD-1.iso,media=cdrom
memory: 2048
name: testDebian
net0: virtio=32:66:64:66:64:38,bridge=vmbr0
numa: 0
ostype: l26
smbios1: uuid=cba3688f-a02f-49ed-a1d9-052f32d94277
sockets: 1
virtio0: VPS2:vm-700-disk-1,cache=writeback,size=20G




-------------------------------------------------------
root@test1:/etc/pve/nodes/test1/qemu-server# zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
rpool 32.2G 1.72T 96K /rpool
rpool/ROOT 4.17G 1.72T 96K /rpool/ROOT
rpool/ROOT/pve-1 4.17G 1.72T 4.17G /
rpool/STORAGE 96K 1.72T 96K /rpool/STORAGE
rpool/VPS2 20.6G 1.72T 96K /rpool/VPS2
rpool/VPS2/vm-700-disk-1 20.6G 1.74T 1.37G -
rpool/swap 7.44G 1.73T 64K -
-------------------------------------------------------

This way, I can now use:
pve-zsync sync --source 700 --dest 192.168.75.11:rpool/STORAGE2 --verbose --maxsnap 3

and the first time it works well:
-------------------------------------------------------------
root@test2:~# zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
rpool 18.2G 881G 96K /rpool
rpool/ROOT 942M 881G 96K /rpool/ROOT
rpool/ROOT/pve-1 942M 881G 942M /
rpool/STORAGE2 1.37G 881G 96K /rpool/STORAGE2
rpool/STORAGE2/vm-700-disk-1 1.37G 881G 1.37G -
rpool/swap 15.9G 897G 64K -
---------------------------------------------------------------


BUT if I try a second sync:
-------------------------------------------------------------
root@test1:/etc/pve/nodes/test1/qemu-server# pve-zsync sync --source 700 --dest 192.168.75.11:rpool/STORAGE2 --verbose --maxsnap 3
send from @ to rpool/VPS2/vm-700-disk-1@rep_default_2015-11-20_19:39:23 estimated size is 1.73G
total estimated size is 1.73G
TIME SENT SNAPSHOT
warning: cannot send 'rpool/VPS2/vm-700-disk-1@rep_default_2015-11-20_19:39:23': Broken pipe
COMMAND:
zfs send -v -- rpool/VPS2/vm-700-disk-1@rep_default_2015-11-20_19:39:23 | ssh root@192.168.75.11 -- zfs recv -- rpool/STORAGE2/vm-700-disk-1@rep_default_2015-11-20_19:39:23
GET ERROR:
cannot receive new filesystem stream: destination 'rpool/STORAGE2/vm-700-disk-1' exists
must specify -F to overwrite it
------------------------------------------------------------------------------------

I have checked with
zfs list -t all
and I have removed all snapshots..
but no luck..

Thanks!!
 
Hi,

it is not possible to sync a VM disk (vm-<VMID>-disk-<NO>) with pve-zsync
if it does not reside on ZFS (a zfspool storage); your disk lies on the storage 'local', which is not a zfspool.

It also looks like you are on an old version, please update.
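
One way to get such a disk onto ZFS would be to move it to a zfspool-backed storage, for example with qm move_disk (a rough sketch; the storage name 'VPS2' below is just an assumed example, use whatever zfspool storage you have configured):

Code:
qm move_disk 600 virtio0 VPS2 --delete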
 
Hi,

it is not possible to sync a VM disk (vm-<VMID>-disk-<NO>) with pve-zsync
if it does not reside on ZFS (a zfspool storage); your disk lies on the storage 'local', which is not a zfspool.

It also looks like you are on an old version, please update.

I have a fresh install with the latest apt updates:

------------------------------------------------------------------------
root@test1:~# pveversion -v
proxmox-ve: 4.0-22 (running kernel: 4.2.3-2-pve)
pve-manager: 4.0-57 (running version: 4.0-57/cc7c2b53)
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-36
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-12
pve-container: 1.0-21
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie
-------------------------------------------


Thanks
 
I don't know which repo you are on, but for this you have to use pvetest,
which includes pve-zsync 1.6-5.
You can also download it manually:

Code:
wget http://download.proxmox.com/debian/dists/jessie/pvetest/binary-amd64/pve-zsync_1.6-3_all.deb
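
and then install the downloaded package with dpkg, for example (filename as downloaded):

Code:
dpkg -i pve-zsync_1.6-3_all.deb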
 
I already have 1.6-3:
---------------------------------------------------------------------------------------------------------------
dpkg -l|grep pve-zs
ii pve-zsync 1.6-3 all Proxmox VE storage management library
---------------------------------------------------------------------------------------------------------------

... I can't find 1.6-5, only 1.6-3

Thanks!!
 
OK, I have updated it to 1.6-6 :-)

Now it starts, but with this error:
"cannot receive new filesystem stream: dataset is busy"

------------------------------------------------------------------------------------

root@test1:~# pve-zsync sync --source 700 --dest 192.168.75.11:rpool/STORAGE2 --verbose --maxsnap 3
send from @ to rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10 estimated size is 1.73G
total estimated size is 1.73G
TIME SENT SNAPSHOT
12:13:12 86.8M rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:13 190M rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:14 285M rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:15 390M rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:16 453M rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:17 556M rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:18 657M rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:19 752M rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:20 836M rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:21 944M rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:22 1.02G rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:23 1.11G rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:24 1.21G rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:25 1.31G rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:26 1.42G rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:27 1.52G rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:28 1.63G rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
12:13:29 1.74G rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10
COMMAND:
zfs send -v -- rpool/VPS2/vm-700-disk-1@rep_default_2015-11-23_12:13:10 | ssh root@192.168.75.11 -- zfs recv -F -- rpool/STORAGE2/vm-700-disk-1
GET ERROR:
cannot receive new filesystem stream: dataset is busy

------------------------------------------------------------------------------------

root@test1:~# zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
rpool 32.2G 1.72T 96K /rpool
rpool/ROOT 4.17G 1.72T 96K /rpool/ROOT
rpool/ROOT/pve-1 4.17G 1.72T 4.17G /
rpool/STORAGE 96K 1.72T 96K /rpool/STORAGE
rpool/VPS2 20.6G 1.72T 96K /rpool/VPS2
rpool/VPS2/vm-700-disk-1 20.6G 1.74T 1.37G -
rpool/swap

-------------------------------------------------

root@test2:~# zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
rpool 18.2G 881G 96K /rpool
rpool/ROOT 944M 881G 96K /rpool/ROOT
rpool/ROOT/pve-1 944M 881G 944M /
rpool/STORAGE2 1.37G 881G 96K /rpool/STORAGE2
rpool/STORAGE2/vm-700-disk-1 1.37G 881G 1.37G -
rpool/swap 15.9G 897G 64K -

---------------------------------------------

I haven't got iSCSI or anything similar, and there are no dmesg errors.

Each server:
2x 2TB HDD in a ZFS mirror (no RAID card)


Many Thanks!
 
If there are any problems, please post them!
 
I have tried to recover the KVM virtual machine on the dest server (test2);
I copied and modified the .conf file:

-----------------------------------------------------------------
root@test2:~# cat /etc/pve/nodes/test2/qemu-server/700.conf
bootdisk: virtio0
cores: 4
ide2: none,media=cdrom
memory: 2048
name: testDebian
net0: virtio=32:66:64:66:64:38,bridge=vmbr0
numa: 0
ostype: l26
smbios1: uuid=cba3688f-a02f-49ed-a1d9-052f32d94277
sockets: 1
virtio0: STORAGE2:vm-700-disk-1,cache=writeback,size=20G

-----------------------------------------------------------------

root@test2:~# zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
rpool 18.4G 881G 96K /rpool
rpool/ROOT 945M 881G 96K /rpool/ROOT
rpool/ROOT/pve-1 945M 881G 945M /
rpool/STORAGE2 1.57G 881G 96K /rpool/STORAGE2
rpool/STORAGE2/vm-700-disk-1 1.57G 881G 1.57G -
rpool/STORAGE2/vm-700-disk-1@rep_default_2015-11-23_16:12:25 4.93M - 1.57G -
rpool/STORAGE2/vm-700-disk-1@rep_default_2015-11-23_16:18:32 192K - 1.57G -
rpool/STORAGE2/vm-700-disk-1@rep_default_2015-11-23_16:24:18 152K - 1.57G -
rpool/swap
-----------------------------------------------------------------


I have added STORAGE2 as a ZFS storage using the web panel,
and the VM now starts and runs on test2 at the last sync state. :-)
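
(For reference, adding the storage on the CLI should also be possible with something like the following; I used the web panel here.)

Code:
pvesm add zfspool STORAGE2 -pool rpool/STORAGE2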

One question:
How can I start a previous snapshot on test2? (for example "vm-700-disk-1@rep_default_2015-11-23_16:12:25")

How can I recover some files by mounting a snapshot on test2?
I have tried some zfs commands like:

zfs set snapdir=visible rpool/STORAGE2

root@test2:/rpool# zfs mount -o ro rpool/STORAGE2/vm-700-disk-1
cannot open 'rpool/STORAGE2/vm-700-disk-1': operation not applicable to datasets of this type

mount -t zfs rpool/STORAGE2/vm-700-disk-1 /home/mount/
filesystem 'rpool/STORAGE2/vm-700-disk-1' cannot be mounted, unable to open the dataset


but no luck :-)




Thanks!
 
One question:
How can I start a previous snapshot on test2? (for example "vm-700-disk-1@rep_default_2015-11-23_16:12:25")
Never try to start a VM directly on the backup image.
You would change the image and could not sync anymore, or the next sync would overwrite the changes you made locally.

The correct way would be: zfs send rpool/STORAGE2/vm-700-disk-1@rep_default_2015-11-23_16:12:25 | zfs receive rpool/<NewSubvol>/vm-<VMID>-disk-1
or, if you only want a quick view, use zfs clone.
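
A clone of the replicated snapshot could look like this, for example (the target name is just an illustration):

Code:
zfs clone rpool/STORAGE2/vm-700-disk-1@rep_default_2015-11-23_16:12:25 rpool/RESTORED/vm-700-clone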

How can I recover some files by mounting a snapshot on test2?
I have tried some zfs commands like:

zfs set snapdir=visible rpool/STORAGE2
This only works with a ZFS filesystem (LXC), not with a zvol, which is what we use for KVM.
But that way you don't have to mount them:
every ZFS filesystem has a hidden directory .zfs/snapshot/<snapname> where you can access the snapshot,
which you can make visible with

zfs set snapdir=visible rpool/STORAGE2

And when you want to mount a zvol, you have to mount it with mount, using the path /dev/zvol/<pool>/...[@snapname]
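
For example, to look at a zvol snapshot read-only, something like this might work (an untested sketch; it assumes the snapshot device node is exposed via the snapdev property):

Code:
zfs set snapdev=visible rpool/STORAGE2/vm-700-disk-1
mount -o ro /dev/zvol/rpool/STORAGE2/vm-700-disk-1@rep_default_2015-11-23_16:12:25-part1 /mnt/restored

(If no -part1 node appears for the snapshot, running kpartx -a on the snapshot device should create the partition mappings.)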
 
So.. I have created a new dataset:
rpool/RESTORED 1.57G 879G 96K /rpool/RESTORED

and tested:

zfs send rpool/STORAGE2/vm-700-disk-1@rep_default_2015-11-23_16:18:32 | zfs receive rpool/RESTORED/vm-800-disk-1

And it works well!:
----------------------------------------------------------------------------------------------
root@test2:/dev/zvol/rpool# zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
rpool 20.0G 879G 96K /rpool
rpool/RESTORED 1.57G 879G 96K /rpool/RESTORED
rpool/RESTORED/vm-800-disk-1 1.57G 879G 1.57G -
rpool/RESTORED/vm-800-disk-1@rep_default_2015-11-23_16:18:32 1.65M - 1.57G -
----------------------------------------------------------------------------------------------

I have started the "cloned" snapshot and it boots, which is good.
Now I see a snapshot on my new pool;
can I delete "rpool/RESTORED/vm-800-disk-1@rep_default_2015-11-23_16:18:32"?

Second step: I have tested the procedure to recover some files from my restored backup.
I have this:
---------------------------------------------------------------------------------
root@test2:/dev/zvol/rpool# ls -la /dev/zvol/rpool/RESTORED/
total 0
drwxr-xr-x 2 root root 120 Nov 24 12:49 .
drwxr-xr-x 4 root root 100 Nov 24 12:49 ..
lrwxrwxrwx 1 root root 13 Nov 24 12:49 vm-800-disk-1 -> ../../../zd32
lrwxrwxrwx 1 root root 15 Nov 24 12:49 vm-800-disk-1-part1 -> ../../../zd32p1
lrwxrwxrwx 1 root root 15 Nov 24 12:49 vm-800-disk-1-part2 -> ../../../zd32p2
lrwxrwxrwx 1 root root 15 Nov 24 12:49 vm-800-disk-1-part5 -> ../../../zd32p5
------------------------------------------------------------------------------------------------

But I don't know the mount procedure :-)
I have tried:

-------------------------------------------------------------------------------------------------------------
mkdir /mnt/restored
root@test2:/dev/zvol/rpool# mount /dev/zvol/rpool/RESTORED/vm-800-disk-1 /mnt/restored
mount: wrong fs type, bad option, bad superblock on /dev/zd32,
missing codepage or helper program, or other error


In some cases useful info is found in syslog - try
dmesg | tail or so.
---------------------------------------------------------------------------------------------
But without luck ...


Thanks!!
 
The snapshot will normally be handled by pve-zsync.

You have more than one partition on the zvol, so you have to mount a partition, not the whole disk.
Treat the zvol like a regular disk; it is just not called /dev/sdX but /dev/zvol/<pool>/<name>.
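
(The snapshot on the manually received copy under rpool/RESTORED is not tracked by pve-zsync, so if it is no longer needed it should be removable with something like:)

Code:
zfs destroy rpool/RESTORED/vm-800-disk-1@rep_default_2015-11-23_16:18:32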
 
The snapshot will normally be handled by pve-zsync.

You have more than one partition on the zvol, so you have to mount a partition, not the whole disk.
Treat the zvol like a regular disk; it is just not called /dev/sdX but /dev/zvol/<pool>/<name>.

I have tested the recovery of single files by mounting the backup on the second server;
it's simple when there is no LVM inside the image:

mount /dev/zvol/rpool/RESTORED/vm-800-disk-1-part5 /mnt/restored

Works well.

Regarding LVM partitions, it is not as easy.. I would have to read the first bytes of the image and manually recover the whole LVM structure (VG, LV, ..).
I don't think this is the right way.. it is simpler to mount the image with something like a Knoppix live CD ... but even that is a lot of work.
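
(A possible alternative that I have not tried: once the partition devices are visible under /dev/zvol, the guest's LVM could be activated on the host with the standard LVM tools, assuming the guest's VG name does not clash with one used by the host. The <guest-vg>/<guest-lv> names below are placeholders.)

Code:
pvscan                      # detect the guest's PV on the zvol partition
vgchange -ay <guest-vg>     # activate the guest's volume group
lvs                         # list its logical volumes
mount -o ro /dev/<guest-vg>/<guest-lv> /mnt/restored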

---------------------------

LXC is simpler to recover;
after a
pve-zsync sync --source 100 --dest 192.168.75.11:rpool/STORAGE2 --verbose --maxsnap 3

I find
rpool/VPS2/subvol-100-disk-1 on /rpool/VPS2/subvol-100-disk-1 type zfs (rw,noatime,xattr,noacl)
and here I can recover my files.

I have noticed that it is auto-mounted read-write.
Is the risk of working on this synced subvol in RW the same as with KVM?
Do I have to clone it like in the KVM case?

Thanks!!
 
If you restart the sync, then yes, you should clone it to ensure you do not change the data.
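
For a read-only look, the clone could be taken from one of the replication snapshots, for example (the snapshot name below is just an illustration, use one that actually exists on the destination):

Code:
zfs clone rpool/STORAGE2/subvol-100-disk-1@rep_default_2015-11-23_16:24:18 rpool/RESTORED/subvol-100-clone

The clone should then show up mounted under /rpool/RESTORED/subvol-100-clone, where the files can be read without touching the synced subvol.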
 
