[SOLVED] Issue with pve-zsync?

killmasta93

Active Member
Aug 13, 2017
Hi,
I was wondering if someone could give me a hand with an issue I'm having. I'm currently trying to use pve-zsync, but I'm getting an "ERROR: in path" error.
This is the command I'm running:
Code:
pve-zsync create --source 101 --dest 192.168.100.252:rpool/vmbackup --verbose --maxsnap 7 --name snapshotzeus
This is the ZFS layout ('zfs list') on node 192.168.100.252:
Code:
rpool                     6.94G   892G    96K  /rpool
rpool/ROOT                2.89G   892G    96K  /rpool/ROOT
rpool/ROOT/pve-1          2.89G   892G  2.89G  /
rpool/data                4.04G   892G    96K  /rpool/data
rpool/data/vm-103-disk-0  1.98G   892G  1.98G  -
rpool/data/vm-104-disk-1  2.07G   892G  2.07G  -
rpool/vmbackup              96K   400G    96K  /rpool/vmbackup
So I'm not sure why this is happening.

Thank you
 

tim

Proxmox Staff Member
Oct 1, 2018
Can you please post the error output?
What is your intention, replication to the same node?
As I read the command, you are syncing VM 101 on node .252 to the same .252 node.
 

killmasta93

Active Member
Aug 13, 2017
Thanks for the reply. On node 192.168.100.253 I'm running this command:
Code:
pve-zsync create --source 101 --dest 192.168.100.252:rpool/vmbackup --verbose --maxsnap 7 --name snapshotzeus
but when I run it I get this error:

Code:
ERROR: in path
 

mira

Proxmox Staff Member
Aug 1, 2018
Is 101 a VM or CT? This error means the path returned by 'pvesm path <storage>:<disk>' does not match /dev/zvol/<pool>/<disk> for VMs, or, for a CT, any path ending in <disk>.
Can you post the output of 'pvesm path <storage>:<disk>' for every disk of 101?
Please also post the config ('qm config 101' or 'pct config 101') and the output of 'ls /dev/zvol/<pool> | grep 101' if it is a VM.
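For illustration, a minimal sketch of that path check. The comparison logic follows the description above; the `$path` value is hard-coded here to stand in for a real `pvesm path local-zfs:vm-101-disk-0` call, and the storage/pool/disk names are taken from this thread:

```shell
# Simulated reproduction of the VM path check described above.
# On a real node you would instead run:
#   path=$(pvesm path local-zfs:vm-101-disk-0)
path="/dev/zvol/rpool/data/vm-101-disk-0"      # stand-in for pvesm output
expected="/dev/zvol/rpool/data/vm-101-disk-0"  # /dev/zvol/<pool>/<disk>

# pve-zsync only proceeds when the two match; otherwise: "ERROR: in path"
if [ "$path" = "$expected" ]; then
    echo "path OK"
else
    echo "ERROR: in path"
fi
```

Note that anything extra mixed into the command output (for example warnings printed over SSH) would make such a comparison fail even when the disk itself is fine.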
 

killmasta93

Active Member
Aug 13, 2017
Thanks for the reply. 101 is a VM on 192.168.100.253; 192.168.100.252 is another node in that cluster, which I want to sync to as a backup.
As for the pvesm path:
Code:
pvesm path local-zfs:vm-101-disk-0
/dev/zvol/rpool/data/vm-101-disk-0
As for the other output, I got this:
Code:
root@prometheus:~# ls /dev/zvol/rpool/data/ | grep 101
vm-101-disk-0
vm-101-disk-0-part1
vm-101-disk-0-part2
And this is the config ('qm config 101'):
Code:
agent: 1
bootdisk: virtio0
cores: 2
memory: 8000
name: Zeus
net0: virtio=A6:81:B7:FF:06:C3,bridge=vmbr0
numa: 0
onboot: 1
ostype: win8
scsihw: virtio-scsi-pci
smbios1: uuid=d48eb2e7-3b25-44b1-85c1-8b7d95a8e388
sockets: 1
virtio0: local-zfs:vm-101-disk-0,cache=writeback,size=128G
vmgenid: ed00af80-1c07-4df3-a197-d8ba513b8d29
Thank you
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
Hi,

can you please send the output of:
Code:
pveversion -v
 

killmasta93

Active Member
Aug 13, 2017
Thanks for the reply.
This is what I get:

Code:
proxmox-ve: 5.3-1 (running kernel: 4.15.18-10-pve)
pve-manager: 5.3-8 (running version: 5.3-8/2929af8e)
pve-kernel-4.15: 5.3-1
pve-kernel-4.15.18-10-pve: 4.15.18-32
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-44
libpve-guest-common-perl: 2.0-19
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-36
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-2
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-33
pve-container: 2.0-33
pve-docs: 5.3-1
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-17
pve-firmware: 2.0-6
pve-ha-manager: 2.0-6
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 3.10.1-1
pve-zsync: 1.7-3
qemu-server: 5.0-45
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
OK, this looks fine, but it is not the current version.

I tested your settings and this should all work.
On which node do you run pve-zsync?
And what is the other node (OS distribution)?

Is my assumption correct that prometheus is the host where the VM runs?
 

killmasta93

Active Member
Aug 13, 2017
Thanks for the reply.
Node 192.168.100.253 (prometheus) runs pve-zsync and sends to node 2 at 192.168.100.252.

Thank you
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
Is node 2 also a Proxmox VE?
If yes, what version do you use there?
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
To understand your setup completely, please send me all the output of the following commands from both nodes.
Code:
zfs list -t all
zpool status
cat /etc/cron.d/pve-zsync
cat /var/lib/pve-zsync/sync_state
uname -a 
dpkg -l zfsutils-linux libpve-storage-perl
 

killmasta93

Active Member
Aug 13, 2017
Thanks for the reply. I thought it was that node (prometheus2), but I tried sending a snapshot from prometheus3 to prometheus instead and hit the same issue.

This is prometheus (192.168.100.253):
Code:
root@prometheus:~# zfs list -t all
NAME                                                      USED  AVAIL  REFER  MOUNTPOINT
rpool                                                    16.3G  1.74T   104K  /rpool
rpool/ROOT                                               6.55G  1.74T    96K  /rpool/ROOT
rpool/ROOT/pve-1                                         6.55G  1.74T  6.55G  /
rpool/data                                               9.72G  1.74T    96K  /rpool/data
rpool/data/vm-101-disk-0                                 6.97G  1.74T  6.96G  -
rpool/data/vm-101-disk-0@__replicate_101-0_1555111500__  1.25M      -  6.96G  -
rpool/data/vm-102-disk-0                                  776M  1.74T   775M  -
rpool/data/vm-102-disk-0@__replicate_102-0_1555111506__   764K      -   775M  -
rpool/data/vm-103-disk-0                                 2.00G  1.74T  2.00G  -
rpool/data/vm-103-disk-0@__replicate_103-0_1554441241__     0B      -  2.00G  -
rpool/vmbackup                                             96K   400G    96K  /rpool/vmbackup
Code:
root@prometheus:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
Code:
root@prometheus:~# cat /etc/cron.d/pve-zsync
root@prometheus:~#
Code:
root@prometheus:~# uname -a
Linux prometheus 4.15.18-10-pve #1 SMP PVE 4.15.18-32 (Sat, 19 Jan 2019 10:09:37 +0100) x86_64 GNU/Linux
Code:
root@prometheus:~# dpkg -l zfsutils-linux libpve-storage-perl
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                                                          Version                             Architecture                        Description
+++-=============================================================-===================================-===================================-================================================================================================================================
ii  libpve-storage-perl                                           5.0-36                              all                                 Proxmox VE storage management library
ii  zfsutils-linux                                                0.7.12-pve1~bpo1                    amd64                               command-line tools to manage OpenZFS filesystems
 

killmasta93

Active Member
Aug 13, 2017
And this is prometheus3 (192.168.100.251):

Code:
root@prometheus3:~# zfs list -t all
NAME                                                      USED  AVAIL  REFER  MOUNTPOINT
rpool                                                    10.7G   439G    96K  /rpool
rpool/ROOT                                                891M   439G    96K  /rpool/ROOT
rpool/ROOT/pve-1                                          891M   439G   891M  /
rpool/data                                               9.79G   439G    96K  /rpool/data
rpool/data/vm-101-disk-0                                 6.96G   439G  6.96G  -
rpool/data/vm-101-disk-0@__replicate_101-0_1555111620__     0B      -  6.96G  -
rpool/data/vm-102-disk-0                                  775M   439G   775M  -
rpool/data/vm-102-disk-0@__replicate_102-0_1555111626__     0B      -   775M  -
rpool/data/vm-104-disk-0                                   56K   439G    56K  -
rpool/data/vm-104-disk-1                                 2.07G   439G  2.07G  -
rpool/data/vm-104-disk-1@__replicate_104-0_1554441301__     8K      -  2.07G  -
rpool/vmbackup                                             96K   400G    96K  /rpool/vmbackup
Code:
root@prometheus3:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          sda3      ONLINE       0     0     0

errors: No known data errors
Code:
root@prometheus3:~# cat /etc/cron.d/pve-zsync
root@prometheus3:~#
Code:
root@prometheus3:~# uname -a
Linux prometheus3 4.15.18-10-pve #1 SMP PVE 4.15.18-32 (Sat, 19 Jan 2019 10:09:37 +0100) x86_64 GNU/Linux
Code:
root@prometheus3:~# dpkg -l zfsutils-linux libpve-storage-perl
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                      Version           Architecture      Description
+++-=========================-=================-=================-=======================================================
ii  libpve-storage-perl       5.0-36            all               Proxmox VE storage management library
ii  zfsutils-linux            0.7.12-pve1~bpo1  amd64             command-line tools to manage OpenZFS filesystems
 

killmasta93

Active Member
Aug 13, 2017
Oh wow, this is odd. I ran the same command in the shell on Proxmox and it worked, but from my SSH terminal it won't work. Not sure if it has something to do with Perl.

This is what I get when I do it through SSH from my Kubuntu desktop:
Code:
root@prometheus3:~# pve-zsync create --source 104 --dest 192.168.100.253:rpool/vmbackup --verbose --maxsnap 7 --name backupubuntu
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LC_MEASUREMENT = "es_CO.UTF-8",
        LC_PAPER = "es_CO.UTF-8",
        LC_MONETARY = "es_CO.UTF-8",
        LC_NAME = "es_CO.UTF-8",
        LC_ADDRESS = "es_CO.UTF-8",
        LC_NUMERIC = "es_CO.UTF-8",
        LC_TELEPHONE = "es_CO.UTF-8",
        LC_IDENTIFICATION = "es_CO.UTF-8",
        LC_TIME = "es_CO.UTF-8",
        LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
ERROR: in path
 

wolfgang

Proxmox Staff Member
Oct 1, 2014
OK, you have two problems.
1.) You have to set your LC_* variables; see https://wiki.debian.org/Locale
2.) Your snapshot states are not the same, so you are not able to resume replication.
This can't be repaired in place: you have to remove the snapshots at the source and delete the images on the backup side.
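The locale warnings appear because the desktop's SSH client forwards its LC_* variables (here es_CO.UTF-8) to the node, where they are not installed. A sketch of the quick workaround, assuming C.UTF-8 is available (as on any Debian-based PVE node); the permanent fixes are shown as comments:

```shell
# Quick per-session workaround: pin a locale that exists on the node
# before running pve-zsync.
export LC_ALL=C.UTF-8

# Permanent fixes (run manually, shown here as comments only):
#   on the node:    echo 'es_CO.UTF-8 UTF-8' >> /etc/locale.gen && locale-gen
#   on the desktop: comment out 'SendEnv LANG LC_*' in /etc/ssh/ssh_config
echo "LC_ALL=$LC_ALL"
```

With the warnings gone, the SSH command output is clean again, which matters because pve-zsync parses that output.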
 

ozgurerdogan

Active Member
May 2, 2010
Bursa, Turkey
Code:
root@backup:/D4BACKUP# ./pve-zsync.sh
ERROR: in path
ERROR: in path
ERROR: in path
...
...

./pve-zsync.sh: line 1: $'\r': command not found
./pve-zsync.sh: line 2: $'\r': command not found
But some syncs do complete. I just upgraded to v6; with v5 it was running fine. I am syncing to a different node via IP.

Code:
/usr/sbin/pve-zsync sync --source 1.2.3.4:103 --dest D2 --name 103 --maxsnap 10 --method ssh; ... ... ... ..
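As a side note on the errors above: `$'\r': command not found` almost always means the script file was saved with Windows (CRLF) line endings. A small self-contained sketch of the repair, using a hypothetical demo file rather than the real pve-zsync.sh:

```shell
# Simulate a script saved with CRLF line endings, then repair it.
printf 'echo sync-ok\r\n' > /tmp/pve-zsync-demo.sh
sed -i 's/\r$//' /tmp/pve-zsync-demo.sh   # strip trailing \r (dos2unix works too)
bash /tmp/pve-zsync-demo.sh               # now runs cleanly and prints: sync-ok
```

Running `sed -i 's/\r$//' pve-zsync.sh` (or `dos2unix pve-zsync.sh`) on the real script should make the `$'\r'` errors disappear.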
 
