pve-zsync fails with "uninitialized value"

Hello folks,


I have used the fantastic pve-zsync with success on all my containers.
But one of them is giving me a hard time…

For a couple of days now, and without any changes on my side, it has been failing with "uninitialized value" warnings.

pve-zsync sync --source 110 --dest 192.168.210.28:tank/proxmini --name NewMail --maxsnap 120 --limit 6250 --method ssh

Use of uninitialized value $stor in concatenation (.) or string at /usr/sbin/pve-zsync line 784.
Use of uninitialized value $disk in concatenation (.) or string at /usr/sbin/pve-zsync line 784.
COMMAND:
pvesm path ''
GET ERROR:
400 Parameter verification failed.
volume: invalid format - unable to parse volume ID ''

pvesm path <volume>

Job --source 110 --name NewMail got an ERROR!!!
ERROR Message:
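
To see what pve-zsync hands to pvesm, the failing step can be replayed by hand. This is only a rough sketch of what I tried - the volume name comes from my own container and will differ on other setups:

# list the disks and mount points of the container
pct config 110
# resolving a normal <storage>:<volume> ID works fine
pvesm path ZFS:subvol-110-disk-1
# an empty (or otherwise unparsable) volume ID reproduces exactly the error above
pvesm path ''

So pve-zsync apparently ends up passing an empty volume ID for something in this container.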


I have been looking here and there without any success.
I can't really tell where the problem comes from… especially since the other LXC containers are backed up without any trouble.


pve-manager/4.2-14/655f944a (running kernel: 4.4.10-1-pve)


As far as I can tell, the problem has been occurring since the update to 4.2-14.

The only "specific thing" about this container is an NFS mount inside the LXC container.
Any help will be appreciated.
 
Hi,
can you please tell me what version you are on?
Use:
pveversion -v
dpkg-query --show pve-zsync
 
I'm not 100% sure, but I think the new version 1.6.10 will fix your problem.
 
I am already at 1.6-10

pve-zsync	1.6-10


proxmox-ve: 4.2-54 (running kernel: 4.4.10-1-pve)
pve-manager: 4.2-14 (running version: 4.2-14/655f944a)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.4.8-1-pve: 4.4.8-52
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.4.10-1-pve: 4.4.10-54
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-42
qemu-server: 4.0-80
pve-firmware: 1.1-8
libpve-common-perl: 4.0-68
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-54
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-19
pve-container: 1.0-68
pve-firewall: 2.0-29
pve-ha-manager: 1.0-31
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
openvswitch-switch: 2.5.0-1
 
OK, so I found a workaround for this bug.

  1. I destroyed the pve-zsync job
  2. I destroyed the snapshot and all children
  3. I re-created the pve-zsync job using the ZFS dataset path instead of the VMID

==> NOT working:
pve-zsync create --source 110 --dest 192.168.210.28:tank/proxmini --limit 12500 --verbose --maxsnap 60 --name NewMail

==> Working:
pve-zsync create --source rpool/subvol-110-disk-1 --dest 192.168.210.28:tank/proxmini --limit 12500 --verbose --maxsnap 60 --name NewMail


This has allowed me to get my snapshot up and running.
Obviously a bug with --source <VMID>.
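
To double-check the recreated job I looked at the cron entry pve-zsync writes and at its own job overview. Rough sketch from memory - verify the paths and subcommands against your pve-zsync version:

# pve-zsync stores its jobs as cron entries
cat /etc/cron.d/pve-zsync
# list the configured jobs and show the state of their last runs
pve-zsync list
pve-zsync status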
 
Could you post the configuration of that VM ("/etc/pve/local/qemu-server/110.conf") and the storage configuration ("/etc/pve/storage.cfg")?
 
It is not a QEMU VM but an LXC container:

root@proxmini:/home/gregober# cat /etc/pve/lxc/110.conf

#server: in production
#service: mail server and antispam
#
#eth0: 2xx.xx1.1x2.9
#eth1: 192.168.210.26 (newmail.osnet.lan)
#eth2: 192.168.220.26
arch: amd64
cpulimit: 2
cpuunits: 1024
hostname: newmail.xxx.yyy
memory: 5120
mp0: /mnt/pve/newmail_data,mp=/home/mail/virtual
nameserver: 192.168.210.106
net0: bridge=vmbr0,gw=2xx.xx1.1x2.1,hwaddr=3A:64:62:62:63:62,ip=2xx.xx1.1x2.9/28,name=eth0,tag=213,type=veth
net1: bridge=vmbr1,hwaddr=62:30:64:32:34:35,ip=192.168.210.26/24,name=eth1,tag=210,type=veth
net2: bridge=vmbr1,hwaddr=3A:61:31:33:31:35,ip=192.168.220.26/24,name=eth2,tag=220,type=veth
onboot: 1
ostype: ubuntu
rootfs: ZFS:subvol-110-disk-1,size=256G
searchdomain: 192.168.210.25
swap: 4096

What's specific about this container is that it has an NFS mount inside (the mp0 mount point) - I would bet that's the cause of the bug.
pve-zsync does not seem to handle this NFS mount correctly.
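
My guess (not a confirmed diagnosis) is that the difference between the two mount points is what trips pve-zsync up: rootfs is a storage-backed volume with a proper <storage>:<volume> ID, while mp0 is just a path on the host, so there is no storage ID to split off:

# rootfs is a real storage volume, pvesm can resolve it
pvesm path ZFS:subvol-110-disk-1
# mp0 is a plain directory bind-mounted into the container; it has no <storage>: prefix,
# which would leave $stor and $disk empty when the value is split like a volume ID
grep ^mp0 /etc/pve/lxc/110.conf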




root@proxmini:/home/gregober# cat /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        maxfiles 0
        content iso,rootdir,vztmpl,images,backup

zfspool: ZFS
        pool rpool
        content images,rootdir

nfs: tide_vzbackup
        export /mnt/tank/NFS/vzbackup
        server 192.168.210.28
        path /mnt/pve/tide_vzbackup
        content backup
        options vers=3
        maxfiles 10
        nodes proxmini

nfs: tide_local
        export /mnt/tank/NFS/proxmox
        server 192.168.210.28
        path /mnt/pve/tide_local
        content images,backup,vztmpl,rootdir,iso
        options vers=3
        maxfiles 10

nfs: newmail_data
        export /mnt/data/newmail/virtual
        server 192.168.210.140
        path /mnt/pve/newmail_data
        content backup,rootdir
        options vers=4
        maxfiles 1
 
