pve-zsync "failed to read from stream" (any snapshot)

chencho

Hi all.

Some time ago I configured two servers to send snapshots to a third server.
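For reference, the recurring jobs were set up with pve-zsync create, roughly like this (a reconstruction from the sync parameters shown further down, not the exact command):

Code:
# hypothetical reconstruction of the job setup; parameters match the
# manual 'pve-zsync sync' command shown below
pve-zsync create --source 904 --dest 10.0.0.1:rpool/server2 --name gitlab --maxsnap 48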

Server 1 works fine.

Server 2 does not (it stopped working after some time).

I upgraded all the servers, and the only difference I found is when I ran:

Code:
zfs list -t snapshot

On server 1 I get a very long list.

On server 2 I get nothing.

When I run this on server 2:

Code:
pve-zsync sync --source 904 --dest 10.0.0.1:rpool/server2 --name gitlab --maxsnap 48 --method ssh

I get:

Code:
internal error: Argumento inválido ("Invalid argument")
Job --source 904 --name gitlab got an ERROR!!!
ERROR Message:
COMMAND:
    zfs send -- backups/subvol-904-disk-1@rep_gitlab_2018-05-30_17:24:04 | ssh -o 'BatchMode=yes' root@10.0.0.1 -- zfs recv -F -- rpool/server4/subvol-904-disk-1
GET ERROR:
    cannot receive: failed to read from stream

And I think it is because backups/subvol-904-disk-1@rep_gitlab_2018-05-30_17:24:04 doesn't exist. But I don't know why!
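A quick way to verify that is to list snapshots for just that dataset (dataset name taken from the error above):

Code:
# list snapshots of the source dataset and its children; an empty result
# means the snapshot pve-zsync is trying to send does not exist
zfs list -t snapshot -r backups/subvol-904-disk-1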
 
Hi,

please send the output of these commands.

On the source server (server 2):
Code:
qm config 904
zfs list -t all 
pveversion -v

On the destination server (backup):
Code:
zfs list -t all
 
Hi,

I use LXC, not qm, so there is no qm config; the container config is posted below.

Server4 (source), zfs list -t all:
Code:
backups                                                   605G  1,17T   563G  /backups
backups/subvol-902-disk-1                                8,14G  11,9G  8,14G  /backups/subvol-902-disk-1
backups/subvol-903-disk-1                                13,4G  6,58G  13,4G  /backups/subvol-903-disk-1
backups/subvol-904-disk-1                                20,5G  19,5G  20,5G  /backups/subvol-904-disk-1
rpool                                                    96,2G   334G    96K  /rpool
rpool/ROOT                                               1,87G   334G    96K  /rpool/ROOT
rpool/ROOT/pve-1                                         1,87G   334G  1,77G  /
rpool/ROOT/pve-1@zfs-auto-snap_frequent-2018-05-30-1030  11,4M      -  1,77G  -
rpool/ROOT/pve-1@zfs-auto-snap_daily-2018-05-30-2319     9,53M      -  1,77G  -
rpool/ROOT/pve-1@zfs-auto-snap_daily-2018-05-31-2319     5,31M      -  1,77G  -
rpool/ROOT/pve-1@zfs-auto-snap_monthly-2018-05-31-2344   5,39M      -  1,77G  -
rpool/data                                               89,7G   334G   104K  /rpool/data
rpool/data/subvol-601-disk-1                             2,01G  48,0G  2,01G  /rpool/data/subvol-601-disk-1
rpool/data/subvol-602-disk-1                             2,31G  17,7G  2,31G  /rpool/data/subvol-602-disk-1
rpool/data/subvol-603-disk-1                             1,86G  48,1G  1,86G  /rpool/data/subvol-603-disk-1
rpool/data/subvol-604-disk-1                             2,23G  17,8G  2,23G  /rpool/data/subvol-604-disk-1
rpool/data/subvol-605-disk-1                             1,77G  48,2G  1,77G  /rpool/data/subvol-605-disk-1
rpool/data/subvol-700-disk-1                             11,0G  9,04G  11,0G  /rpool/data/subvol-700-disk-1
rpool/data/subvol-701-disk-1                             57,2G  12,8G  57,2G  /rpool/data/subvol-701-disk-1
rpool/data/subvol-702-disk-1                             1,58G  48,4G  1,58G  /rpool/data/subvol-702-disk-1
rpool/data/subvol-703-disk-1                             1,35G  48,7G  1,35G  /rpool/data/subvol-703-disk-1
rpool/data/subvol-704-disk-1                             1,75G  48,3G  1,75G  /rpool/data/subvol-704-disk-1
rpool/data/subvol-705-disk-1                             3,31G  46,7G  3,31G  /rpool/data/subvol-705-disk-1
rpool/data/subvol-799-disk-1                             2,24G  47,8G  2,24G  /rpool/data/subvol-799-disk-1
rpool/data/subvol-901-disk-1                             1,06G  18,9G  1,06G  /rpool/data/subvol-901-disk-1
rpool/swap                                               4,25G   335G  2,73G  -

pveversion -v

Code:
proxmox-ve: 5.2-2 (running kernel: 4.10.17-4-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.10.17-4-pve: 4.10.17-24
pve-kernel-4.10.17-1-pve: 4.10.17-18
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: not correctly installed
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-9
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
pve-zsync: 1.6-15
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9

Server 3 (backup storage), zfs list -t all. The 101, 102, etc. datasets come from server5, which works fine:

Code:
hdd                                                      1,32T  2,19T  1,32T  /hdd
hdd/local                                                  24K  2,19T    24K  /hdd/local
hdd/subvol-106-disk-1                                      24K  2,19T    24K  /hdd/subvol-106-disk-1
rpool                                                     218G   642G    96K  /rpool
rpool/ROOT                                               1,28G   642G    96K  /rpool/ROOT
rpool/ROOT/pve-1                                         1,28G   642G  1,28G  /
rpool/data                                                 96K   642G    96K  /rpool/data
rpool/server4                                              96K   642G    96K  /rpool/server4
rpool/server5                                              96K   642G    96K  /rpool/server5
rpool/subvol-101-disk-1                                   149G   642G   149G  /rpool/subvol-101-disk-1
rpool/subvol-101-disk-1@rep_domain_2018-05-29_18:48:43   2,35M      -   149G  -
rpool/subvol-102-disk-1                                  36,3G   642G  36,3G  /rpool/subvol-102-disk-1
rpool/subvol-102-disk-1@rep_domain_2018-05-29_23:36:51    540K      -  36,3G  -
rpool/subvol-103-disk-1                                  17,4G   642G  17,4G  /rpool/subvol-103-disk-1
rpool/subvol-103-disk-1@rep_domain_2018-05-30_00:51:18    316K      -  17,4G  -
rpool/subvol-106-disk-1                                  4,92G   642G  4,92G  /rpool/subvol-106-disk-1
rpool/subvol-106-disk-1@rep_domain_2018-05-30_01:39:28    284K      -  4,92G  -
rpool/subvol-121-disk-1                                  4,87G   642G  4,87G  /rpool/subvol-121-disk-1
rpool/subvol-121-disk-1@rep_domain_2018-05-29_18:30:01    380K      -  4,87G  -
rpool/swap                                               4,25G   647G    64K  -
rpool/vztmp                                                96K   642G    96K  /mnt/vztmp

pveversion -v

Code:
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-9
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
pve-zsync: 1.6-15
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9
 
Container config (pct config 904):

Code:
arch: amd64
cpulimit: 3
hostname: DOMAIN
memory: 4000
net0: name=eth0,bridge=vmbr0,gw=145.239.11.254,hwaddr=02:00:00:4b:1a:bc,ip=IP/32,type=veth
onboot: 1
ostype: ubuntu
parent: daily_5_pvebackup
rootfs: vzhdd:subvol-904-disk-1,size=40G
swap: 2048
lxc.cgroup.devices.allow: a
lxc.cap.drop:
Thanks.
 
Hi,

the problem is that the destination path is wrong.

Code:
 zfs send -- backups/subvol-904-disk-1@rep_gitlab_2018-05-30_17:24:04 | ssh -o 'BatchMode=yes' root@10.0.0.1 -- zfs recv -F -- rpool/server4/subvol-904-disk-1

This path is wrong:
rpool/server4/subvol-904-disk-1
and it should be:
rpool/server2/subvol-904-disk-1

But there is no server2 dataset on the backup storage.
Can you please send the storage config from the source server:
Code:
cat /etc/pve/storage.cfg
and the crontab:
Code:
 cat /etc/cron.d/pve-zsync
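(For reference: pve-zsync keeps one cron line per job in /etc/cron.d/pve-zsync, and the --dest recorded there determines the receive path on the backup server. The entry below is only an illustration of the expected shape, with values copied from the manual sync command above, not from this system:)

Code:
# illustrative shape of a pve-zsync cron entry; values mirror the manual
# sync command and are not taken from the actual crontab
*/15 * * * * root pve-zsync sync --source 904 --dest 10.0.0.1:rpool/server2 --name gitlab --maxsnap 48 --method ssh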
 
Not really. There is no server2: I'm using server4 and server5 for production, and server6 for backup storage.

Code:
root@server4:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content images,vztmpl,rootdir,iso
    maxfiles 0

zfspool: backups
    pool backups
    content images,rootdir
    sparse 0

zfspool: vz
    pool rpool/data
    content rootdir,images
    sparse 0

zfspool: vzhdd
    pool backups
    content images,rootdir
    sparse 0

Code:
root@server6:~# cat /etc/pve/storage.cfg
dir: hdd
    path /hdd
    content rootdir,images

dir: local
    path /var/lib/vz
    content vztmpl,iso,images,rootdir
    maxfiles 0

zfspool: vz
    pool rpool/server4
    content rootdir,images
    sparse 0

zfspool: vzhdd
    pool rpool/server4
    content images,rootdir
    sparse 0
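(One thing these configs do explain: on server4 the container's rootfs storage vzhdd points at the pool backups, which is why the failing send uses backups/subvol-904-disk-1 as its source dataset. A minimal way to confirm that mapping on server4, assuming the paths above:)

Code:
# show the pool behind the 'vzhdd' storage, then confirm the dataset
# pve-zsync sends from actually exists on that pool
grep -A3 'zfspool: vzhdd' /etc/pve/storage.cfg
zfs list -o name backups/subvol-904-disk-1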