Very odd issue with pve-zsync?

killmasta93

Hi, I was wondering if someone else has had this issue before.
pve-zsync was working fine, but then it stopped working. I ran the command manually and this is what I got.

What I never knew is that pve-zsync also copies the config.


Code:
send from @rep_drpandora_2021-06-10_21:01:55 to rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35 estimated size is 38.2M
total estimated size is 38.2M
TIME        SENT   SNAPSHOT rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:39   2.08M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:40   3.14M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:41   4.74M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:42   6.37M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:43   7.99M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:44   9.62M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:45   11.2M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:46   12.9M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:47   14.5M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:48   16.1M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:49   17.7M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:50   19.4M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:51   21.0M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:52   22.6M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:53   24.2M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:54   25.8M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:55   27.2M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:56   29.0M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:57   30.5M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:58   32.1M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:42:59   33.7M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:43:00   35.4M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35
22:43:01   36.9M   rpool/data/vm-157-disk-0@rep_drpandora_2021-06-10_22:42:35


Job --source 157 --name drpandora got an ERROR!!!
ERROR Message:
COMMAND:
        scp -- /etc/pve/local/qemu-server/157.conf 'root@[190.xx.xx.xx]:/var/lib/pve-zsync/casa/157.conf.qemu.rep_drpandora_2021-06-10_22:42:35'
GET ERROR:
        Connection closed by 190.xx.xx.xx port 22
lost connection
 
Hi,
could you share the pve-zsync command you used? Do you get the same error when executing just the scp command? What about other scp commands to the server? Please also share your pveversion -v.
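For example, you could run the failing scp by hand from the sending node; 190.xx.xx.xx below stands in for the redacted target IP, and the .test file name is just an arbitrary example:

Code:
# plain SSH login test to the target host
ssh root@190.xx.xx.xx 'echo ok'

# the scp from the error message, run by hand
scp -- /etc/pve/local/qemu-server/157.conf 'root@[190.xx.xx.xx]:/var/lib/pve-zsync/casa/157.conf.test'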
 
Thanks for the reply. What's odd is that I rebooted the remote server and it started working again.

The command:

Code:
0 * * * * root pve-zsync sync --source 157 --dest 190.145.7.130:rpool/data/casa --name drpandora --maxsnap 7 --method ssh --source-user root --dest-user root

Remote server:


Code:
root@prometheus2:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
pve-zsync: 2.2
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

The server that is sending the data:

Code:
root@prometheus:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
pve-zsync: 2.0-3
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 
@Fabian_E Hi there, I wanted to post again because a month later I got the same issue, and now a reboot no longer fixes it. So I updated to 6.4-13.
What I'm thinking is that the server side might be receiving a lot of SSH connections for the ZFS transfers. Is there a way to increase the timeout?

Code:
Connection closed by 190.xx.xx port 22
COMMAND:
    ssh root@190.xx.xx.xx -- rm -f -- /var/lib/pve-zsync/syp/107.conf.qemu.rep_drpzeus_2021-07-11_06:09:04
GET ERROR:
    Connection closed by 190.145.7.130 port 22

Job --source 107 --name drpzeus got an ERROR!!!
ERROR Message:

I run it manually; sometimes it works and sometimes I get the timeout error.
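For what it's worth, one way to see how often the connection actually drops is to probe the destination in a loop for a while (190.xx.xx.xx again stands in for the real IP):

Code:
# log a line every minute; failures show up as "SSH FAILED"
while true; do
    date
    ssh -o BatchMode=yes -o ConnectTimeout=10 root@190.xx.xx.xx 'echo ok' || echo "SSH FAILED"
    sleep 60
done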

Thank you
 
Is your network connection stable? Anything special in journalctl -u ssh.service from around the time of the issue on the remote host? You could try to increase ClientAliveCountMax and ClientAliveInterval on the remote host (and restart the sshd). See man 5 sshd_config for more information.
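As a rough sketch, with example values (pick whatever fits your environment):

Code:
# /etc/ssh/sshd_config on the remote host
ClientAliveInterval 60
ClientAliveCountMax 10

# apply the change and check the log around the failure time
systemctl restart ssh
journalctl -u ssh.service --since "1 hour ago"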
 
Thanks for the reply. What I did was delete all the snapshots and the ZFS disk on the remote site and redo it. So far the sync is finishing, but it's so odd that this happened.
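Roughly, the cleanup looked like this (dataset and job names taken from the error messages above; double-check the paths before destroying anything):

Code:
# on the remote host: remove the replicated dataset together with its snapshots
zfs destroy -r rpool/data/syp/vm-107-disk-0

# on the source host: run the job again so it starts over with a fresh send
pve-zsync sync --source 107 --dest 190.xx.xx.xx:rpool/data/syp --name drpzeus --maxsnap 7 --method ssh --source-user root --dest-user root --verbose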
 
@Fabian_E So today I'm getting this odd issue again; the alert shows it fails:
Code:
COMMAND:
    zfs send -i rpool/data/vm-106-disk-0@rep_drpolympus_2021-08-10_16:05:17 -- rpool/data/vm-106-disk-0@rep_drpolympus_2021-08-10_17:49:03 | ssh -o 'BatchMode=yes' root@190.xxx.xxx -- zfs recv -F -- rpool/data/syp/vm-106-disk-0
GET ERROR:
    Connection closed by 190.xxx.xx port 22

Job --source 106 --name drpolympus got an ERROR!!!
ERROR Message:

Also this:

Code:
COMMAND:
    scp -- /etc/pve/local/qemu-server/107.conf 'root@[190.xx.xx]:/var/lib/pve-zsync/syp/107.conf.qemu.rep_drpzeus_2021-08-10_16:12:14'
GET ERROR:
    Connection closed by 190.xxx.xxx port 22
lost connection

Job --source 107 --name drpzeus got an ERROR!!!
ERROR Message:

And this is my sshd config:

Code:
root@prometheus2:~# cat /etc/ssh/sshd_config
#    $OpenBSD: sshd_config,v 1.103 2018/04/09 20:41:22 tj Exp $

# This is the sshd server system-wide configuration file.  See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented.  Uncommented options override the
# default value.

#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::

#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed25519_key

# Ciphers and keying
#RekeyLimit default none

# Logging
#SyslogFacility AUTH
#LogLevel INFO

# Authentication:

#LoginGraceTime 2m
PermitRootLogin yes
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10

#PubkeyAuthentication yes

# Expect .ssh/authorized_keys2 to be disregarded by default in future.
#AuthorizedKeysFile    .ssh/authorized_keys .ssh/authorized_keys2

#AuthorizedPrincipalsFile none

#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody

# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes

# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no

# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no

# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication.  Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes

#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
PrintMotd no
#PrintLastLog yes
#TCPKeepAlive yes
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
#PidFile /var/run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none

# no default banner path
#Banner none

# Allow client to pass locale environment variables
AcceptEnv LANG LC_*

# override default of no subsystems
Subsystem    sftp    /usr/lib/openssh/sftp-server

# Example of overriding settings on a per-user basis
#Match User anoncvs
#    X11Forwarding no
#    AllowTcpForwarding no
#    PermitTTY no
#    ForceCommand cvs server
 
I checked the SSH logs on the remote site (prometheus2), and I see this:

Code:
Aug 10 22:49:57 prometheus2 sshd[4832]: Accepted publickey for root from 201.xx.xx..xx port 2288 ssh2: RSA SHA256:lKNW3g4Mo0HUKolRlv9EmSACaMfb/6xzRCjWDRLudGs
Aug 10 22:49:57 prometheus2 sshd[4832]: pam_unix(sshd:session): session opened for user root by (uid=0)
Aug 10 22:49:58 prometheus2 sshd[5318]: Accepted publickey for root from 201.xx.xx..xx port 10098 ssh2: RSA SHA256:lKNW3g4Mo0HUKolRlv9EmSACaMfb/6xzRCjWDRLudGs
Aug 10 22:49:58 prometheus2 sshd[5318]: pam_unix(sshd:session): session opened for user root by (uid=0)
Aug 10 22:49:59 prometheus2 sshd[5318]: Received disconnect from 201.xx.xx..xx port 10098:11: disconnected by user
Aug 10 22:49:59 prometheus2 sshd[5318]: Disconnected from user root 201.xx.xx..xx port 10098
Aug 10 22:49:59 prometheus2 sshd[5318]: pam_unix(sshd:session): session closed for user root

I also got this log from a manual sync run:

Code:
root@prometheus2:~# pve-zsync sync --source 110 --dest 190.xx.xx:rpool/data/syp --name drphercules --maxsnap 7 --verbose
send from @rep_drphercules_2021-08-10_22:46:12 to rpool/data/vm-110-disk-1@rep_drphercules_2021-08-10_22:49:03 estimated size is 250K
total estimated size is 250K
TIME        SENT   SNAPSHOT rpool/data/vm-110-disk-1@rep_drphercules_2021-08-10_22:49:03
send from @rep_drphercules_2021-08-10_20:08:52 to rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03 estimated size is 84.4M
total estimated size is 84.4M
TIME        SENT   SNAPSHOT rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:15   3.05M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:16   3.05M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:17   6.23M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:18   8.74M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:19   12.1M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:20   14.4M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:21   16.6M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:22   18.9M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:23   21.1M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:24   23.4M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:25   25.5M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:26   27.7M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:27   30.0M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:28   32.1M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:29   34.6M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:30   37.2M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:31   40.6M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:32   43.2M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:33   45.9M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:34   49.6M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:35   53.1M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:36   56.0M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:37   58.1M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:38   60.4M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:39   62.6M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:40   64.9M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:41   67.0M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:42   69.2M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:43   71.6M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
22:49:44   73.9M   rpool/data/vm-110-disk-0@rep_drphercules_2021-08-10_22:49:03
send from @rep_drphercules_2021-08-10_20:08:52 to rpool/data/vm-110-disk-2@rep_drphercules_2021-08-10_22:49:03 estimated size is 753K
total estimated size is 753K
TIME        SENT   SNAPSHOT rpool/data/vm-110-disk-2@rep_drphercules_2021-08-10_22:49:03
full send of rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03 estimated size is 2.25G
total estimated size is 2.25G
TIME        SENT   SNAPSHOT rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:00   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:01   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:02   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:03   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:04   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:05   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:06   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:07   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:08   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:09   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:10   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:11   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:12   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:13   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:14   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:15   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:16   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:17   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:18   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:19   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:20   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:21   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:22   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:23   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:24   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:25   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:26   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:27   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:28   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:29   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:30   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:31   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:32   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:33   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:34   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:35   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:36   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:37   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:38   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:39   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:40   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:41   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:42   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:43   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:44   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:45   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:46   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:47   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:48   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:49   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:50   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:51   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:52   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:53   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:54   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:55   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:56   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:57   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:58   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:52:59   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:00   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:01   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:02   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:03   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:04   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:05   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:06   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:07   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:08   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:09   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:10   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:11   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:12   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:13   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:14   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:15   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:16   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:17   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:18   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:19   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:20   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:21   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:22   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:23   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:24   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:25   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:26   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:27   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:28   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:29   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:30   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:31   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:32   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:33   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:34   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:35   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:36   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:37   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:38   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:39   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:40   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:41   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:42   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:43   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:44   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:45   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:46   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:47   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:48   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:49   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:50   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:51   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:52   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:53   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:54   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:55   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:56   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:57   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:58   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
22:53:59   44.6K   rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03
Job --source 110 --name drphercules got an ERROR!!!
ERROR Message:
COMMAND:
        zfs send -v -- rpool/data/vm-110-disk-3@rep_drphercules_2021-08-10_22:49:03 | ssh -o 'BatchMode=yes' root@190.zz.zz -- zfs recv -F -- rpool/data/syp/vm-110-disk-3
GET ERROR:
        Connection closed by 190.xx.xx port 22
 
