Hello,
we are having a problem with host replication.
The host has a 16 GB disk which is about 25% full, but when replication starts, it fails with an error after roughly 2 GB have been transferred:

cannot receive incremental stream: destination rpool/data/subvol-150-disk-0 space quota exceeded

The error is about a space quota, but we have not assigned any space quota.
Where could the problem be?
Thank you in advance.

The full replication log:
2021-04-22 11:25:02 150-0: start replication job
2021-04-22 11:25:02 150-0: guest => CT 150, running => 1
2021-04-22 11:25:02 150-0: volumes => local-zfs:subvol-150-disk-0
2021-04-22 11:25:02 150-0: freeze guest filesystem
2021-04-22 11:25:02 150-0: create snapshot '__replicate_150-0_1619083500__' on local-zfs:subvol-150-disk-0
2021-04-22 11:25:02 150-0: thaw guest filesystem
2021-04-22 11:25:02 150-0: using secure transmission, rate limit: 10 MByte/s
2021-04-22 11:25:02 150-0: incremental sync 'local-zfs:subvol-150-disk-0' (__replicate_150-0_1617300025__ => __replicate_150-0_1619083500__)
2021-04-22 11:25:02 150-0: using a bandwidth limit of 10000000 bps for transferring 'local-zfs:subvol-150-disk-0'
2021-04-22 11:25:03 150-0: send from @__replicate_150-0_1617300025__ to rpool/data/subvol-150-disk-0@__replicate_150-0_1619083500__ estimated size is 2.01G
2021-04-22 11:25:03 150-0: total estimated size is 2.01G
2021-04-22 11:25:04 150-0: TIME SENT SNAPSHOT rpool/data/subvol-150-disk-0@__replicate_150-0_1619083500__
2021-04-22 11:25:04 150-0: 11:25:04 14.6M rpool/data/subvol-150-disk-0@__replicate_150-0_1619083500__
2021-04-22 11:25:05 150-0: 11:25:05 24.1M rpool/data/subvol-150-disk-0@__replicate_150-0_1619083500__
2021-04-22 11:25:06 150-0: 11:25:06 33.6M rpool/data/subvol-150-disk-0@__replicate_150-0_1619083500__
...
2021-04-22 11:28:39 150-0: 11:28:39 2.02G rpool/data/subvol-150-disk-0@__replicate_150-0_1619083500__
2021-04-22 11:28:40 150-0: 11:28:40 2.02G rpool/data/subvol-150-disk-0@__replicate_150-0_1619083500__
2021-04-22 11:28:41 150-0: 11:28:41 2.02G rpool/data/subvol-150-disk-0@__replicate_150-0_1619083500__
2021-04-22 11:28:47 150-0: cannot receive incremental stream: destination rpool/data/subvol-150-disk-0 space quota exceeded.
2021-04-22 11:28:47 150-0: cannot rollback 'rpool/data/subvol-150-disk-0': out of space
2021-04-22 11:28:47 150-0: command 'zfs recv -F -- rpool/data/subvol-150-disk-0' failed: exit code 1
2021-04-22 11:28:47 150-0: delete previous replication snapshot '__replicate_150-0_1619083500__' on local-zfs:subvol-150-disk-0
2021-04-22 11:28:47 150-0: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:subvol-150-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_150-0_1619083500__ -base __replicate_150-0_1617300025__ | /usr/bin/cstream -t 10000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=backup' root@192.168.0.18 -- pvesm import local-zfs:subvol-150-disk-0 zfs - -with-snapshots 1 -allow-rename 0 -base __replicate_150-0_1617300025__' failed: exit code 1
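For what it is worth, the timing matches the 10 MByte/s rate limit: the estimated 2.01 GiB is roughly 2160 MB, and at 10 MB/s that is about 216 s (roughly 3.6 minutes), which corresponds to the span from 11:25:03 to 11:28:41 in the log. So the whole estimated incremental stream seems to have been sent before the receive on the destination side failed.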
root@prox4:~$ zfs get space rpool/data/subvol-150-disk-0
NAME                          PROPERTY              VALUE                         SOURCE
rpool/data/subvol-150-disk-0  name                  rpool/data/subvol-150-disk-0  -
rpool/data/subvol-150-disk-0  available             12.2G                         -
rpool/data/subvol-150-disk-0  used                  3.82G                         -
rpool/data/subvol-150-disk-0  usedbysnapshots       16.5M                         -
rpool/data/subvol-150-disk-0  usedbydataset         3.80G                         -
rpool/data/subvol-150-disk-0  usedbyrefreservation  0B                            -
rpool/data/subvol-150-disk-0  usedbychildren        0B                            -
root@prox4:~$ zfs get quota rpool/data/subvol-150-disk-0
NAME                          PROPERTY  VALUE  SOURCE
rpool/data/subvol-150-disk-0  quota     none   default
root@backup:~$ zfs get quota rpool/data/subvol-150-disk-0
NAME                          PROPERTY  VALUE  SOURCE
rpool/data/subvol-150-disk-0  quota     none   default
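Since zfs get quota reports none on both nodes, the next thing we plan to check are the other space-limiting properties. As far as we understand, Proxmox sets refquota (not quota) on container subvols from the size= option in the container config (shown below), so that may be the limit the receive is hitting. Something along these lines on both prox4 and backup (standard ZFS properties, our dataset path):

zfs get quota,refquota,reservation,refreservation,used,available rpool/data/subvol-150-disk-0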
root@prox4:~$ cat /etc/pve/nodes/prox4/lxc/150.conf
#192.168.0.65
#
#state 27/11/2019%3A asi na tom bezi zaznam packetu z rPi na Barakove a Milicove ohledne prejezdu LCA; taky webova aplikace od @lmezl
arch: amd64
cores: 1
hostname: lxc-lca
memory: 1024
nameserver: 192.168.7.2
net0: name=eth0,bridge=vmbr0,gw=192.168.0.1,hwaddr=5E:E0:C4:3A:6C:B2,ip=192.168.0.65/24,ip6=auto,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs:subvol-150-disk-0,size=16G
searchdomain: lan.cutter.cz
startup: up=10
swap: 1024
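The failed receive also could not roll back ("out of space"), so one more thing we intend to look at is how much space the existing replication snapshots occupy inside the subvol on the backup node. A sketch of the check we have in mind (standard zfs list options, run on backup):

zfs list -t snapshot -r -o name,used,referenced rpool/data/subvol-150-disk-0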