ZFS replication gives "out of space"

greg

Greetings
I'm trying to run a replication job between two nodes of a cluster, like this:

Code:
pvesr run -id 111-2 -verbose

After transferring 173G, the process stops with "cannot receive incremental stream: out of space". The free space on the receiving end is 1.65T.
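For reference, I'm checking the free space on the receiving pool (also called ct_B on the target node) with something like:

Code:
zpool list ct_B
zfs list -o name,used,avail,refer ct_B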

What am I doing wrong?

Thanks in advance

Regards
 
please post the VM config, as well as 'zpool status' on both nodes, and 'zfs get all $DATASET' on both nodes for all disks of that VM.
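For example (run on both nodes; the dataset name to use for $DATASET can be taken from the output of 'zfs list' on each node):

Code:
zpool status
zfs get all $DATASET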
 
Thanks for your answer. Here is the info:


VM config:

arch: amd64
cores: 1
hostname: xxxxx
memory: 1000
net0: name=eth1,bridge=vmbr1,hwaddr=xxxx,ip=xxxxx,type=veth
net1: name=eth0,bridge=vmbr0,gw=xxxxx,hwaddr=xxxxx,ip=xxxx,type=veth
net2: name=eth3,bridge=vmbr0,firewall=1,gw=xxxx4,hwaddr=xxxx,ip=xxxx,type=veth
onboot: 1
ostype: debian
parent: daily_20181126
rootfs: ct_B:subvol-111-disk-1,size=1000G
swap: 4000
unprivileged: 1


I'm not sure about the meaning of the "parent" field; there are no more snapshots for this VM.
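(In case it helps, the snapshots that still exist for that disk can be listed with:)

Code:
zfs list -t snapshot -r ct_B/ct/subvol-111-disk-1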

On the source:

# zpool status
pool: ct_B
state: ONLINE
scan: scrub repaired 0B in 16h45m with 0 errors on Sun Jul 12 17:09:03 2020
config:

NAME                            STATE   READ WRITE CKSUM
ct_B                            ONLINE     0     0     0
  ata-HGST_HUS7240xxxxxX-part1  ONLINE     0     0     0
  ata-HGST_HUS72402xxxxxxpart1  ONLINE     0     0     0



# zfs get all ct_B/ct/subvol-111-disk-1
NAME PROPERTY VALUE SOURCE
ct_B/ct/subvol-111-disk-1 type filesystem -
ct_B/ct/subvol-111-disk-1 creation mar. oct. 30 14:43 2018 -
ct_B/ct/subvol-111-disk-1 used 1,83T -
ct_B/ct/subvol-111-disk-1 available 893G -
ct_B/ct/subvol-111-disk-1 referenced 1,82T -
ct_B/ct/subvol-111-disk-1 compressratio 1.00x -
ct_B/ct/subvol-111-disk-1 mounted yes -
ct_B/ct/subvol-111-disk-1 quota none default
ct_B/ct/subvol-111-disk-1 reservation none default
ct_B/ct/subvol-111-disk-1 recordsize 128K default
ct_B/ct/subvol-111-disk-1 mountpoint /ct_B/ct/subvol-111-disk-1 default
ct_B/ct/subvol-111-disk-1 sharenfs off default
ct_B/ct/subvol-111-disk-1 checksum on default
ct_B/ct/subvol-111-disk-1 compression off default
ct_B/ct/subvol-111-disk-1 atime on default
ct_B/ct/subvol-111-disk-1 devices on default
ct_B/ct/subvol-111-disk-1 exec on default
ct_B/ct/subvol-111-disk-1 setuid on default
ct_B/ct/subvol-111-disk-1 readonly off default
ct_B/ct/subvol-111-disk-1 zoned off default
ct_B/ct/subvol-111-disk-1 snapdir hidden default
ct_B/ct/subvol-111-disk-1 aclinherit restricted default
ct_B/ct/subvol-111-disk-1 createtxg 176767 -
ct_B/ct/subvol-111-disk-1 canmount on default
ct_B/ct/subvol-111-disk-1 xattr on default
ct_B/ct/subvol-111-disk-1 copies 1 default
ct_B/ct/subvol-111-disk-1 version 5 -
ct_B/ct/subvol-111-disk-1 utf8only off -
ct_B/ct/subvol-111-disk-1 normalization none -
ct_B/ct/subvol-111-disk-1 casesensitivity sensitive -
ct_B/ct/subvol-111-disk-1 vscan off default
ct_B/ct/subvol-111-disk-1 nbmand off default
ct_B/ct/subvol-111-disk-1 sharesmb off default
ct_B/ct/subvol-111-disk-1 refquota none default
ct_B/ct/subvol-111-disk-1 refreservation none default
ct_B/ct/subvol-111-disk-1 guid 947576344036297752 -
ct_B/ct/subvol-111-disk-1 primarycache all default
ct_B/ct/subvol-111-disk-1 secondarycache all default
ct_B/ct/subvol-111-disk-1 usedbysnapshots 11,1G -
ct_B/ct/subvol-111-disk-1 usedbydataset 1,82T -
ct_B/ct/subvol-111-disk-1 usedbychildren 0B -
ct_B/ct/subvol-111-disk-1 usedbyrefreservation 0B -
ct_B/ct/subvol-111-disk-1 logbias latency default
ct_B/ct/subvol-111-disk-1 dedup off default
ct_B/ct/subvol-111-disk-1 mlslabel none default
ct_B/ct/subvol-111-disk-1 sync standard default
ct_B/ct/subvol-111-disk-1 dnodesize legacy default
ct_B/ct/subvol-111-disk-1 refcompressratio 1.00x -
ct_B/ct/subvol-111-disk-1 written 4,50M -
ct_B/ct/subvol-111-disk-1 logicalused 1,83T -
ct_B/ct/subvol-111-disk-1 logicalreferenced 1,82T -
ct_B/ct/subvol-111-disk-1 volmode default default
ct_B/ct/subvol-111-disk-1 filesystem_limit none default
ct_B/ct/subvol-111-disk-1 snapshot_limit none default
ct_B/ct/subvol-111-disk-1 filesystem_count none default
ct_B/ct/subvol-111-disk-1 snapshot_count none default
ct_B/ct/subvol-111-disk-1 snapdev hidden default
ct_B/ct/subvol-111-disk-1 acltype off default
ct_B/ct/subvol-111-disk-1 context none default
ct_B/ct/subvol-111-disk-1 fscontext none default
ct_B/ct/subvol-111-disk-1 defcontext none default
ct_B/ct/subvol-111-disk-1 rootcontext none default
ct_B/ct/subvol-111-disk-1 relatime off default
ct_B/ct/subvol-111-disk-1 redundant_metadata all default
ct_B/ct/subvol-111-disk-1 overlay off default

On the target:

# zpool status
pool: ct_B
state: ONLINE
scan: scrub repaired 0B in 0 days 01:49:20 with 0 errors on Sun Jul 12 02:13:22 2020
config:

NAME                              STATE   READ WRITE CKSUM
ct_B                              ONLINE     0     0     0
  raidz1-0                        ONLINE     0     0     0
    ata-HGST_HUS72602xxxxx-part3  ONLINE     0     0     0
    ata-HGST_HUS72602xxxxx-part3  ONLINE     0     0     0
    ata-HGST_HUS72602xxxxx-part3  ONLINE     0     0     0


On the target, the ZFS filesystem doesn't exist prior to the sync.
 
Your source dataset is 1.8T big.
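As a rough sanity check, the receiving pool needs at least as much free space as the source dataset actually uses; the two numbers can be compared with something like:

Code:
# on the source node
zfs get used,referenced ct_B/ct/subvol-111-disk-1
# on the target node
zfs get available ct_B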
 
Yes, the target is supposed to have 2T more free... I'm assuming there's a background copy process I missed somehow. I created the replication task in the GUI, so I guess it was automatically queued and my manual run is interfering with it.
What's really weird is that I only ran it manually because the GUI told me the task had failed for some obscure reason ("command 'set -o pipefail && pvesm export").
I will let it finish and see...
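In the meantime I'm keeping an eye on the scheduled job with:

Code:
pvesr status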

Thanks a lot!
 
