Hello all,
I have now tested pve-zsync to take backups onto my new backup server, but I have trouble understanding the space used by the backup volume.
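For reference, the sync job was created roughly like this (the hostname and maxsnap value here are placeholders, not my exact settings):
Code:
# run on the node; pve-zsync snapshots the VM disks and sends them to the backup server
pve-zsync create --source 101 --dest backupserver:storage/backups/node --verbose --maxsnap 7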
Here is the pool config of the node (output of zpool get all rpool and zpool status rpool):
Code:
NAME PROPERTY VALUE SOURCE
rpool size 6.94T -
rpool capacity 53% -
rpool altroot - default
rpool health ONLINE -
rpool guid 10070505588150836828 -
rpool version - default
rpool bootfs rpool/ROOT/pve-1 local
rpool delegation on default
rpool autoreplace off default
rpool cachefile - default
rpool failmode wait default
rpool listsnapshots off default
rpool autoexpand off default
rpool dedupditto 0 default
rpool dedupratio 1.00x -
rpool free 3.25T -
rpool allocated 3.69T -
rpool readonly off -
rpool ashift 12 local
rpool comment - default
rpool expandsize - -
rpool freeing 0 -
rpool fragmentation 21% -
rpool leaked 0 -
rpool multihost off default
rpool feature@async_destroy enabled local
rpool feature@empty_bpobj active local
rpool feature@lz4_compress active local
rpool feature@multi_vdev_crash_dump enabled local
rpool feature@spacemap_histogram active local
rpool feature@enabled_txg active local
rpool feature@hole_birth active local
rpool feature@extensible_dataset active local
rpool feature@embedded_data active local
rpool feature@bookmarks enabled local
rpool feature@filesystem_limits enabled local
rpool feature@large_blocks enabled local
rpool feature@large_dnode enabled local
rpool feature@sha512 enabled local
rpool feature@skein enabled local
rpool feature@edonr enabled local
rpool feature@userobj_accounting active local
Code:
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
sda3 ONLINE 0 0 0
sdb3 ONLINE 0 0 0
sdc3 ONLINE 0 0 0
sdd3 ONLINE 0 0 0
sde3 ONLINE 0 0 0
sdf3 ONLINE 0 0 0
sdg3 ONLINE 0 0 0
sdh3 ONLINE 0 0 0
And here is the pool config of the backup server (zpool get all storage and zpool status storage):
Code:
storage size 72.5T -
storage capacity 30% -
storage altroot - default
storage health ONLINE -
storage guid 11321045570636972644 -
storage version - default
storage bootfs - default
storage delegation on default
storage autoreplace off default
storage cachefile - default
storage failmode wait default
storage listsnapshots off default
storage autoexpand off default
storage dedupditto 0 default
storage dedupratio 1.00x -
storage free 50.1T -
storage allocated 22.4T -
storage readonly off -
storage ashift 12 local
storage comment - default
storage expandsize - -
storage freeing 0 -
storage fragmentation 0% -
storage leaked 0 -
storage multihost off default
storage feature@async_destroy enabled local
storage feature@empty_bpobj active local
storage feature@lz4_compress active local
storage feature@multi_vdev_crash_dump enabled local
storage feature@spacemap_histogram active local
storage feature@enabled_txg active local
storage feature@hole_birth active local
storage feature@extensible_dataset active local
storage feature@embedded_data active local
storage feature@bookmarks enabled local
storage feature@filesystem_limits enabled local
storage feature@large_blocks enabled local
storage feature@large_dnode enabled local
storage feature@sha512 enabled local
storage feature@skein enabled local
storage feature@edonr enabled local
storage feature@userobj_accounting active local
Code:
pool: storage
state: ONLINE
scan: scrub repaired 0B in 0h0m with 0 errors on Sun Jan 13 00:24:03 2019
config:
NAME STATE READ WRITE CKSUM
storage ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
sda ONLINE 0 0 0
sdb ONLINE 0 0 0
sdc ONLINE 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
sdf ONLINE 0 0 0
sdg ONLINE 0 0 0
sdh ONLINE 0 0 0
cache
nvme0n1 ONLINE 0 0 0
Here is the volume info from the node:
Code:
root@node:/etc/cron.d# zfs get all rpool/data/vm-101-disk-0
NAME PROPERTY VALUE SOURCE
rpool/data/vm-101-disk-0 type volume -
rpool/data/vm-101-disk-0 creation Fri Jan 25 18:22 2019 -
rpool/data/vm-101-disk-0 used 59.3G -
rpool/data/vm-101-disk-0 available 2.55T -
rpool/data/vm-101-disk-0 referenced 58.9G -
rpool/data/vm-101-disk-0 compressratio 1.13x -
rpool/data/vm-101-disk-0 reservation none default
rpool/data/vm-101-disk-0 volsize 100G local
rpool/data/vm-101-disk-0 volblocksize 8K default
rpool/data/vm-101-disk-0 checksum on default
rpool/data/vm-101-disk-0 compression on inherited from rpool
rpool/data/vm-101-disk-0 readonly off default
rpool/data/vm-101-disk-0 createtxg 18685 -
rpool/data/vm-101-disk-0 copies 1 default
rpool/data/vm-101-disk-0 refreservation none default
rpool/data/vm-101-disk-0 guid 1003074226894478923 -
rpool/data/vm-101-disk-0 primarycache all default
rpool/data/vm-101-disk-0 secondarycache all default
rpool/data/vm-101-disk-0 usedbysnapshots 437M -
rpool/data/vm-101-disk-0 usedbydataset 58.9G -
rpool/data/vm-101-disk-0 usedbychildren 0B -
rpool/data/vm-101-disk-0 usedbyrefreservation 0B -
rpool/data/vm-101-disk-0 logbias latency default
rpool/data/vm-101-disk-0 dedup off default
rpool/data/vm-101-disk-0 mlslabel none default
rpool/data/vm-101-disk-0 sync standard inherited from rpool
rpool/data/vm-101-disk-0 refcompressratio 1.13x -
rpool/data/vm-101-disk-0 written 381M -
rpool/data/vm-101-disk-0 logicalused 39.9G -
rpool/data/vm-101-disk-0 logicalreferenced 39.5G -
rpool/data/vm-101-disk-0 volmode default default
rpool/data/vm-101-disk-0 snapshot_limit none default
rpool/data/vm-101-disk-0 snapshot_count none default
rpool/data/vm-101-disk-0 snapdev hidden default
rpool/data/vm-101-disk-0 context none default
rpool/data/vm-101-disk-0 fscontext none default
rpool/data/vm-101-disk-0 defcontext none default
rpool/data/vm-101-disk-0 rootcontext none default
rpool/data/vm-101-disk-0 redundant_metadata all default
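For completeness, the pve-zsync snapshots on this volume can be listed as shown below (the same command works on the backup server with the storage/backups/node/... path instead):
Code:
zfs list -t snapshot -r rpool/data/vm-101-disk-0 -o name,used,referenced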
And the volume info from the backup server:
Code:
root@storage:~# zfs get all storage/backups/node/vm-101-disk-0
NAME PROPERTY VALUE SOURCE
storage/backups/node/vm-101-disk-0 type volume -
storage/backups/node/vm-101-disk-0 creation Fri Feb 1 1:00 2019 -
storage/backups/node/vm-101-disk-0 used 74.5G -
storage/backups/node/vm-101-disk-0 available 34.0T -
storage/backups/node/vm-101-disk-0 referenced 74.4G -
storage/backups/node/vm-101-disk-0 compressratio 1.13x -
storage/backups/node/vm-101-disk-0 reservation none default
storage/backups/node/vm-101-disk-0 volsize 100G local
storage/backups/node/vm-101-disk-0 volblocksize 8K default
storage/backups/node/vm-101-disk-0 checksum on default
storage/backups/node/vm-101-disk-0 compression lz4 inherited from storage
storage/backups/node/vm-101-disk-0 readonly off default
storage/backups/node/vm-101-disk-0 createtxg 382515 -
storage/backups/node/vm-101-disk-0 copies 1 default
storage/backups/node/vm-101-disk-0 refreservation none default
storage/backups/node/vm-101-disk-0 guid 13724468318431351622 -
storage/backups/node/vm-101-disk-0 primarycache all default
storage/backups/node/vm-101-disk-0 secondarycache all default
storage/backups/node/vm-101-disk-0 usedbysnapshots 103M -
storage/backups/node/vm-101-disk-0 usedbydataset 74.4G -
storage/backups/node/vm-101-disk-0 usedbychildren 0B -
storage/backups/node/vm-101-disk-0 usedbyrefreservation 0B -
storage/backups/node/vm-101-disk-0 logbias latency default
storage/backups/node/vm-101-disk-0 dedup off default
storage/backups/node/vm-101-disk-0 mlslabel none default
storage/backups/node/vm-101-disk-0 sync standard default
storage/backups/node/vm-101-disk-0 refcompressratio 1.13x -
storage/backups/node/vm-101-disk-0 written 0 -
storage/backups/node/vm-101-disk-0 logicalused 39.5G -
storage/backups/node/vm-101-disk-0 logicalreferenced 39.4G -
storage/backups/node/vm-101-disk-0 volmode default default
storage/backups/node/vm-101-disk-0 snapshot_limit none default
storage/backups/node/vm-101-disk-0 snapshot_count none default
storage/backups/node/vm-101-disk-0 snapdev hidden default
storage/backups/node/vm-101-disk-0 context none default
storage/backups/node/vm-101-disk-0 fscontext none default
storage/backups/node/vm-101-disk-0 defcontext none default
storage/backups/node/vm-101-disk-0 rootcontext none default
storage/backups/node/vm-101-disk-0 redundant_metadata all default
As you can see, the backup copy uses considerably more space: 74.5G on the backup server versus 59.3G on the node, even though logicalused is nearly identical on both sides (39.5G vs. 39.9G). Any ideas what causes this difference?
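In case it helps with diagnosing, the relevant space columns can be pulled side by side on both hosts like this (dataset paths taken from the outputs above):
Code:
# on the node
zfs list -o name,used,usedbysnapshots,usedbydataset,logicalused,volblocksize rpool/data/vm-101-disk-0
# on the backup server
zfs list -o name,used,usedbysnapshots,usedbydataset,logicalused,volblocksize storage/backups/node/vm-101-disk-0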
Thanks a lot in advance