PVE-ZSYNC incorrect ZFS storage values?

killmasta93

Hi,
I was wondering if someone else has any ideas about this issue.
Currently I have pve-zsync running, but the values don't seem to add up.
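For context, the sync jobs look roughly like this (a hypothetical reconstruction; the actual commands aren't shown here, but the target path and the rep_bakzeus_* snapshot prefix match the outputs below):

Code:
# hypothetical pve-zsync job: replicate VM 100 from host 1 to
# rpool/data/nucleo on host 2, creating rep_bakzeus_* snapshots
pve-zsync create --source 100 --dest <IP host 2>:rpool/data/nucleo --name bakzeus --verbose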

Host 1, which sends to host 2:
Code:
NAME   PROPERTY              VALUE                  SOURCE
rpool  type                  filesystem             -
rpool  creation              Wed Dec 19 17:26 2018  -
rpool  used                  237G                   -
rpool  available             1.52T                  -
rpool  referenced            96K                    -
rpool  compressratio         1.21x                  -
rpool  mounted               yes                    -
rpool  quota                 none                   default
rpool  reservation           none                   default
rpool  recordsize            128K                   default
rpool  mountpoint            /rpool                 default
rpool  sharenfs              off                    default
rpool  checksum              on                     default
rpool  compression           on                     local
rpool  atime                 off                    local
rpool  devices               on                     default
rpool  exec                  on                     default
rpool  setuid                on                     default
rpool  readonly              off                    default
rpool  zoned                 off                    default
rpool  snapdir               hidden                 default
rpool  aclinherit            restricted             default
rpool  createtxg             1                      -
rpool  canmount              on                     default
rpool  xattr                 on                     default
rpool  copies                1                      default
rpool  version               5                      -
rpool  utf8only              off                    -
rpool  normalization         none                   -
rpool  casesensitivity       sensitive              -
rpool  vscan                 off                    default
rpool  nbmand                off                    default
rpool  sharesmb              off                    default
rpool  refquota              none                   default
rpool  refreservation        none                   default
rpool  guid                  18350704558894867567   -
rpool  primarycache          all                    default
rpool  secondarycache        all                    default
rpool  usedbysnapshots       0B                     -
rpool  usedbydataset         96K                    -
rpool  usedbychildren        237G                   -
rpool  usedbyrefreservation  0B                     -
rpool  logbias               latency                default
rpool  dedup                 off                    default
rpool  mlslabel              none                   default
rpool  sync                  disabled               local
rpool  dnodesize             legacy                 default
rpool  refcompressratio      1.00x                  -
rpool  written               96K                    -
rpool  logicalused           283G                   -
rpool  logicalreferenced     40K                    -
rpool  volmode               default                default
rpool  filesystem_limit      none                   default
rpool  snapshot_limit        none                   default
rpool  filesystem_count      none                   default
rpool  snapshot_count        none                   default
rpool  snapdev               hidden                 default
rpool  acltype               off                    default
rpool  context               none                   default
rpool  fscontext             none                   default
rpool  defcontext            none                   default
rpool  rootcontext           none                   default
rpool  relatime              off                    default
rpool  redundant_metadata    all                    default
rpool  overlay               off                    default

Code:
rpool/data/vm-100-disk-1  refreservation  none       default
rpool/data/vm-100-disk-1  compression     on         inherited from rpool
rpool/data/vm-100-disk-1  volblocksize    8K         default
rpool/data/vm-100-disk-1  used            76.6G      -
rpool/data/vm-100-disk-1  volsize         100G       local
rpool/data/vm-100-disk-2  refreservation  none       default
rpool/data/vm-100-disk-2  compression     on         inherited from rpool
rpool/data/vm-100-disk-2  volblocksize    8K         default
rpool/data/vm-100-disk-2  used            89.8G      -
rpool/data/vm-100-disk-2  volsize         932G       local
rpool/data/vm-101-disk-1  refreservation  none       default
rpool/data/vm-101-disk-1  compression     on         inherited from rpool
rpool/data/vm-101-disk-1  volblocksize    8K         default
rpool/data/vm-101-disk-1  used            957M       -
rpool/data/vm-101-disk-1  volsize         128G       local

Code:
rpool/data                 167G  1.52T    96K  /rpool/data
rpool/data/vm-100-disk-1  76.6G  1.52T  76.4G  -
rpool/data/vm-100-disk-2  89.7G  1.52T  88.2G  -
rpool/data/vm-101-disk-1   957M  1.52T   875M  -


And on host 2:

Code:
NAME                               PROPERTY        VALUE      SOURCE
rpool/data/nucleo/vm-100-disk-1    refreservation  none       default
rpool/data/nucleo/vm-100-disk-1    compression     on         inherited from rpool
rpool/data/nucleo/vm-100-disk-1    volblocksize    8K         default
rpool/data/nucleo/vm-100-disk-1    used            111G       -
rpool/data/nucleo/vm-100-disk-1    volsize         100G       local
rpool/data/nucleo/vm-100-disk-2    refreservation  none       default
rpool/data/nucleo/vm-100-disk-2    compression     on         inherited from rpool
rpool/data/nucleo/vm-100-disk-2    volblocksize    8K         default
rpool/data/nucleo/vm-100-disk-2    used            128G       -
rpool/data/nucleo/vm-100-disk-2    volsize         932G       local
rpool/data/nucleo/vm-101-disk-1    refreservation  none       default
rpool/data/nucleo/vm-101-disk-1    compression     on         inherited from rpool
rpool/data/nucleo/vm-101-disk-1    volblocksize    8K         default
rpool/data/nucleo/vm-101-disk-1    used            1.26G      -
rpool/data/nucleo/vm-101-disk-1    volsize         128G       local

Code:
NAME   PROPERTY                       VALUE                          SOURCE
rpool  size                           7.27T                          -
rpool  capacity                       5%                             -
rpool  altroot                        -                              default
rpool  health                         ONLINE                         -
rpool  guid                           7662771843657390416            -
rpool  version                        -                              default
rpool  bootfs                         rpool/ROOT/pve-1               local
rpool  delegation                     on                             default
rpool  autoreplace                    off                            default
rpool  cachefile                      -                              default
rpool  failmode                       wait                           default
rpool  listsnapshots                  off                            default
rpool  autoexpand                     off                            default
rpool  dedupditto                     0                              default
rpool  dedupratio                     1.00x                          -
rpool  free                           6.85T                          -
rpool  allocated                      422G                           -
rpool  readonly                       off                            -
rpool  ashift                         12                             local
rpool  comment                        -                              default
rpool  expandsize                     -                              -
rpool  freeing                        0                              -
rpool  fragmentation                  0%                             -
rpool  leaked                         0                              -
rpool  multihost                      off                            default
rpool  checkpoint                     -                              -
rpool  load_guid                      14600233907018335643           -
rpool  autotrim                       off                            default
rpool  feature@async_destroy          enabled                        local
rpool  feature@empty_bpobj            active                         local
rpool  feature@lz4_compress           active                         local
rpool  feature@multi_vdev_crash_dump  enabled                        local
rpool  feature@spacemap_histogram     active                         local
rpool  feature@enabled_txg            active                         local
rpool  feature@hole_birth             active                         local
rpool  feature@extensible_dataset     active                         local
rpool  feature@embedded_data          active                         local
rpool  feature@bookmarks              enabled                        local
rpool  feature@filesystem_limits      enabled                        local
rpool  feature@large_blocks           enabled                        local
rpool  feature@large_dnode            enabled                        local
rpool  feature@sha512                 enabled                        local
rpool  feature@skein                  enabled                        local
rpool  feature@edonr                  enabled                        local
rpool  feature@userobj_accounting     active                         local
rpool  feature@encryption             enabled                        local
rpool  feature@project_quota          active                         local
rpool  feature@device_removal         enabled                        local
rpool  feature@obsolete_counts        enabled                        local
rpool  feature@zpool_checkpoint       enabled                        local
rpool  feature@spacemap_v2            active                         local
rpool  feature@allocation_classes     enabled                        local
rpool  feature@resilver_defer         enabled                        local
rpool  feature@bookmark_v2            enabled                        local

Code:
rpool/data/nucleo                   240G  4.81T      140K  /rpool/data/nucleo
rpool/data/nucleo/vm-100-disk-1     111G  4.81T      111G  -
rpool/data/nucleo/vm-100-disk-2     128G  4.81T      128G  -
rpool/data/nucleo/vm-101-disk-1    1.26G  4.81T     1.24G  -
 
Hi,
what does zfs list -o space -t all say on both source and target?
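For reference, the USED column in that output is broken down per dataset, which makes it easy to see where the space actually goes:

Code:
# per dataset: USED = USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD
zfs list -o space -t all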
 
Thanks for the reply,

Host 1

Code:
root@prometheus:~# zfs list -o space -t all
NAME                                                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                                                         1.52T   238G        0B     96K             0B       238G
rpool/ROOT                                                    1.52T  60.7G        0B     96K             0B      60.7G
rpool/ROOT/pve-1                                              1.52T  60.7G        0B   60.7G             0B         0B
rpool/data                                                    1.52T   169G        0B     96K             0B       169G
rpool/data/vm-100-disk-1                                      1.52T  76.7G      212M   76.5G             0B         0B
rpool/data/vm-100-disk-1@pyznap_2020-10-12_10:00:01_frequent      -  1.56M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_11:00:01_frequent      -  1.35M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_12:00:02_frequent      -  1.08M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_13:00:02_frequent      -  1.98M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_14:00:01_frequent      -  1.70M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_15:00:02_frequent      -  1.42M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_16:00:01_frequent      -  1.26M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_17:00:01_frequent      -  1.48M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_18:00:02_frequent      -  1.48M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_19:00:01_frequent      -  1.50M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_20:00:01_frequent      -  1.60M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_21:00:01_frequent      -  1.52M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_22:00:01_frequent      -  1.76M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-12_23:00:01_frequent      -  1.79M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_00:00:01_frequent      -  1.50M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_01:00:01_frequent      -  1.32M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_02:00:01_frequent      -  1.34M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_03:00:02_frequent      -  1.45M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_04:00:01_frequent      -  1.60M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_05:00:01_frequent      -  1.56M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_06:00:01_frequent      -  1.58M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_07:00:01_frequent      -  1.33M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_08:00:02_frequent      -  1.34M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_09:00:01_frequent      -     0B         -       -              -          -
rpool/data/vm-100-disk-1@rep_bakzeus_2020-10-13_09:00:01          -     0B         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_10:00:01_frequent      -   344K         -       -              -          -
rpool/data/vm-100-disk-1@rep_bakzeus_2020-10-13_10:00:01          -   348K         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_11:00:01_frequent      -  1.05M         -       -              -          -
rpool/data/vm-100-disk-1@rep_bakzeus_2020-10-13_11:00:10          -  1.05M         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_12:00:01_frequent      -   416K         -       -              -          -
rpool/data/vm-100-disk-1@rep_bakzeus_2020-10-13_12:00:09          -   416K         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_13:00:01_frequent      -   252K         -       -              -          -
rpool/data/vm-100-disk-1@rep_bakzeus_2020-10-13_13:00:09          -   252K         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_14:00:01_frequent      -   548K         -       -              -          -
rpool/data/vm-100-disk-1@rep_bakzeus_2020-10-13_14:00:01          -   288K         -       -              -          -
rpool/data/vm-100-disk-1@pyznap_2020-10-13_15:00:02_frequent      -     0B         -       -              -          -
rpool/data/vm-100-disk-1@rep_bakzeus_2020-10-13_15:00:01          -     0B         -       -              -          -
rpool/data/vm-100-disk-2                                      1.52T  90.7G     2.71G   88.0G             0B         0B
rpool/data/vm-100-disk-2@pyznap_2020-10-12_10:00:01_frequent      -  34.0M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_11:00:01_frequent      -  18.8M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_12:00:02_frequent      -  16.9M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_13:00:02_frequent      -  19.3M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_14:00:01_frequent      -  19.3M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_15:00:02_frequent      -  18.4M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_16:00:01_frequent      -  18.3M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_17:00:01_frequent      -  19.2M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_18:00:02_frequent      -  18.8M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_19:00:01_frequent      -  23.8M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_20:00:01_frequent      -  22.2M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_21:00:01_frequent      -  21.0M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_22:00:01_frequent      -  38.9M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-12_23:00:01_frequent      -  21.3M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_00:00:01_frequent      -  21.4M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_01:00:01_frequent      -  18.1M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_02:00:01_frequent      -  17.4M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_03:00:02_frequent      -  20.6M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_04:00:01_frequent      -  24.2M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_05:00:01_frequent      -  18.7M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_06:00:01_frequent      -  19.0M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_07:00:01_frequent      -  17.7M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_08:00:02_frequent      -   268K         -       -              -          -
rpool/data/vm-100-disk-2@rep_bakzeus_2020-10-13_08:00:01          -   268K         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_09:00:01_frequent      -  21.3M         -       -              -          -
rpool/data/vm-100-disk-2@rep_bakzeus_2020-10-13_09:00:01          -  25.2M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_10:00:01_frequent      -  6.08M         -       -              -          -
rpool/data/vm-100-disk-2@rep_bakzeus_2020-10-13_10:00:01          -  4.80M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_11:00:01_frequent      -  1004K         -       -              -          -
rpool/data/vm-100-disk-2@rep_bakzeus_2020-10-13_11:00:10          -   996K         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_12:00:01_frequent      -  1.84M         -       -              -          -
rpool/data/vm-100-disk-2@rep_bakzeus_2020-10-13_12:00:09          -  1.91M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_13:00:01_frequent      -  1.08M         -       -              -          -
rpool/data/vm-100-disk-2@rep_bakzeus_2020-10-13_13:00:09          -  1.08M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_14:00:01_frequent      -  2.29M         -       -              -          -
rpool/data/vm-100-disk-2@rep_bakzeus_2020-10-13_14:00:01          -  2.29M         -       -              -          -
rpool/data/vm-100-disk-2@pyznap_2020-10-13_15:00:02_frequent      -  1.31M         -       -              -          -
rpool/data/vm-100-disk-2@rep_bakzeus_2020-10-13_15:00:01          -  1.40M         -       -              -          -
rpool/data/vm-101-disk-1                                      1.52T  1.26G      186M   1.08G             0B         0B
rpool/data/vm-101-disk-1@pyznap_2020-10-12_10:00:01_frequent      -  4.55M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_11:00:01_frequent      -  2.20M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_12:00:02_frequent      -  2.21M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_13:00:02_frequent      -  2.19M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_14:00:01_frequent      -  2.19M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_15:00:02_frequent      -  2.24M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_16:00:01_frequent      -  2.26M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_17:00:01_frequent      -  2.32M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_18:00:02_frequent      -  2.32M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_19:00:01_frequent      -  2.29M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_20:00:01_frequent      -  2.29M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_21:00:01_frequent      -  2.29M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_22:00:01_frequent      -  2.36M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-12_23:00:01_frequent      -  3.59M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_00:00:01_frequent      -  2.50M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_01:00:01_frequent      -  2.44M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_02:00:01_frequent      -  2.25M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_03:00:02_frequent      -  2.26M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_04:00:01_frequent      -  2.20M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_05:00:01_frequent      -  2.23M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_06:00:01_frequent      -  2.25M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_07:00:01_frequent      -  2.25M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_08:00:02_frequent      -  1.52M         -       -              -          -
rpool/data/vm-101-disk-1@rep_bakolympus_2020-10-13_08:00:53       -  1.52M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_09:00:01_frequent      -  2.34M         -       -              -          -
rpool/data/vm-101-disk-1@rep_bakolympus_2020-10-13_09:08:38       -     2M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_10:00:01_frequent      -  1.94M         -       -              -          -
rpool/data/vm-101-disk-1@rep_bakolympus_2020-10-13_10:07:00       -  1.96M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_11:00:01_frequent      -     0B         -       -              -          -
rpool/data/vm-101-disk-1@rep_bakolympus_2020-10-13_11:00:01       -     0B         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_12:00:01_frequent      -     0B         -       -              -          -
rpool/data/vm-101-disk-1@rep_bakolympus_2020-10-13_12:00:01       -     0B         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_13:00:01_frequent      -     0B         -       -              -          -
rpool/data/vm-101-disk-1@rep_bakolympus_2020-10-13_13:00:01       -     0B         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_14:00:01_frequent      -  2.30M         -       -              -          -
rpool/data/vm-101-disk-1@rep_bakolympus_2020-10-13_14:04:49       -  2.05M         -       -              -          -
rpool/data/vm-101-disk-1@pyznap_2020-10-13_15:00:02_frequent      -  1.95M         -       -              -          -
rpool/swap                                                    1.52T  8.50G        0B   6.98G          1.52G         0B
 
Host 2

Code:
root@prometheus2:~# zfs list -o space -t all
NAME                                                                  AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                                                                 4.81T   308G        0B    140K             0B       308G
rpool/ROOT                                                            4.81T  1.08G        0B    140K             0B      1.08G
rpool/ROOT/pve-1                                                      4.81T  1.08G        0B   1.08G             0B         0B
rpool/data                                                            4.81T   307G        0B    163K             0B       307G
rpool/data/nucleo                                                     4.81T   241G        0B    140K             0B       241G
rpool/data/nucleo/vm-100-disk-1                                       4.81T   111G      177M    111G             0B         0B
rpool/data/nucleo/vm-100-disk-1@rep_bakzeus_2020-10-13_09:00:01           -   115M         -       -              -          -
rpool/data/nucleo/vm-100-disk-1@rep_bakzeus_2020-10-13_10:00:01           -  5.80M         -       -              -          -
rpool/data/nucleo/vm-100-disk-1@rep_bakzeus_2020-10-13_11:00:10           -  5.85M         -       -              -          -
rpool/data/nucleo/vm-100-disk-1@rep_bakzeus_2020-10-13_12:00:09           -  5.00M         -       -              -          -
rpool/data/nucleo/vm-100-disk-1@rep_bakzeus_2020-10-13_13:00:09           -  5.87M         -       -              -          -
rpool/data/nucleo/vm-100-disk-1@rep_bakzeus_2020-10-13_14:00:01           -  5.94M         -       -              -          -
rpool/data/nucleo/vm-100-disk-1@rep_bakzeus_2020-10-13_15:00:01           -     0B         -       -              -          -
rpool/data/nucleo/vm-100-disk-2                                       4.81T   129G     1.51G    127G             0B         0B
rpool/data/nucleo/vm-100-disk-2@rep_bakzeus_2020-10-13_09:00:01           -   234M         -       -              -          -
rpool/data/nucleo/vm-100-disk-2@rep_bakzeus_2020-10-13_10:00:01           -   132M         -       -              -          -
rpool/data/nucleo/vm-100-disk-2@rep_bakzeus_2020-10-13_11:00:10           -   127M         -       -              -          -
rpool/data/nucleo/vm-100-disk-2@rep_bakzeus_2020-10-13_12:00:09           -   128M         -       -              -          -
rpool/data/nucleo/vm-100-disk-2@rep_bakzeus_2020-10-13_13:00:09           -  92.9M         -       -              -          -
rpool/data/nucleo/vm-100-disk-2@rep_bakzeus_2020-10-13_14:00:01           -  93.1M         -       -              -          -
rpool/data/nucleo/vm-100-disk-2@rep_bakzeus_2020-10-13_15:00:01           -     0B         -       -              -          -
rpool/data/nucleo/vm-101-disk-1                                       4.81T  1.58G     25.0M   1.56G             0B         0B
rpool/data/nucleo/vm-101-disk-1@rep_bakolympus_2020-10-13_09:08:38        -  3.54M         -       -              -          -
rpool/data/nucleo/vm-101-disk-1@rep_bakolympus_2020-10-13_10:07:00        -  3.03M         -       -              -          -
rpool/data/nucleo/vm-101-disk-1@rep_bakolympus_2020-10-13_11:00:01        -  3.10M         -       -              -          -
rpool/data/nucleo/vm-101-disk-1@rep_bakolympus_2020-10-13_12:00:01        -  3.59M         -       -              -          -
rpool/data/nucleo/vm-101-disk-1@rep_bakolympus_2020-10-13_13:00:01        -  3.43M         -       -              -          -
rpool/data/nucleo/vm-101-disk-1@rep_bakolympus_2020-10-13_14:04:49        -  3.50M         -       -              -          -
rpool/data/nucleo/vm-101-disk-1@rep_bakolympus_2020-10-13_15:04:32        -     0B         -       -              -          -
 
Could you also send the output of the following:

On host 1:
Code:
zpool get all rpool
zfs get all rpool/data/vm-100-disk-1
On host 2:
Code:
zfs get all rpool/data/nucleo/vm-100-disk-1
 
Thanks for the reply

Host 1

Code:
root@prometheus:~# zpool get all rpool
NAME   PROPERTY                       VALUE                          SOURCE
rpool  size                           1.81T                          -
rpool  capacity                       12%                            -
rpool  altroot                        -                              default
rpool  health                         ONLINE                         -
rpool  guid                           17841754175230960812           -
rpool  version                        -                              default
rpool  bootfs                         rpool/ROOT/pve-1               local
rpool  delegation                     on                             default
rpool  autoreplace                    off                            default
rpool  cachefile                      -                              default
rpool  failmode                       wait                           default
rpool  listsnapshots                  off                            default
rpool  autoexpand                     off                            default
rpool  dedupditto                     0                              default
rpool  dedupratio                     1.00x                          -
rpool  free                           1.58T                          -
rpool  allocated                      240G                           -
rpool  readonly                       off                            -
rpool  ashift                         12                             local
rpool  comment                        -                              default
rpool  expandsize                     -                              -
rpool  freeing                        0                              -
rpool  fragmentation                  28%                            -
rpool  leaked                         0                              -
rpool  multihost                      off                            default
rpool  feature@async_destroy          enabled                        local
rpool  feature@empty_bpobj            active                         local
rpool  feature@lz4_compress           active                         local
rpool  feature@multi_vdev_crash_dump  enabled                        local
rpool  feature@spacemap_histogram     active                         local
rpool  feature@enabled_txg            active                         local
rpool  feature@hole_birth             active                         local
rpool  feature@extensible_dataset     active                         local
rpool  feature@embedded_data          active                         local
rpool  feature@bookmarks              enabled                        local
rpool  feature@filesystem_limits      enabled                        local
rpool  feature@large_blocks           enabled                        local
rpool  feature@large_dnode            enabled                        local
rpool  feature@sha512                 enabled                        local
rpool  feature@skein                  enabled                        local
rpool  feature@edonr                  enabled                        local
rpool  feature@userobj_accounting     active                         local

Code:
root@prometheus:~# zfs get all rpool/data/vm-100-disk-1
NAME                      PROPERTY              VALUE                  SOURCE
rpool/data/vm-100-disk-1  type                  volume                 -
rpool/data/vm-100-disk-1  creation              Sat Sep 26 16:13 2020  -
rpool/data/vm-100-disk-1  used                  77.1G                  -
rpool/data/vm-100-disk-1  available             1.52T                  -
rpool/data/vm-100-disk-1  referenced            76.7G                  -
rpool/data/vm-100-disk-1  compressratio         1.06x                  -
rpool/data/vm-100-disk-1  reservation           none                   default
rpool/data/vm-100-disk-1  volsize               100G                   local
rpool/data/vm-100-disk-1  volblocksize          8K                     default
rpool/data/vm-100-disk-1  checksum              on                     default
rpool/data/vm-100-disk-1  compression           on                     inherited from rpool
rpool/data/vm-100-disk-1  readonly              off                    default
rpool/data/vm-100-disk-1  createtxg             521173512              -
rpool/data/vm-100-disk-1  copies                1                      default
rpool/data/vm-100-disk-1  refreservation        none                   default
rpool/data/vm-100-disk-1  guid                  14429525371562425641   -
rpool/data/vm-100-disk-1  primarycache          all                    default
rpool/data/vm-100-disk-1  secondarycache        all                    default
rpool/data/vm-100-disk-1  usedbysnapshots       386M                   -
rpool/data/vm-100-disk-1  usedbydataset         76.7G                  -
rpool/data/vm-100-disk-1  usedbychildren        0B                     -
rpool/data/vm-100-disk-1  usedbyrefreservation  0B                     -
rpool/data/vm-100-disk-1  logbias               latency                default
rpool/data/vm-100-disk-1  dedup                 off                    default
rpool/data/vm-100-disk-1  mlslabel              none                   default
rpool/data/vm-100-disk-1  sync                  disabled               inherited from rpool
rpool/data/vm-100-disk-1  refcompressratio      1.05x                  -
rpool/data/vm-100-disk-1  written               10.8M                  -
rpool/data/vm-100-disk-1  logicalused           81.2G                  -
rpool/data/vm-100-disk-1  logicalreferenced     80.8G                  -
rpool/data/vm-100-disk-1  volmode               default                default
rpool/data/vm-100-disk-1  snapshot_limit        none                   default
rpool/data/vm-100-disk-1  snapshot_count        none                   default
rpool/data/vm-100-disk-1  snapdev               hidden                 default
rpool/data/vm-100-disk-1  context               none                   default
rpool/data/vm-100-disk-1  fscontext             none                   default
rpool/data/vm-100-disk-1  defcontext            none                   default
rpool/data/vm-100-disk-1  rootcontext           none                   default
rpool/data/vm-100-disk-1  redundant_metadata    all                    default
 
Host 2

Code:
root@prometheus2:~# zfs get all rpool/data/nucleo/vm-100-disk-1
NAME                             PROPERTY              VALUE                  SOURCE
rpool/data/nucleo/vm-100-disk-1  type                  volume                 -
rpool/data/nucleo/vm-100-disk-1  creation              Tue Oct  6 22:48 2020  -
rpool/data/nucleo/vm-100-disk-1  used                  111G                   -
rpool/data/nucleo/vm-100-disk-1  available             4.81T                  -
rpool/data/nucleo/vm-100-disk-1  referenced            111G                   -
rpool/data/nucleo/vm-100-disk-1  compressratio         1.05x                  -
rpool/data/nucleo/vm-100-disk-1  reservation           none                   default
rpool/data/nucleo/vm-100-disk-1  volsize               100G                   local
rpool/data/nucleo/vm-100-disk-1  volblocksize          8K                     default
rpool/data/nucleo/vm-100-disk-1  checksum              on                     default
rpool/data/nucleo/vm-100-disk-1  compression           on                     inherited from rpool
rpool/data/nucleo/vm-100-disk-1  readonly              off                    default
rpool/data/nucleo/vm-100-disk-1  createtxg             73079                  -
rpool/data/nucleo/vm-100-disk-1  copies                1                      default
rpool/data/nucleo/vm-100-disk-1  refreservation        none                   default
rpool/data/nucleo/vm-100-disk-1  guid                  5368229525728128062    -
rpool/data/nucleo/vm-100-disk-1  primarycache          all                    default
rpool/data/nucleo/vm-100-disk-1  secondarycache        all                    default
rpool/data/nucleo/vm-100-disk-1  usedbysnapshots       224M                   -
rpool/data/nucleo/vm-100-disk-1  usedbydataset         111G                   -
rpool/data/nucleo/vm-100-disk-1  usedbychildren        0B                     -
rpool/data/nucleo/vm-100-disk-1  usedbyrefreservation  0B                     -
rpool/data/nucleo/vm-100-disk-1  logbias               latency                default
rpool/data/nucleo/vm-100-disk-1  objsetid              4864                   -
rpool/data/nucleo/vm-100-disk-1  dedup                 off                    default
rpool/data/nucleo/vm-100-disk-1  mlslabel              none                   default
rpool/data/nucleo/vm-100-disk-1  sync                  standard               inherited from rpool
rpool/data/nucleo/vm-100-disk-1  refcompressratio      1.05x                  -
rpool/data/nucleo/vm-100-disk-1  written               0                      -
rpool/data/nucleo/vm-100-disk-1  logicalused           81.0G                  -
rpool/data/nucleo/vm-100-disk-1  logicalreferenced     80.8G                  -
rpool/data/nucleo/vm-100-disk-1  volmode               default                default
rpool/data/nucleo/vm-100-disk-1  snapshot_limit        none                   default
rpool/data/nucleo/vm-100-disk-1  snapshot_count        none                   default
rpool/data/nucleo/vm-100-disk-1  snapdev               hidden                 default
rpool/data/nucleo/vm-100-disk-1  context               none                   default
rpool/data/nucleo/vm-100-disk-1  fscontext             none                   default
rpool/data/nucleo/vm-100-disk-1  defcontext            none                   default
rpool/data/nucleo/vm-100-disk-1  rootcontext           none                   default
rpool/data/nucleo/vm-100-disk-1  redundant_metadata    all                    default
rpool/data/nucleo/vm-100-disk-1  encryption            off                    default
rpool/data/nucleo/vm-100-disk-1  keylocation           none                   default
rpool/data/nucleo/vm-100-disk-1  keyformat             none                   default
rpool/data/nucleo/vm-100-disk-1  pbkdf2iters           0                      default
 
The logicalused is basically the same for both datasets (as it should be). The volume properties are also basically the same. But I don't see a real reason why there should be such a difference in how the data is organized in the two pools.
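One way to see the discrepancy at a glance is the ratio of physical to logical usage on each side (numbers taken from the outputs above):

Code:
# host 1:  77.1G used / 81.2G logicalused  ->  ~0.95x (compression wins)
# host 2:  111G  used / 81.0G logicalused  ->  ~1.37x (allocation overhead)
zfs get -H used,logicalused rpool/data/vm-100-disk-1          # on host 1
zfs get -H used,logicalused rpool/data/nucleo/vm-100-disk-1   # on host 2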

Running a diff between the zpool properties of both pools shows:
Code:
< rpool  size                           1.81T                          -
< rpool  capacity                       12%                            -
---
> rpool  size                           7.27T                          -
> rpool  capacity                       5%                             -
6c8
< rpool  guid                           17841754175230960812           -
---
> rpool  guid                           7662771843657390416            -
17,18c19,20
< rpool  free                           1.58T                          -
< rpool  allocated                      240G                           -
---
> rpool  free                           6.85T                          -
> rpool  allocated                      422G                           -
24c26
< rpool  fragmentation                  28%                            -
---
> rpool  fragmentation                  0%                             -
26a29,31
> rpool  checkpoint                     -                              -
> rpool  load_guid                      14600233907018335643           -
> rpool  autotrim                       off                            default
43a49,59
> rpool  feature@encryption             enabled                        local
> rpool  feature@project_quota          active                         local
> rpool  feature@device_removal         enabled                        local
> rpool  feature@obsolete_counts        enabled                        local
> rpool  feature@zpool_checkpoint       enabled                        local
> rpool  feature@spacemap_v2            active                         local
> rpool  feature@allocation_classes     enabled                        local
> rpool  feature@resilver_defer         enabled                        local
> rpool  feature@bookmark_v2            enabled                        local

Some wild guesses:
  • The pool on host 1 is older; e.g., the autotrim feature did not exist back then. Maybe being created by different ZFS versions has an effect?
  • The pool on host 2 is much larger.
  • The pool on host 2 has the encryption feature enabled. But as the dataset doesn't use encryption, that would be a strange reason.
What you could still test is to send the dataset completely (that is, all snapshots up to <some_recent_snapshot>) via
Code:
zfs send -Rp rpool/data/vm-100-disk-1@<some_recent_snapshot> | ssh <IP host 2> zfs recv rpool/<some_name>
and see whether the usage still differs even then.
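Once the receive has finished, you can compare the space usage of the test copy and destroy it again (same placeholder name as above):

Code:
zfs list -o space rpool/<some_name>
# remove the test copy afterwards
zfs destroy -r rpool/<some_name>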
 
A different pool structure (e.g. mirror vs. RAIDZ) can also cause this.
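With an 8K volblocksize and ashift=12 (4K sectors, as both zpool outputs show), RAIDZ parity and padding overhead is considerable. A rough sketch of the arithmetic, assuming a RAIDZ1 vdev on host 2 (the vdev layout isn't shown in these outputs):

Code:
# each 8K volblock on an ashift=12 RAIDZ1:
#   2 data sectors + 1 parity sector = 3 sectors,
#   padded to a multiple of (parity + 1) = 2  ->  4 sectors
echo $(( (2 + 1 + 1) * 4 ))K   # -> 16K raw allocated per 8K of data
# zfs list deflates raw space using a full-stripe ratio, so small volblocks
# report noticeably more "used" than the same data on a mirror, where an 8K
# block is simply accounted as 8K.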
 
Oh, gotcha. Currently host 1 is RAID 10 and host 2 is RAIDZ1, so that could be the issue, but it's odd.
Also, host 1 runs Proxmox 5 and host 2 runs Proxmox 6.
 
