pve-zsync - very slow CT backups compared to VM

fmo

Hi,

I have an issue with terribly slow pve-zsync container backups compared to VM backups. Both servers (source and destination) are enterprise-grade machines running the latest version of Proxmox VE 5.4.

I performed some pve-zsync backup tests with a clean/default installation of an Ubuntu 18.04 CT and a Windows 10 VM. It took almost 36 minutes to back up the CT (~630 MB) but only 10 minutes to back up the VM (~9.6 GB).
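
In rough numbers (a back-of-the-envelope check using the sizes and times above, figures rounded):

Code:
# effective transfer rate, roughly
echo "scale=2; 630/(36*60)" | bc        # CT: ~0.29 MB/s
echo "scale=2; 9.6*1024/(10*60)" | bc   # VM: ~16.4 MB/s

So the CT sync crawls along at well under 1 MB/s, while the VM sync sustains a far higher rate.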

Any ideas?

Thanks.

Code:
root@SERVER1:~# pve-zsync sync -source 183 -dest SERVER2:hdd-pool/backup --verbose --maxsnap 10
full send of nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46 estimated size is 617M
total estimated size is 617M
TIME        SENT   SNAPSHOT
01:43:48   19.1M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
01:43:49   19.1M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
01:43:50   19.1M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
01:43:51   19.9M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
01:43:52   20.4M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
01:43:53   20.4M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
01:43:54   21.3M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
01:43:55   21.8M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
01:43:56   22.0M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46

[...]

02:19:37    632M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
02:19:38    632M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
02:19:39    633M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
02:19:40    633M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
02:19:41    633M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
02:19:42    634M   nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46
Code:
root@SERVER1:~# pve-zsync sync -source 184 -dest SERVER2:hdd-pool/backup --verbose --maxsnap 10
full send of nvme-pool/vm-184-disk-0@rep_default_2019-10-16_02:39:56 estimated size is 9.59G
total estimated size is 9.59G
TIME        SENT   SNAPSHOT
02:39:58   19.7M   nvme-pool/vm-184-disk-0@rep_default_2019-10-16_02:39:56
02:39:59   25.2M   nvme-pool/vm-184-disk-0@rep_default_2019-10-16_02:39:56
02:40:00   29.9M   nvme-pool/vm-184-disk-0@rep_default_2019-10-16_02:39:56

[...]

02:49:38   9.65G   nvme-pool/vm-184-disk-0@rep_default_2019-10-16_02:39:56
02:49:39   9.67G   nvme-pool/vm-184-disk-0@rep_default_2019-10-16_02:39:56
02:49:40   9.68G   nvme-pool/vm-184-disk-0@rep_default_2019-10-16_02:39:56
Code:
### SERVER 1 ###
root@SERVER1:~# pveversion -v
proxmox-ve: 5.4-2 (running kernel: 4.15.18-21-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-9
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-10-pve: 4.15.18-32
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-55
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-40
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
pve-zsync: 1.7-4
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2

### SERVER 2 ###

root@SERVER2:~# pveversion -v
proxmox-ve: 5.4-2 (running kernel: 4.15.18-21-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-9
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-13-pve: 4.15.18-37
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-10-pve: 4.15.18-32
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-55
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-40
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
pve-zsync: 1.7-4
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
Code:
### SERVER 1 ###

root@SERVER1:~# zpool status
  pool: nvme-pool
state: ONLINE
  scan: scrub repaired 0B in 0h40m with 0 errors on Sun Oct 13 01:04:44 2019
config:

        NAME                                                STATE     READ WRITE CKSUM
        nvme-pool                                           ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            nvme-SAMSUNG_MZWLL1T6HEHP-00003_XXXXXXXXXX0212  ONLINE       0     0     0
            nvme-SAMSUNG_MZWLL1T6HEHP-00003_XXXXXXXXXX0207  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
state: ONLINE
  scan: scrub repaired 0B in 0h8m with 0 errors on Sun Oct 13 00:32:14 2019
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            wwn-0xxxxxxxxxxxxxx32d-part3  ONLINE       0     0     0
            wwn-0xxxxxxxxxxxxxx2c8-part3  ONLINE       0     0     0

errors: No known data errors

### SERVER 2 ###

root@SERVER2:~# zpool status
  pool: hdd-pool
state: ONLINE
  scan: scrub repaired 0B in 1h9m with 0 errors on Sun Oct 13 01:33:28 2019
config:

        NAME                                             STATE     READ WRITE CKSUM
        hdd-pool                                         ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            scsi-xxxxxxxxxxxxx1544                       ONLINE       0     0     0
            scsi-xxxxxxxxxxxxxb888                       ONLINE       0     0     0
          mirror-1                                       ONLINE       0     0     0
            scsi-xxxxxxxxxxxxx3218                       ONLINE       0     0     0
            scsi-xxxxxxxxxxxxx59b8                       ONLINE       0     0     0
          mirror-2                                       ONLINE       0     0     0
            scsi-xxxxxxxxxxxxxdc5c                       ONLINE       0     0     0
            scsi-xxxxxxxxxxxxxebdc                       ONLINE       0     0     0
          mirror-3                                       ONLINE       0     0     0
            scsi-xxxxxxxxxxxxxb5a4                       ONLINE       0     0     0
            scsi-xxxxxxxxxxxxx7e30                       ONLINE       0     0     0
        logs
          mirror-4                                       ONLINE       0     0     0
            nvme-INTEL_SSDPE21K100GA_XXXXXXXXXXXF100EGN  ONLINE       0     0     0
            nvme-INTEL_SSDPE21K100GA_XXXXXXXXXXXL100EGN  ONLINE       0     0     0

errors: No known data errors

  pool: nvme-pool
state: ONLINE
  scan: scrub repaired 0B in 0h2m with 0 errors on Sun Oct 13 00:26:23 2019
config:

        NAME                                                STATE     READ WRITE CKSUM
        nvme-pool                                           ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            nvme-SAMSUNG_MZWLL1T6HEHP-00003_XXXXXXXXXXX311  ONLINE       0     0     0
            nvme-SAMSUNG_MZWLL1T6HEHP-00003_XXXXXXXXXXX198  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Oct 13 00:24:41 2019
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            wwn-0xxxxxxxxxxxxxxf62-part3  ONLINE       0     0     0
            wwn-0xxxxxxxxxxxxxxf5e-part3  ONLINE       0     0     0

errors: No known data errors
Code:
### SERVER 1 ###

root@SERVER1:~#  zfs get all nvme-pool | grep local
nvme-pool  compression           lz4                    local
nvme-pool  atime                 off                    local
nvme-pool  xattr                 sa                     local
nvme-pool  dnodesize             auto                   local

### SERVER 2 ###

root@SERVER2:~# zfs get all hdd-pool | grep local
hdd-pool  compression           lz4                    local
hdd-pool  atime                 off                    local
hdd-pool  xattr                 sa                     local
hdd-pool  dnodesize             auto                   local
 
Hi,

This is strange.
Please send all the parameters of the involved datasets.
Code:
zfs get all nvme-pool/vm-184-disk-0
zfs get all nvme-pool/subvol-183-disk-0
zfs list -t snapshot nvme-pool/subvol-183-disk-0
zfs list -t snapshot nvme-pool/vm-184-disk-0
 
Hi Wolfgang,

There was no activity on the tested CT and VM during the test (both were shut down before the pve-zsync transfer).

Code:
root@SERVER1:~# zfs get all nvme-pool/vm-184-disk-0
NAME                     PROPERTY              VALUE                  SOURCE
nvme-pool/vm-184-disk-0  type                  volume                 -
nvme-pool/vm-184-disk-0  creation              Wed Oct 16  2:29 2019  -
nvme-pool/vm-184-disk-0  used                  8.04G                  -
nvme-pool/vm-184-disk-0  available             522G                   -
nvme-pool/vm-184-disk-0  referenced            8.04G                  -
nvme-pool/vm-184-disk-0  compressratio         1.17x                  -
nvme-pool/vm-184-disk-0  reservation           none                   default
nvme-pool/vm-184-disk-0  volsize               32G                    local
nvme-pool/vm-184-disk-0  volblocksize          8K                     default
nvme-pool/vm-184-disk-0  checksum              on                     default
nvme-pool/vm-184-disk-0  compression           lz4                    inherited from nvme-pool
nvme-pool/vm-184-disk-0  readonly              off                    default
nvme-pool/vm-184-disk-0  createtxg             3428459                -
nvme-pool/vm-184-disk-0  copies                1                      default
nvme-pool/vm-184-disk-0  refreservation        none                   default
nvme-pool/vm-184-disk-0  guid                  8875766788435763713    -
nvme-pool/vm-184-disk-0  primarycache          all                    default
nvme-pool/vm-184-disk-0  secondarycache        all                    default
nvme-pool/vm-184-disk-0  usedbysnapshots       0B                     -
nvme-pool/vm-184-disk-0  usedbydataset         8.04G                  -
nvme-pool/vm-184-disk-0  usedbychildren        0B                     -
nvme-pool/vm-184-disk-0  usedbyrefreservation  0B                     -
nvme-pool/vm-184-disk-0  logbias               latency                default
nvme-pool/vm-184-disk-0  dedup                 off                    default
nvme-pool/vm-184-disk-0  mlslabel              none                   default
nvme-pool/vm-184-disk-0  sync                  standard               default
nvme-pool/vm-184-disk-0  refcompressratio      1.17x                  -
nvme-pool/vm-184-disk-0  written               0                      -
nvme-pool/vm-184-disk-0  logicalused           9.38G                  -
nvme-pool/vm-184-disk-0  logicalreferenced     9.38G                  -
nvme-pool/vm-184-disk-0  volmode               default                default
nvme-pool/vm-184-disk-0  snapshot_limit        none                   default
nvme-pool/vm-184-disk-0  snapshot_count        none                   default
nvme-pool/vm-184-disk-0  snapdev               hidden                 default
nvme-pool/vm-184-disk-0  context               none                   default
nvme-pool/vm-184-disk-0  fscontext             none                   default
nvme-pool/vm-184-disk-0  defcontext            none                   default
nvme-pool/vm-184-disk-0  rootcontext           none                   default
nvme-pool/vm-184-disk-0  redundant_metadata    all                    default
Code:
root@SERVER1:~# zfs get all nvme-pool/subvol-183-disk-0
NAME                         PROPERTY              VALUE                         SOURCE
nvme-pool/subvol-183-disk-0  type                  filesystem                    -
nvme-pool/subvol-183-disk-0  creation              Wed Oct 16  0:36 2019         -
nvme-pool/subvol-183-disk-0  used                  362M                          -
nvme-pool/subvol-183-disk-0  available             9.65G                         -
nvme-pool/subvol-183-disk-0  referenced            362M                          -
nvme-pool/subvol-183-disk-0  compressratio         1.88x                         -
nvme-pool/subvol-183-disk-0  mounted               yes                           -
nvme-pool/subvol-183-disk-0  quota                 none                          default
nvme-pool/subvol-183-disk-0  reservation           none                          default
nvme-pool/subvol-183-disk-0  recordsize            128K                          default
nvme-pool/subvol-183-disk-0  mountpoint            /nvme-pool/subvol-183-disk-0  default
nvme-pool/subvol-183-disk-0  sharenfs              off                           default
nvme-pool/subvol-183-disk-0  checksum              on                            default
nvme-pool/subvol-183-disk-0  compression           lz4                           inherited from nvme-pool
nvme-pool/subvol-183-disk-0  atime                 off                           inherited from nvme-pool
nvme-pool/subvol-183-disk-0  devices               on                            default
nvme-pool/subvol-183-disk-0  exec                  on                            default
nvme-pool/subvol-183-disk-0  setuid                on                            default
nvme-pool/subvol-183-disk-0  readonly              off                           default
nvme-pool/subvol-183-disk-0  zoned                 off                           default
nvme-pool/subvol-183-disk-0  snapdir               hidden                        default
nvme-pool/subvol-183-disk-0  aclinherit            restricted                    default
nvme-pool/subvol-183-disk-0  createtxg             3426555                       -
nvme-pool/subvol-183-disk-0  canmount              on                            default
nvme-pool/subvol-183-disk-0  xattr                 sa                            local
nvme-pool/subvol-183-disk-0  copies                1                             default
nvme-pool/subvol-183-disk-0  version               5                             -
nvme-pool/subvol-183-disk-0  utf8only              off                           -
nvme-pool/subvol-183-disk-0  normalization         none                          -
nvme-pool/subvol-183-disk-0  casesensitivity       sensitive                     -
nvme-pool/subvol-183-disk-0  vscan                 off                           default
nvme-pool/subvol-183-disk-0  nbmand                off                           default
nvme-pool/subvol-183-disk-0  sharesmb              off                           default
nvme-pool/subvol-183-disk-0  refquota              10G                           local
nvme-pool/subvol-183-disk-0  refreservation        none                          default
nvme-pool/subvol-183-disk-0  guid                  808535489595528691            -
nvme-pool/subvol-183-disk-0  primarycache          all                           default
nvme-pool/subvol-183-disk-0  secondarycache        all                           default
nvme-pool/subvol-183-disk-0  usedbysnapshots       0B                            -
nvme-pool/subvol-183-disk-0  usedbydataset         362M                          -
nvme-pool/subvol-183-disk-0  usedbychildren        0B                            -
nvme-pool/subvol-183-disk-0  usedbyrefreservation  0B                            -
nvme-pool/subvol-183-disk-0  logbias               latency                       default
nvme-pool/subvol-183-disk-0  dedup                 off                           default
nvme-pool/subvol-183-disk-0  mlslabel              none                          default
nvme-pool/subvol-183-disk-0  sync                  standard                      default
nvme-pool/subvol-183-disk-0  dnodesize             auto                          inherited from nvme-pool
nvme-pool/subvol-183-disk-0  refcompressratio      1.88x                         -
nvme-pool/subvol-183-disk-0  written               0                             -
nvme-pool/subvol-183-disk-0  logicalused           616M                          -
nvme-pool/subvol-183-disk-0  logicalreferenced     616M                          -
nvme-pool/subvol-183-disk-0  volmode               default                       default
nvme-pool/subvol-183-disk-0  filesystem_limit      none                          default
nvme-pool/subvol-183-disk-0  snapshot_limit        none                          default
nvme-pool/subvol-183-disk-0  filesystem_count      none                          default
nvme-pool/subvol-183-disk-0  snapshot_count        none                          default
nvme-pool/subvol-183-disk-0  snapdev               hidden                        default
nvme-pool/subvol-183-disk-0  acltype               posixacl                      local
nvme-pool/subvol-183-disk-0  context               none                          default
nvme-pool/subvol-183-disk-0  fscontext             none                          default
nvme-pool/subvol-183-disk-0  defcontext            none                          default
nvme-pool/subvol-183-disk-0  rootcontext           none                          default
nvme-pool/subvol-183-disk-0  relatime              off                           default
nvme-pool/subvol-183-disk-0  redundant_metadata    all                           default
nvme-pool/subvol-183-disk-0  overlay               off                           default
Code:
root@SERVER1:~# zfs list -t snapshot | grep nvme-pool/subvol-183
nvme-pool/subvol-183-disk-0@rep_default_2019-10-16_01:43:46      0B      -   362M  -
root@SERVER1:~# zfs list -t snapshot | grep nvme-pool/vm-184
nvme-pool/vm-184-disk-0@rep_default_2019-10-16_02:39:56          0B      -  8.04G  -

Thank you.
 
Do you use ACLs in the container?
 
No, I don't use ACLs in the container.

I have this problem with all my containers, which is why I ran this test with a fresh Ubuntu CT template downloaded from the Proxmox repository, to rule out any special configuration I may have in my containers. I haven't even started (or logged in to) the test container since creating it.
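
For completeness, a quick way to spot-check for extended POSIX ACLs inside the subvol would be something like the following (mountpoint taken from the zfs output above; getfacl -s only prints entries that go beyond the plain owner/group/other bits):

Code:
# list only files/dirs carrying extended ACL entries (none expected here)
getfacl -R -s -p /nvme-pool/subvol-183-disk-0 2>/dev/null | head -n 40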

Code:
root@SERVER1:~# pct config 183
arch: amd64
cores: 8
hostname: test.domain.tld
memory: 4096
net0: name=eth0,bridge=vmbr2,gw=XX.XX.XX.65,hwaddr=XX:XX:XX:XX:XX:D2,ip=XX.XX.XX.83/27,type=veth
ostype: ubuntu
rootfs: local-zfs-nvme:subvol-183-disk-0,size=10G
swap: 512
unprivileged: 1
 
Thanks for the config.
I will test it tomorrow on our setup here.
 
On the destination node I noticed different ZFS behaviour regarding disk writes during CT backups compared with VM backups: zpool iostat shows continuous writes with high IOPS but low bandwidth during CT backups, and sporadic writes with higher bandwidth during VM backups.

Is this expected behaviour?

Thanks.
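
The samples below come from a periodic per-vdev zpool iostat on the destination pool (the 5-second interval here is just illustrative):

Code:
zpool iostat -v hdd-pool 5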

Code:
--------------------------------------------------  -----  -----  -----  -----  -----  -----
                                                      capacity     operations     bandwidth
pool                                                alloc   free   read  write   read  write
--------------------------------------------------  -----  -----  -----  -----  -----  -----
hdd-pool                                            2.02M  21.7T      0  1.66K      0  14.9M
  mirror                                             364K  5.44T      0    505      0  4.49M
    scsi-xxxxxxxxxxxxx1544                              -      -      0    253      0  2.25M
    scsi-xxxxxxxxxxxxxb888                              -      -      0    251      0  2.25M
  mirror                                             316K  5.44T      0    283      0  2.28M
    scsi-xxxxxxxxxxxxx3218                              -      -      0    140      0  1.14M
    scsi-xxxxxxxxxxxxx59b8                              -      -      0    142      0  1.14M
  mirror                                             708K  5.44T      0    317      0  2.58M
    scsi-xxxxxxxxxxxxxdc5c                              -      -      0    159      0  1.29M
    scsi-xxxxxxxxxxxxxebdc                              -      -      0    157      0  1.29M
  mirror                                             680K  5.44T      0    594      0  5.52M
    scsi-xxxxxxxxxxxxxb5a4                              -      -      0    294      0  2.76M
    scsi-xxxxxxxxxxxxx7e30                              -      -      0    300      0  2.76M
logs                                                    -      -      -      -      -      -
  mirror                                              12K  93.0G      0      0      0      0
    nvme-INTEL_SSDPE21K100GA_XXXXXXXXXXXF100EGN         -      -      0      0      0      0
    nvme-INTEL_SSDPE21K100GA_XXXXXXXXXXXL100EGN         -      -      0      0      0      0
--------------------------------------------------  -----  -----  -----  -----  -----  -----
                                                      capacity     operations     bandwidth
pool                                                alloc   free   read  write   read  write
--------------------------------------------------  -----  -----  -----  -----  -----  -----
hdd-pool                                            2.59M  21.7T      0  1.73K      0  15.6M
  mirror                                             960K  5.44T      0    562      0  4.88M
    scsi-xxxxxxxxxxxxx1544                              -      -      0    276      0  2.47M
    scsi-xxxxxxxxxxxxxb888                              -      -      0    286      0  2.41M
  mirror                                             352K  5.44T      0    447      0  3.59M
    scsi-xxxxxxxxxxxxx3218                              -      -      0    225      0  1.79M
    scsi-xxxxxxxxxxxxx59b8                              -      -      0    221      0  1.79M
  mirror                                             804K  5.44T      0    473      0  4.17M
    scsi-xxxxxxxxxxxxxdc5c                              -      -      0    233      0  2.09M
    scsi-xxxxxxxxxxxxxebdc                              -      -      0    239      0  2.08M
  mirror                                             532K  5.44T      0    293      0  2.99M
    scsi-xxxxxxxxxxxxxb5a4                              -      -      0    148      0  1.49M
    scsi-xxxxxxxxxxxxx7e30                              -      -      0    144      0  1.49M
logs                                                    -      -      -      -      -      -
  mirror                                              12K  93.0G      0      0      0      0
    nvme-INTEL_SSDPE21K100GA_XXXXXXXXXXXF100EGN         -      -      0      0      0      0
    nvme-INTEL_SSDPE21K100GA_XXXXXXXXXXXL100EGN         -      -      0      0      0      0
--------------------------------------------------  -----  -----  -----  -----  -----  -----
Code:
--------------------------------------------------  -----  -----  -----  -----  -----  -----
                                                      capacity     operations     bandwidth
pool                                                alloc   free   read  write   read  write
--------------------------------------------------  -----  -----  -----  -----  -----  -----
hdd-pool                                            83.7M  21.7T      0  1.28K      0  95.0M
  mirror                                            20.7M  5.44T      0    336      0  25.0M
    scsi-xxxxxxxxxxxxx1544                              -      -      0    163      0  12.5M
    scsi-xxxxxxxxxxxxxb888                              -      -      0    172      0  12.5M
  mirror                                            22.3M  5.44T      0    385      0  26.6M
    scsi-xxxxxxxxxxxxx3218                              -      -      0    197      0  13.3M
    scsi-xxxxxxxxxxxxx59b8                              -      -      0    187      0  13.3M
  mirror                                            23.0M  5.44T      0    304      0  26.1M
    scsi-xxxxxxxxxxxxxdc5c                              -      -      0    138      0  13.1M
    scsi-xxxxxxxxxxxxxebdc                              -      -      0    165      0  13.1M
  mirror                                            17.6M  5.44T      0    281      0  17.3M
    scsi-xxxxxxxxxxxxxb5a4                              -      -      0    145      0  8.63M
    scsi-xxxxxxxxxxxxx7e30                              -      -      0    135      0  8.63M
logs                                                    -      -      -      -      -      -
  mirror                                              12K  93.0G      0      0      0      0
    nvme-INTEL_SSDPE21K100GA_XXXXXXXXXXXF100EGN         -      -      0      0      0      0
    nvme-INTEL_SSDPE21K100GA_XXXXXXXXXXXL100EGN         -      -      0      0      0      0
--------------------------------------------------  -----  -----  -----  -----  -----  -----
                                                      capacity     operations     bandwidth
pool                                                alloc   free   read  write   read  write
--------------------------------------------------  -----  -----  -----  -----  -----  -----
hdd-pool                                            83.7M  21.7T      0      0      0      0
  mirror                                            20.7M  5.44T      0      0      0      0
    scsi-xxxxxxxxxxxxx1544                              -      -      0      0      0      0
    scsi-xxxxxxxxxxxxxb888                              -      -      0      0      0      0
  mirror                                            22.3M  5.44T      0      0      0      0
    scsi-xxxxxxxxxxxxx3218                              -      -      0      0      0      0
    scsi-xxxxxxxxxxxxx59b8                              -      -      0      0      0      0
  mirror                                            23.0M  5.44T      0      0      0      0
    scsi-xxxxxxxxxxxxxdc5c                              -      -      0      0      0      0
    scsi-xxxxxxxxxxxxxebdc                              -      -      0      0      0      0
  mirror                                            17.6M  5.44T      0      0      0      0
    scsi-xxxxxxxxxxxxxb5a4                              -      -      0      0      0      0
    scsi-xxxxxxxxxxxxx7e30                              -      -      0      0      0      0
logs                                                    -      -      -      -      -      -
  mirror                                              12K  93.0G      0      0      0      0
    nvme-INTEL_SSDPE21K100GA_XXXXXXXXXXXF100EGN         -      -      0      0      0      0
    nvme-INTEL_SSDPE21K100GA_XXXXXXXXXXXL100EGN         -      -      0      0      0      0
--------------------------------------------------  -----  -----  -----  -----  -----  -----
 
Yes, CT and KVM backups are completely different.
CT backups use the storage feature.
KVM has an internal backup routine.
 
On a freshly installed system, I did not see this behavior.
Can you check on the destination node how busy the ZFS disks are?
You could use "atop" for that.
 
The zpool is not busy; there are no VMs or CTs running on the destination node.

Code:
root@SERVER2:~# atop -PDSK 10
RESET
DSK SERVER2 1571650257 2019/10/21 12:30:57 732658 sda 10108524 2864554 159122528 3881407 130176320
DSK SERVER2 1571650257 2019/10/21 12:30:57 732658 sdb 9832996 2912469 159447096 3891771 130176320
DSK SERVER2 1571650257 2019/10/21 12:30:57 732658 sdc 9980040 3045897 167283256 4017562 134825216
DSK SERVER2 1571650257 2019/10/21 12:30:57 732658 sdd 9851864 3036188 167364760 4021939 134825216
DSK SERVER2 1571650257 2019/10/21 12:30:57 732658 sde 10551256 2689408 145291384 4609830 144828696
DSK SERVER2 1571650257 2019/10/21 12:30:57 732658 sdf 10484004 2654786 145273976 4620663 144828696
DSK SERVER2 1571650257 2019/10/21 12:30:57 732658 sdg 8252136 2925403 160968752 3600468 126292584
DSK SERVER2 1571650257 2019/10/21 12:30:57 732658 sdh 8293500 2977515 161277512 3603141 126292584
DSK SERVER2 1571650257 2019/10/21 12:30:57 732658 sdi 627964 311805 1482029 10608540 304647040
DSK SERVER2 1571650257 2019/10/21 12:30:57 732658 sdj 631952 311571 1472765 10491938 304647040
SEP
DSK SERVER2 1571650267 2019/10/21 12:31:07 10 sda 0 0 0 0 0
DSK SERVER2 1571650267 2019/10/21 12:31:07 10 sdb 0 0 0 0 0
DSK SERVER2 1571650267 2019/10/21 12:31:07 10 sdc 0 0 0 0 0
DSK SERVER2 1571650267 2019/10/21 12:31:07 10 sdd 0 0 0 0 0
DSK SERVER2 1571650267 2019/10/21 12:31:07 10 sde 0 0 0 0 0
DSK SERVER2 1571650267 2019/10/21 12:31:07 10 sdf 0 0 0 0 0
DSK SERVER2 1571650267 2019/10/21 12:31:07 10 sdg 0 0 0 0 0
DSK SERVER2 1571650267 2019/10/21 12:31:07 10 sdh 0 0 0 0 0
DSK SERVER2 1571650267 2019/10/21 12:31:07 10 sdi 8 4 0 143 3024
DSK SERVER2 1571650267 2019/10/21 12:31:07 10 sdj 8 4 0 140 3024
SEP
DSK SERVER2 1571650277 2019/10/21 12:31:17 10 sda 0 0 0 0 0
DSK SERVER2 1571650277 2019/10/21 12:31:17 10 sdb 0 0 0 0 0
DSK SERVER2 1571650277 2019/10/21 12:31:17 10 sdc 0 0 0 0 0
DSK SERVER2 1571650277 2019/10/21 12:31:17 10 sdd 0 0 0 0 0
DSK SERVER2 1571650277 2019/10/21 12:31:17 10 sde 0 0 0 0 0
DSK SERVER2 1571650277 2019/10/21 12:31:17 10 sdf 0 0 0 0 0
DSK SERVER2 1571650277 2019/10/21 12:31:17 10 sdg 0 0 0 0 0
DSK SERVER2 1571650277 2019/10/21 12:31:17 10 sdh 0 0 0 0 0
DSK SERVER2 1571650277 2019/10/21 12:31:17 10 sdi 8 4 0 139 2560
DSK SERVER2 1571650277 2019/10/21 12:31:17 10 sdj 8 4 0 138 2560
SEP
(Screenshots attached: 2019-10-21_124321.jpg, 2019-10-21_124748.jpg)
 
