[SOLVED] Backup is very slow, under 1MB/s, only one VM.

fireon

We have been using PBS for a while now. The network between PVE and PBS is Gigabit with an LACP bond. All VMs back up fast, but one (and yes, of course it is the biggest, at almost 7 TB) has been very slow for about three weeks. The speed is under 1 MB/s.

Code:
INFO: starting new backup job: vzdump 118 --mode snapshot --mailto admin@tux.lan --storage sicherung-data-vm --node virtu01 --quiet 1 --mailnotification failure
INFO: Starting Backup of VM 118 (qemu)
INFO: Backup started at 2021-03-13 08:30:02
INFO: status = running
INFO: VM Name: data.tux.lan
INFO: include disk 'scsi0' 'SSD-vmdata:vm-118-disk-1' 30G
INFO: include disk 'scsi1' 'SSD-vmdata:vm-118-disk-2' 8G
INFO: include disk 'scsi2' 'HDD-vmdata:vm-118-disk-3' 7000G
INFO: include disk 'efidisk0' 'SSD-vmdata:vm-118-disk-0' 128K
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/118/2021-03-13T07:30:02Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: enabling encryption
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '6cc05a40-6a05-480b-8a8e-8ac6ac560939'
INFO: resuming VM again
INFO: efidisk0: dirty-bitmap status: created new
INFO: scsi0: dirty-bitmap status: created new
INFO: scsi1: dirty-bitmap status: created new
INFO: scsi2: dirty-bitmap status: created new
INFO:   0% (492.0 MiB of 6.9 TiB) in 3s, read: 164.0 MiB/s, write: 5.3 MiB/s
INFO:   1% (70.4 GiB of 6.9 TiB) in 9m 11s, read: 130.7 MiB/s, write: 1.6 MiB/s
INFO:   2% (140.8 GiB of 6.9 TiB) in 18m 10s, read: 133.7 MiB/s, write: 0 B/s
INFO:   3% (211.2 GiB of 6.9 TiB) in 27m 20s, read: 131.1 MiB/s, write: 0 B/s
INFO:   4% (281.6 GiB of 6.9 TiB) in 36m 16s, read: 134.6 MiB/s, write: 0 B/s
INFO:   5% (352.0 GiB of 6.9 TiB) in 45m 31s, read: 129.7 MiB/s, write: 0 B/s
INFO:   6% (422.3 GiB of 6.9 TiB) in 55m 37s, read: 118.9 MiB/s, write: 0 B/s
INFO:   7% (492.7 GiB of 6.9 TiB) in 1h 5m 17s, read: 124.4 MiB/s, write: 0 B/s
INFO:   8% (563.1 GiB of 6.9 TiB) in 1h 15m 22s, read: 119.1 MiB/s, write: 0 B/s
INFO:   9% (633.5 GiB of 6.9 TiB) in 1h 24m 33s, read: 130.8 MiB/s, write: 29.7 KiB/s
INFO:  10% (703.9 GiB of 6.9 TiB) in 1h 33m 43s, read: 131.0 MiB/s, write: 7.4 KiB/s
INFO:  11% (774.3 GiB of 6.9 TiB) in 1h 42m 49s, read: 132.1 MiB/s, write: 0 B/s
INFO:  12% (844.6 GiB of 6.9 TiB) in 1h 52m 19s, read: 126.3 MiB/s, write: 0 B/s
INFO:  13% (914.9 GiB of 6.9 TiB) in 2h 1m 51s, read: 126.0 MiB/s, write: 0 B/s
INFO:  14% (985.4 GiB of 6.9 TiB) in 2h 11m 9s, read: 129.2 MiB/s, write: 0 B/s
INFO:  15% (1.0 TiB of 6.9 TiB) in 2h 21m 54s, read: 111.7 MiB/s, write: 889.1 KiB/s
INFO:  16% (1.1 TiB of 6.9 TiB) in 2h 31m 59s, read: 119.2 MiB/s, write: 1.2 MiB/s
INFO:  17% (1.2 TiB of 6.9 TiB) in 2h 41m 21s, read: 128.2 MiB/s, write: 1.2 MiB/s
INFO:  18% (1.2 TiB of 6.9 TiB) in 2h 51m 7s, read: 123.0 MiB/s, write: 1.0 MiB/s
INFO:  19% (1.3 TiB of 6.9 TiB) in 3h 35s, read: 127.0 MiB/s, write: 0 B/s
INFO:  20% (1.4 TiB of 6.9 TiB) in 3h 10m 24s, read: 122.2 MiB/s, write: 41.7 KiB/s
INFO:  21% (1.4 TiB of 6.9 TiB) in 3h 20m 16s, read: 121.9 MiB/s, write: 0 B/s
INFO:  22% (1.5 TiB of 6.9 TiB) in 3h 29m 17s, read: 133.2 MiB/s, write: 7.6 KiB/s
INFO:  23% (1.6 TiB of 6.9 TiB) in 3h 38m 37s, read: 128.6 MiB/s, write: 0 B/s


No I/O wait, no high CPU, no other network traffic on either host. Nothing blocking here.

Source: pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve) client version: 1.0.8
Target: pve-manager/6.3-3/eee5f901 (running kernel: 5.4.103-1-pve) client version: 1.0.9, PBS Version 1.0-9

Both have a subscription, only the PBS repo does not; that is why its version is a little higher. It can't be the version, I only updated yesterday in the hope of an improvement. Here is the VM config:

Code:
agent: 1
bios: ovmf
boot: order=scsi0;net0
cores: 4
description: UCS  Memberserver Dateifreigaben SMB und NFS TFTP-Server- FTP-Server- Zentrales Logging mit Rsyslog- MariaDB Server- Apt-Proxy
efidisk0: SSD-vmdata:vm-118-disk-0,size=128K
hotplug: disk,network,usb,memory
memory: 3072
name: data.tux.lan
net0: virtio=C6:91:02:92:40:92,bridge=vmbr0
numa: 1
onboot: 1
ostype: l26
scsi0: SSD-vmdata:vm-118-disk-1,discard=on,size=30G,ssd=1
scsi1: SSD-vmdata:vm-118-disk-2,discard=on,size=8G,ssd=1
scsi2: HDD-vmdata:vm-118-disk-3,discard=on,size=7000G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=0383fdd0-db76-446f-85d7-1501b6cc634a
sockets: 1
startup: order=2,up=45


The strange thing is, it's not always like that. Two days ago it finished super fast. Here is the log from the PBS:
Code:
Proxmox Backup Server 1.0-9
()
2021-03-11T08:30:03+01:00: starting new backup on datastore 'sicherung-data-vm': "vm/118/2021-03-11T07:30:02Z"
2021-03-11T08:30:03+01:00: download 'index.json.blob' from previous backup.
2021-03-11T08:30:03+01:00: register chunks in 'drive-efidisk0.img.fidx' from previous backup.
2021-03-11T08:30:03+01:00: download 'drive-efidisk0.img.fidx' from previous backup.
2021-03-11T08:30:03+01:00: created new fixed index 1 ("vm/118/2021-03-11T07:30:02Z/drive-efidisk0.img.fidx")
2021-03-11T08:30:03+01:00: register chunks in 'drive-scsi0.img.fidx' from previous backup.
2021-03-11T08:30:03+01:00: download 'drive-scsi0.img.fidx' from previous backup.
2021-03-11T08:30:03+01:00: created new fixed index 2 ("vm/118/2021-03-11T07:30:02Z/drive-scsi0.img.fidx")
2021-03-11T08:30:03+01:00: register chunks in 'drive-scsi1.img.fidx' from previous backup.
2021-03-11T08:30:03+01:00: download 'drive-scsi1.img.fidx' from previous backup.
2021-03-11T08:30:03+01:00: created new fixed index 3 ("vm/118/2021-03-11T07:30:02Z/drive-scsi1.img.fidx")
2021-03-11T08:30:03+01:00: register chunks in 'drive-scsi2.img.fidx' from previous backup.
2021-03-11T08:30:03+01:00: download 'drive-scsi2.img.fidx' from previous backup.
2021-03-11T08:30:04+01:00: created new fixed index 4 ("vm/118/2021-03-11T07:30:02Z/drive-scsi2.img.fidx")
2021-03-11T08:30:04+01:00: add blob "/backup-data/pbs-store2-sicherung/vm/118/2021-03-11T07:30:02Z/qemu-server.conf.blob" (565 bytes, comp: 565)
2021-03-11T08:31:32+01:00: Upload statistics for 'drive-scsi2.img.fidx'
2021-03-11T08:31:32+01:00: UUID: 7371b335c0294a96ad7594d7eef79e0d
2021-03-11T08:31:32+01:00: Checksum: 7782473926e04e612faa66efd8de458d777364b53832bbaf6a340815de5ccf69
2021-03-11T08:31:32+01:00: Size: 5007998976
2021-03-11T08:31:32+01:00: Chunk count: 1194
2021-03-11T08:31:32+01:00: Upload size: 4953473024 (98%)
2021-03-11T08:31:32+01:00: Duplicates: 13+4 (1%)
2021-03-11T08:31:32+01:00: Compression: 11%
2021-03-11T08:31:32+01:00: successfully closed fixed index 4
2021-03-11T08:31:32+01:00: Upload statistics for 'drive-scsi1.img.fidx'
2021-03-11T08:31:32+01:00: UUID: b761f32eec674b778dd37fdb04b98bd0
2021-03-11T08:31:32+01:00: Checksum: b3d0720b0de5cb3ccae68a1d8e9d83b428276d00501e954ef35850e72bbe94e7
2021-03-11T08:31:32+01:00: Size: 0
2021-03-11T08:31:32+01:00: Chunk count: 0
2021-03-11T08:31:32+01:00: successfully closed fixed index 3
2021-03-11T08:31:34+01:00: Upload statistics for 'drive-scsi0.img.fidx'
2021-03-11T08:31:34+01:00: UUID: bb6488e2c5c3401ebfe6385dfa0367e5
2021-03-11T08:31:34+01:00: Checksum: ee071217797945d322287337530b49f1ab4ea23b2d35283440f7e51fb18820bc
2021-03-11T08:31:34+01:00: Size: 4102029312
2021-03-11T08:31:34+01:00: Chunk count: 978
2021-03-11T08:31:34+01:00: Upload size: 3997171712 (97%)
2021-03-11T08:31:34+01:00: Duplicates: 25+4 (2%)
2021-03-11T08:31:34+01:00: Compression: 20%
2021-03-11T08:31:34+01:00: successfully closed fixed index 2
2021-03-11T08:31:34+01:00: Upload statistics for 'drive-efidisk0.img.fidx'
2021-03-11T08:31:34+01:00: UUID: 5c482d8bd55e47b6a2c5a9038dcb33d6
2021-03-11T08:31:34+01:00: Checksum: 12351999633754e92691d3263bd3ae30ddd1d0a69778e4bd1df28f0fda9b8d31
2021-03-11T08:31:34+01:00: Size: 0
2021-03-11T08:31:34+01:00: Chunk count: 0
2021-03-11T08:31:34+01:00: successfully closed fixed index 1
2021-03-11T08:31:34+01:00: add blob "/backup-data/pbs-store2-sicherung/vm/118/2021-03-11T07:30:02Z/index.json.blob" (596 bytes, comp: 596)
2021-03-11T08:31:34+01:00: successfully finished backup
2021-03-11T08:31:34+01:00: backup finished successfully
2021-03-11T08:31:34+01:00: TASK OK

At the same time, right now, I can push a full load to the backup server, for example an ISO. What can be going on here?

Thanks a lot.
 
As you can see in the logs there is no writing. You are reading at a speed of ~120-130 MB/s.
You have 7 TB of data, so do the math... reading 7 TB at 130 MB/s means around 16 hours just to read (see the quick calculation below)...
You have to understand that the backup is incremental. It has to READ all the data, compare it, and write only the new data...
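
A back-of-the-envelope check, assuming a sustained read of roughly 130 MiB/s over the whole 7 TiB disk:
Code:
# ~7 TiB read at ~130 MiB/s, ignoring any sparse regions that read faster
echo "7 * 1024 * 1024 / 130 / 3600" | bc -l    # ≈ 15.7, i.e. around 16 hours just for reading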

Your log is not complete. At the end there is a summary of how much data could be reused and how much was written.

You can have three bottlenecks with PBS (a quick check for each is sketched below the list)...

1) Network connection between PBS and PVE: if you only have 1 Gbit, 120-130 MB/s sounds quite normal.
2) Drives of the PVE host: if you only have one disk, a 120-130 MB/s read speed sounds quite normal.
3) Drives of the PBS host: if you only have one disk (maybe a USB 3.0 drive), 120-130 MB/s sounds quite normal.
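
A rough way to sanity-check each of these, sketched with placeholder host, pool, and repository names (adjust to your setup):
Code:
# 1) raw network throughput between PVE and PBS (run the server side on the PBS host first)
iperf3 -s                                   # on the PBS host
iperf3 -c pbs.example.lan                   # on the PVE host

# 2) sequential read speed of the source disk on the PVE host (read-only test, zvol path is a placeholder)
dd if=/dev/zvol/HDD-vmdata/vm-118-disk-3 of=/dev/null bs=1M count=8192 status=progress

# 3) TLS/compression/upload performance against the PBS datastore (repository is a placeholder)
proxmox-backup-client benchmark --repository backup@pbs@pbs.example.lan:sicherung-data-vm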


This is how PBS works. If you want faster backups of huge amounts of data daily or even hourly, take a look at some enterprise backup software from Acronis or a competitor... this will save you a lot of time. They have a different approach to backups and don't read all the data from the disk. Correct me if I'm wrong.
 
As you can see in the logs there is no writing. You are reading at a speed of ~120-130 MB/s.
You have 7 TB of data, so do the math... reading 7 TB at 130 MB/s means around 16 hours just to read...
Normally not, because this has been running since Christmas, and the backup time was always only for the difference. So mostly about 50 minutes.
You have to understand that the backup is incremental. It has to READ all the data, compare it, and write only the new data...
PBS works differently, otherwise it would make no sense; a backup of a lot of storage would again take a very long time.
Your log is not complete. At the end there is a summary of how much data could be reused and how much was written.
Yes, I'll provide that when the backup has finished.
You can have three bottlenecks with PBS...

1) Network connection between PBS and PVE: if you only have 1 Gbit, 120-130 MB/s sounds quite normal.
Yes, I only have Gigabit.
2) Drives of the PVE host: if you only have one disk, a 120-130 MB/s read speed sounds quite normal.
I agree.
3) Drives of the PBS host: if you only have one disk (maybe a USB 3.0 drive), 120-130 MB/s sounds quite normal.
The source is a ZFS RAID10 with 10 WD Red Pro HDDs, and for the OS a ZFS RAID1 with two Intel enterprise SSDs.
This is how PBS works. If you want faster backups of huge amounts of data daily or even hourly, take a look at some enterprise backup software from Acronis or a competitor... this will save you a lot of time. They have a different approach to backups and don't read all the data from the disk. Correct me if I'm wrong.

As I mentioned above, under normal conditions this process takes only a very short time. That's why I find it strange that it suddenly writes so slowly, partially under 1 MB/s, and there is no load on the server. As you can see from the second log, it took less than 2 minutes on March 11.

:) Many thanks.
 
PBS works differently, otherwise it would make no sense; a backup of a lot of storage would again take a very long time.


As I mentioned above, under normal conditions this process takes only a very short time. That's why I find it strange that it suddenly writes so slowly, partially under 1 MB/s, and there is no load on the server. As you can see from the second log, it took less than 2 minutes on March 11.

I was playing around with PBS a few weeks ago and had the same problem. I used daily backups and the reading process was endless. I looked in the docs, but there was nothing about how to configure PBS to not read all the data and instead just look for timestamp changes or whatever.

Maybe it's not normal. lol.

According to the log you posted, it seems that all your data is being read; that's why it takes so long.

INFO: 4% (281.6 GiB of 6.9 TiB) in 36m 16s, read: 134.6 MiB/s, write: 0 B/s

= 281.6 GiB at ~134.6 MiB/s = around 36 min... seems legit. No changes found = no writing ;-).

I am still using Acronis backup for now, but if someone knows how to configure PBS to not read all the data again and again... I am interested too ;)
 
Hi,

Have you stopped your VM since the last fast backup?
Because if yes, the backup job needs to re-read all the disks, but only writes the diff to PBS (that could explain the 0 B/s writes).

The backup works with 64 KB blocks, so maybe it takes time to compare them with the chunks in PBS.
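
One quick way to check whether the QEMU process was restarted since the last fast backup is to look at its uptime (a sketch, using VMID 118 from this thread):
Code:
# if the uptime is shorter than the time since the last fast backup, the dirty-bitmap was lost
qm status 118 --verbose | grep uptime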
 
Normally not, because this has been running since Christmas, and the backup time was always only for the difference. So mostly about 50 minutes.
Looking at the log you sent, this backup is a "full" backup, not a differential one, or at least a full re-read of the VM (as others suggested).
It's not a fast differential backup using dirty bitmaps (which are destroyed on a VM restart).

Your full backup:
Code:
INFO: efidisk0: dirty-bitmap status: created new
INFO: scsi0: dirty-bitmap status: created new
INFO: scsi1: dirty-bitmap status: created new
INFO: scsi2: dirty-bitmap status: created new

A backup of one of my VMs, differential using dirty bitmaps:
Code:
INFO: scsi0: dirty-bitmap status: OK (2.3 GiB of 60.0 GiB dirty)
INFO: using fast incremental mode (dirty-bitmap), 2.3 GiB dirty of 60.0 GiB total
 
OK... hmm... I understand, but why should it run a completely new full backup? There are over 20 versions in the PBS store.
 
As spirit said above, it needs to re-read the whole VM (and it does that quickly, at 130 MB/s).
Then it writes only the diffs to PBS.

On the next backup (with dirty bitmaps working), it won't have to re-read the whole VM; it'll use the dirty bitmaps.
 
OK... hmm... I understand, but why should it run a completely new full backup? There are over 20 versions in the PBS store.
When the VM is stopped it loses the fast "dirty-bitmap", with which we can keep track of changes to specific disk blocks; that's why PBS needs to read the whole disk to be sure it misses no chunks. It will still upload only those chunks which are not yet on the PBS server.
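
You can see in the task log of each backup whether the bitmap could be used or had to be recreated (a sketch, assuming the default per-VM vzdump log location on the PVE node):
Code:
# "OK (... dirty)" means the fast incremental mode was used, "created new" means a full re-read
grep 'dirty-bitmap status' /var/log/vzdump/qemu-118.log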
 
Hmm...

What is a helpful, ideal solution for that? With Acronis you can restart as often as you want and it will not read the whole disk/backup again.

Let a file server LXC handle the data and mount it over the network? Windows VMs need a restart for almost any update... (yes, Windows...)
 
Windows VMs need a restart for almost any update... (yes, Windows...)
Note that the dirty-bitmap tracking stays intact even if the VM's OS does a reboot, so a normal "reboot Windows from the inside" is not a problem here. It can also be kept on a VM live migration, so in our production clusters, where production VMs tend to stay, well, running, and we can use live migration and in-guest reboots, we can keep the dirty-bitmap, and thus very fast backups, pretty much forever.
There are other theoretical mitigations, but they carry some risk (e.g., if the disk is "touched" manually by another process).
Note also that the whole backup is not read again; we can get the chunk list more cheaply than that.
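
For example, moving the VM to another node with a live migration instead of shutting it down keeps the bitmap (a sketch; the target node name is a placeholder and shared storage is assumed):
Code:
# online/live migration of VM 118; the dirty-bitmap survives, so the next backup stays fast
qm migrate 118 pve-node2 --online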
 
Maybe it would be great to be able to dump the dirty-bitmap to disk and reload it, for some safe actions like a reboot?
It's on the roadmap, but currently not possible with QEMU on all storage types.

Already working: keeping dirty-bitmaps across live migrations.
 
OK, it is done now. Here is the whole backup log:

Code:
118: 2021-03-13 15:54:33 INFO: Starting Backup of VM 118 (qemu)
118: 2021-03-13 15:54:33 INFO: status = running
118: 2021-03-13 15:54:33 INFO: VM Name: data.tux.lan
118: 2021-03-13 15:54:33 INFO: include disk 'scsi0' 'SSD-vmdata:vm-118-disk-1' 30G
118: 2021-03-13 15:54:33 INFO: include disk 'scsi1' 'SSD-vmdata:vm-118-disk-2' 8G
118: 2021-03-13 15:54:33 INFO: include disk 'scsi2' 'HDD-vmdata:vm-118-disk-3' 7000G
118: 2021-03-13 15:54:33 INFO: include disk 'efidisk0' 'SSD-vmdata:vm-118-disk-0' 128K
118: 2021-03-13 15:54:33 INFO: backup mode: snapshot
118: 2021-03-13 15:54:33 INFO: ionice priority: 7
118: 2021-03-13 15:54:33 INFO: creating Proxmox Backup Server archive 'vm/118/2021-03-13T14:54:33Z'
118: 2021-03-13 15:54:34 INFO: issuing guest-agent 'fs-freeze' command
118: 2021-03-13 15:54:34 INFO: enabling encryption
118: 2021-03-13 15:54:37 INFO: issuing guest-agent 'fs-thaw' command
118: 2021-03-13 15:54:37 INFO: started backup task 'eace5c4d-44fd-4d39-b28d-a183528099be'
118: 2021-03-13 15:54:37 INFO: resuming VM again
118: 2021-03-13 15:54:37 INFO: efidisk0: dirty-bitmap status: existing bitmap was invalid and has been cleared
118: 2021-03-13 15:54:37 INFO: scsi0: dirty-bitmap status: existing bitmap was invalid and has been cleared
118: 2021-03-13 15:54:37 INFO: scsi1: dirty-bitmap status: existing bitmap was invalid and has been cleared
118: 2021-03-13 15:54:37 INFO: scsi2: dirty-bitmap status: existing bitmap was invalid and has been cleared
118: 2021-03-13 15:54:40 INFO:   0% (432.0 MiB of 6.9 TiB) in 3s, read: 144.0 MiB/s, write: 6.7 MiB/s
118: 2021-03-13 16:05:04 INFO:   1% (70.5 GiB of 6.9 TiB) in 10m 27s, read: 114.9 MiB/s, write: 1.9 MiB/s
118: 2021-03-13 16:15:04 INFO:   2% (140.8 GiB of 6.9 TiB) in 20m 27s, read: 120.0 MiB/s, write: 518.8 KiB/s
118: 2021-03-13 16:25:13 INFO:   3% (211.2 GiB of 6.9 TiB) in 30m 36s, read: 118.5 MiB/s, write: 40.4 KiB/s
118: 2021-03-13 16:35:24 INFO:   4% (281.5 GiB of 6.9 TiB) in 40m 47s, read: 117.8 MiB/s, write: 40.2 KiB/s
118: 2021-03-13 16:45:49 INFO:   5% (351.9 GiB of 6.9 TiB) in 51m 12s, read: 115.3 MiB/s, write: 45.9 KiB/s
118: 2021-03-13 16:57:01 INFO:   6% (422.4 GiB of 6.9 TiB) in 1h 2m 24s, read: 107.4 MiB/s, write: 1.2 MiB/s
118: 2021-03-13 17:07:49 INFO:   7% (492.7 GiB of 6.9 TiB) in 1h 13m 12s, read: 111.2 MiB/s, write: 37.9 KiB/s
118: 2021-03-13 17:18:58 INFO:   8% (563.1 GiB of 6.9 TiB) in 1h 24m 21s, read: 107.7 MiB/s, write: 30.6 KiB/s
118: 2021-03-13 17:29:28 INFO:   9% (633.5 GiB of 6.9 TiB) in 1h 34m 51s, read: 114.5 MiB/s, write: 52.0 KiB/s
118: 2021-03-13 17:39:51 INFO:  10% (703.9 GiB of 6.9 TiB) in 1h 45m 14s, read: 115.7 MiB/s, write: 6.6 KiB/s
118: 2021-03-13 17:50:12 INFO:  11% (774.3 GiB of 6.9 TiB) in 1h 55m 35s, read: 116.2 MiB/s, write: 0 B/s
118: 2021-03-13 18:00:55 INFO:  12% (844.6 GiB of 6.9 TiB) in 2h 6m 18s, read: 111.9 MiB/s, write: 0 B/s
118: 2021-03-13 18:11:49 INFO:  13% (914.9 GiB of 6.9 TiB) in 2h 17m 12s, read: 110.1 MiB/s, write: 0 B/s
118: 2021-03-13 18:22:29 INFO:  14% (985.4 GiB of 6.9 TiB) in 2h 27m 52s, read: 112.7 MiB/s, write: 0 B/s
118: 2021-03-13 18:34:59 INFO:  15% (1.0 TiB of 6.9 TiB) in 2h 40m 22s, read: 96.0 MiB/s, write: 671.7 KiB/s
118: 2021-03-13 18:46:34 INFO:  16% (1.1 TiB of 6.9 TiB) in 2h 51m 57s, read: 103.7 MiB/s, write: 960.6 KiB/s
118: 2021-03-13 18:57:28 INFO:  17% (1.2 TiB of 6.9 TiB) in 3h 2m 51s, read: 110.3 MiB/s, write: 1008.3 KiB/s
118: 2021-03-13 19:08:45 INFO:  18% (1.2 TiB of 6.9 TiB) in 3h 14m 8s, read: 106.5 MiB/s, write: 871.2 KiB/s
118: 2021-03-13 19:19:33 INFO:  19% (1.3 TiB of 6.9 TiB) in 3h 24m 56s, read: 111.2 MiB/s, write: 0 B/s
118: 2021-03-13 19:30:47 INFO:  20% (1.4 TiB of 6.9 TiB) in 3h 36m 10s, read: 106.9 MiB/s, write: 24.3 KiB/s
118: 2021-03-13 19:41:53 INFO:  21% (1.4 TiB of 6.9 TiB) in 3h 47m 16s, read: 108.1 MiB/s, write: 0 B/s
118: 2021-03-13 19:52:37 INFO:  22% (1.5 TiB of 6.9 TiB) in 3h 58m, read: 112.0 MiB/s, write: 6.4 KiB/s
118: 2021-03-13 20:03:40 INFO:  23% (1.6 TiB of 6.9 TiB) in 4h 9m 3s, read: 108.6 MiB/s, write: 12.4 KiB/s
118: 2021-03-13 20:14:42 INFO:  24% (1.6 TiB of 6.9 TiB) in 4h 20m 5s, read: 109.0 MiB/s, write: 18.6 KiB/s
118: 2021-03-13 20:25:48 INFO:  25% (1.7 TiB of 6.9 TiB) in 4h 31m 11s, read: 108.1 MiB/s, write: 24.6 KiB/s
118: 2021-03-13 20:37:19 INFO:  26% (1.8 TiB of 6.9 TiB) in 4h 42m 42s, read: 104.3 MiB/s, write: 23.7 KiB/s
118: 2021-03-13 20:48:17 INFO:  27% (1.9 TiB of 6.9 TiB) in 4h 53m 40s, read: 109.5 MiB/s, write: 118.3 KiB/s
118: 2021-03-13 20:59:03 INFO:  28% (1.9 TiB of 6.9 TiB) in 5h 4m 26s, read: 111.7 MiB/s, write: 88.8 KiB/s
118: 2021-03-13 21:10:40 INFO:  29% (2.0 TiB of 6.9 TiB) in 5h 16m 3s, read: 103.3 MiB/s, write: 3.5 MiB/s
118: 2021-03-13 21:21:54 INFO:  30% (2.1 TiB of 6.9 TiB) in 5h 27m 17s, read: 107.0 MiB/s, write: 546.9 KiB/s
118: 2021-03-13 21:33:23 INFO:  31% (2.1 TiB of 6.9 TiB) in 5h 38m 46s, read: 104.6 MiB/s, write: 0 B/s
118: 2021-03-13 21:44:12 INFO:  32% (2.2 TiB of 6.9 TiB) in 5h 49m 35s, read: 111.1 MiB/s, write: 0 B/s
118: 2021-03-13 21:55:13 INFO:  33% (2.3 TiB of 6.9 TiB) in 6h 36s, read: 108.9 MiB/s, write: 0 B/s
118: 2021-03-13 22:05:57 INFO:  34% (2.3 TiB of 6.9 TiB) in 6h 11m 20s, read: 112.0 MiB/s, write: 559.7 KiB/s
118: 2021-03-13 22:17:39 INFO:  35% (2.4 TiB of 6.9 TiB) in 6h 23m 2s, read: 102.6 MiB/s, write: 566.0 KiB/s
118: 2021-03-13 22:29:05 INFO:  36% (2.5 TiB of 6.9 TiB) in 6h 34m 28s, read: 105.0 MiB/s, write: 17.9 KiB/s
118: 2021-03-13 22:40:42 INFO:  37% (2.5 TiB of 6.9 TiB) in 6h 46m 5s, read: 103.6 MiB/s, write: 23.5 KiB/s
118: 2021-03-13 22:52:32 INFO:  38% (2.6 TiB of 6.9 TiB) in 6h 57m 55s, read: 101.4 MiB/s, write: 0 B/s
118: 2021-03-13 23:04:21 INFO:  39% (2.7 TiB of 6.9 TiB) in 7h 9m 44s, read: 101.7 MiB/s, write: 733.7 KiB/s
118: 2021-03-13 23:16:20 INFO:  40% (2.7 TiB of 6.9 TiB) in 7h 21m 43s, read: 100.2 MiB/s, write: 22.8 KiB/s
118: 2021-03-13 23:28:47 INFO:  41% (2.8 TiB of 6.9 TiB) in 7h 34m 10s, read: 96.5 MiB/s, write: 0 B/s
118: 2021-03-13 23:42:08 INFO:  42% (2.9 TiB of 6.9 TiB) in 7h 47m 31s, read: 90.0 MiB/s, write: 0 B/s
118: 2021-03-13 23:56:13 INFO:  43% (3.0 TiB of 6.9 TiB) in 8h 1m 36s, read: 85.3 MiB/s, write: 4.8 KiB/s
118: 2021-03-14 00:08:27 INFO:  44% (3.0 TiB of 6.9 TiB) in 8h 13m 50s, read: 98.2 MiB/s, write: 0 B/s
118: 2021-03-14 00:21:53 INFO:  45% (3.1 TiB of 6.9 TiB) in 8h 27m 16s, read: 89.4 MiB/s, write: 0 B/s
118: 2021-03-14 00:59:56 INFO:  46% (3.2 TiB of 6.9 TiB) in 9h 5m 19s, read: 31.5 MiB/s, write: 3.6 KiB/s
118: 2021-03-14 02:34:03 INFO:  47% (3.2 TiB of 6.9 TiB) in 10h 39m 26s, read: 12.8 MiB/s, write: 5.8 KiB/s
118: 2021-03-14 03:18:14 INFO:  48% (3.3 TiB of 6.9 TiB) in 11h 23m 37s, read: 27.2 MiB/s, write: 0 B/s
118: 2021-03-14 04:04:41 INFO:  49% (3.4 TiB of 6.9 TiB) in 12h 10m 4s, read: 25.9 MiB/s, write: 0 B/s
118: 2021-03-14 05:16:28 INFO:  50% (3.4 TiB of 6.9 TiB) in 13h 21m 51s, read: 16.7 MiB/s, write: 18.1 KiB/s
118: 2021-03-14 05:34:21 INFO:  51% (3.5 TiB of 6.9 TiB) in 13h 39m 44s, read: 67.2 MiB/s, write: 0 B/s
118: 2021-03-14 05:44:36 INFO:  52% (3.6 TiB of 6.9 TiB) in 13h 49m 59s, read: 117.1 MiB/s, write: 0 B/s
118: 2021-03-14 05:54:54 INFO:  53% (3.6 TiB of 6.9 TiB) in 14h 17s, read: 116.6 MiB/s, write: 0 B/s
118: 2021-03-14 06:05:02 INFO:  54% (3.7 TiB of 6.9 TiB) in 14h 10m 25s, read: 118.6 MiB/s, write: 0 B/s
118: 2021-03-14 06:14:37 INFO:  55% (3.8 TiB of 6.9 TiB) in 14h 20m, read: 125.4 MiB/s, write: 0 B/s
118: 2021-03-14 06:24:55 INFO:  56% (3.8 TiB of 6.9 TiB) in 14h 30m 18s, read: 116.6 MiB/s, write: 0 B/s
118: 2021-03-14 06:34:44 INFO:  57% (3.9 TiB of 6.9 TiB) in 14h 40m 7s, read: 122.3 MiB/s, write: 27.8 KiB/s
118: 2021-03-14 06:44:25 INFO:  58% (4.0 TiB of 6.9 TiB) in 14h 49m 48s, read: 124.2 MiB/s, write: 0 B/s
118: 2021-03-14 06:53:45 INFO:  59% (4.1 TiB of 6.9 TiB) in 14h 59m 8s, read: 128.6 MiB/s, write: 0 B/s
118: 2021-03-14 07:03:19 INFO:  60% (4.1 TiB of 6.9 TiB) in 15h 8m 42s, read: 125.5 MiB/s, write: 0 B/s
118: 2021-03-14 07:12:59 INFO:  61% (4.2 TiB of 6.9 TiB) in 15h 18m 22s, read: 124.2 MiB/s, write: 0 B/s
118: 2021-03-14 07:22:54 INFO:  62% (4.3 TiB of 6.9 TiB) in 15h 28m 17s, read: 121.2 MiB/s, write: 0 B/s
118: 2021-03-14 07:32:36 INFO:  63% (4.3 TiB of 6.9 TiB) in 15h 37m 59s, read: 123.8 MiB/s, write: 0 B/s
118: 2021-03-14 07:42:43 INFO:  64% (4.4 TiB of 6.9 TiB) in 15h 48m 6s, read: 118.8 MiB/s, write: 0 B/s
118: 2021-03-14 07:52:59 INFO:  65% (4.5 TiB of 6.9 TiB) in 15h 58m 22s, read: 117.0 MiB/s, write: 26.6 KiB/s
118: 2021-03-14 08:03:13 INFO:  66% (4.5 TiB of 6.9 TiB) in 16h 8m 36s, read: 117.2 MiB/s, write: 6.7 KiB/s
118: 2021-03-14 08:13:29 INFO:  67% (4.6 TiB of 6.9 TiB) in 16h 18m 52s, read: 117.0 MiB/s, write: 13.3 KiB/s
118: 2021-03-14 08:23:51 INFO:  68% (4.7 TiB of 6.9 TiB) in 16h 29m 14s, read: 115.9 MiB/s, write: 0 B/s
118: 2021-03-14 08:34:13 INFO:  69% (4.7 TiB of 6.9 TiB) in 16h 39m 36s, read: 115.8 MiB/s, write: 0 B/s
118: 2021-03-14 08:44:30 INFO:  70% (4.8 TiB of 6.9 TiB) in 16h 49m 53s, read: 117.1 MiB/s, write: 0 B/s
118: 2021-03-14 08:54:50 INFO:  71% (4.9 TiB of 6.9 TiB) in 17h 13s, read: 116.0 MiB/s, write: 0 B/s
118: 2021-03-14 09:05:14 INFO:  72% (4.9 TiB of 6.9 TiB) in 17h 10m 37s, read: 115.6 MiB/s, write: 0 B/s
118: 2021-03-14 09:15:25 INFO:  73% (5.0 TiB of 6.9 TiB) in 17h 20m 48s, read: 117.9 MiB/s, write: 0 B/s
118: 2021-03-14 09:25:33 INFO:  74% (5.1 TiB of 6.9 TiB) in 17h 30m 56s, read: 118.5 MiB/s, write: 0 B/s
118: 2021-03-14 09:35:44 INFO:  75% (5.2 TiB of 6.9 TiB) in 17h 41m 7s, read: 118.0 MiB/s, write: 0 B/s
118: 2021-03-14 09:46:16 INFO:  76% (5.2 TiB of 6.9 TiB) in 17h 51m 39s, read: 113.9 MiB/s, write: 110.2 KiB/s
118: 2021-03-14 09:57:03 INFO:  77% (5.3 TiB of 6.9 TiB) in 18h 2m 26s, read: 111.4 MiB/s, write: 0 B/s
118: 2021-03-14 10:07:23 INFO:  78% (5.4 TiB of 6.9 TiB) in 18h 12m 46s, read: 116.3 MiB/s, write: 0 B/s
118: 2021-03-14 10:17:39 INFO:  79% (5.4 TiB of 6.9 TiB) in 18h 23m 2s, read: 117.0 MiB/s, write: 0 B/s
118: 2021-03-14 10:28:01 INFO:  80% (5.5 TiB of 6.9 TiB) in 18h 33m 24s, read: 115.9 MiB/s, write: 0 B/s
118: 2021-03-14 10:38:38 INFO:  81% (5.6 TiB of 6.9 TiB) in 18h 44m 1s, read: 113.2 MiB/s, write: 0 B/s
118: 2021-03-14 10:49:23 INFO:  82% (5.6 TiB of 6.9 TiB) in 18h 54m 46s, read: 111.7 MiB/s, write: 0 B/s
118: 2021-03-14 11:00:50 INFO:  83% (5.7 TiB of 6.9 TiB) in 19h 6m 13s, read: 104.9 MiB/s, write: 0 B/s
118: 2021-03-14 11:11:39 INFO:  84% (5.8 TiB of 6.9 TiB) in 19h 17m 2s, read: 110.9 MiB/s, write: 0 B/s
118: 2021-03-14 11:22:54 INFO:  85% (5.8 TiB of 6.9 TiB) in 19h 28m 17s, read: 106.8 MiB/s, write: 0 B/s
118: 2021-03-14 11:34:21 INFO:  86% (5.9 TiB of 6.9 TiB) in 19h 39m 44s, read: 105.0 MiB/s, write: 244.4 KiB/s
118: 2021-03-14 11:45:45 INFO:  87% (6.0 TiB of 6.9 TiB) in 19h 51m 8s, read: 105.3 MiB/s, write: 18.0 KiB/s
118: 2021-03-14 11:56:26 INFO:  88% (6.0 TiB of 6.9 TiB) in 20h 1m 49s, read: 112.8 MiB/s, write: 19.2 KiB/s
118: 2021-03-14 12:01:50 INFO:  89% (6.1 TiB of 6.9 TiB) in 20h 7m 13s, read: 223.1 MiB/s, write: 37.9 KiB/s
118: 2021-03-14 12:02:59 INFO:  90% (6.2 TiB of 6.9 TiB) in 20h 8m 22s, read: 1.0 GiB/s, write: 0 B/s
118: 2021-03-14 12:05:06 INFO:  91% (6.3 TiB of 6.9 TiB) in 20h 10m 29s, read: 564.9 MiB/s, write: 0 B/s
118: 2021-03-14 12:06:16 INFO:  92% (6.3 TiB of 6.9 TiB) in 20h 11m 39s, read: 1.0 GiB/s, write: 0 B/s
118: 2021-03-14 12:07:26 INFO:  93% (6.4 TiB of 6.9 TiB) in 20h 12m 49s, read: 1019.4 MiB/s, write: 0 B/s
118: 2021-03-14 12:08:36 INFO:  94% (6.5 TiB of 6.9 TiB) in 20h 13m 59s, read: 1.0 GiB/s, write: 0 B/s
118: 2021-03-14 12:09:44 INFO:  95% (6.5 TiB of 6.9 TiB) in 20h 15m 7s, read: 1.0 GiB/s, write: 0 B/s
118: 2021-03-14 12:10:54 INFO:  96% (6.6 TiB of 6.9 TiB) in 20h 16m 17s, read: 1.0 GiB/s, write: 0 B/s
118: 2021-03-14 12:12:03 INFO:  97% (6.7 TiB of 6.9 TiB) in 20h 17m 26s, read: 1.0 GiB/s, write: 0 B/s
118: 2021-03-14 12:13:10 INFO:  98% (6.7 TiB of 6.9 TiB) in 20h 18m 33s, read: 1.0 GiB/s, write: 0 B/s
118: 2021-03-14 12:14:19 INFO:  99% (6.8 TiB of 6.9 TiB) in 20h 19m 42s, read: 1.0 GiB/s, write: 0 B/s
118: 2021-03-14 12:17:21 INFO: 100% (6.9 TiB of 6.9 TiB) in 20h 22m 44s, read: 394.3 MiB/s, write: 34.3 MiB/s
118: 2021-03-14 12:17:22 INFO: backup is sparse: 1.11 TiB (16%) total zero data
118: 2021-03-14 12:17:22 INFO: backup was done incrementally, reused 6.86 TiB (99%)
118: 2021-03-14 12:17:22 INFO: transferred 6.87 TiB in 73365 seconds (98.2 MiB/s)
118: 2021-03-14 12:17:22 INFO: Finished Backup of VM 118 (20:22:49)

So, if I understand correctly, a VM shutdown or power-off always means a complete read of all the VM's disks at the next backup, right?
 
