I have a small cluster of two PVE nodes, plus a PBS instance running as a VM on a different host with large storage at hand.
I've configured what I think is a basic setup: a datastore, some users and permissions, some basic pruning rules, and PVE set up to back up some of its VMs daily in snapshot mode.
I have some VMs running on NFS storage or local directory storage (qcow2 in both cases). These seem to be backed up by effectively copying only a subset of their disks (i.e. blocks detected as "dirty").
One of the VMs is a Windows machine (not sure if that's relevant), stored on local LVM and carrying a manual snapshot. This VM has been stopped for several days now (reasons unimportant). Every day the backup job runs, it copies 100% of the disk, even though the VM hasn't changed in the interim (i.e. it was never started).
Is there a way to do this more efficiently, in terms of the quantity of data sent over the network? In this case there is no change in the source, hence in my logic there would be no need to copy the whole disk each time.
Code:
107: 2026-02-28 21:00:05 INFO: Starting Backup of VM 107 (qemu)
107: 2026-02-28 21:00:05 INFO: status = stopped
107: 2026-02-28 21:00:05 INFO: backup mode: stop
107: 2026-02-28 21:00:05 INFO: ionice priority: 7
107: 2026-02-28 21:00:05 INFO: VM Name: XXXXXXXXXXX
107: 2026-02-28 21:00:05 INFO: include disk 'scsi0' 'local-lvm:vm-107-disk-0' 100G
107: 2026-02-28 21:00:05 INFO: snapshots found (not included into backup)
107: 2026-02-28 21:00:05 INFO: creating Proxmox Backup Server archive 'vm/107/2026-02-28T19:00:05Z'
107: 2026-02-28 21:00:05 INFO: starting kvm to execute backup task
107: 2026-02-28 21:00:07 INFO: started backup task '05b244b5-6dda-4144-9d5e-810737a70d33'
107: 2026-02-28 21:00:07 INFO: scsi0: dirty-bitmap status: created new
107: 2026-02-28 21:00:10 INFO: 0% (820.0 MiB of 100.0 GiB) in 3s, read: 273.3 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:13 INFO: 1% (1.6 GiB of 100.0 GiB) in 6s, read: 268.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:16 INFO: 2% (2.4 GiB of 100.0 GiB) in 9s, read: 268.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:19 INFO: 3% (3.2 GiB of 100.0 GiB) in 12s, read: 266.7 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:23 INFO: 4% (4.2 GiB of 100.0 GiB) in 16s, read: 267.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:27 INFO: 5% (5.2 GiB of 100.0 GiB) in 20s, read: 268.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:30 INFO: 6% (6.0 GiB of 100.0 GiB) in 23s, read: 268.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:34 INFO: 7% (7.1 GiB of 100.0 GiB) in 27s, read: 268.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:38 INFO: 8% (8.1 GiB of 100.0 GiB) in 31s, read: 271.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:42 INFO: 9% (9.2 GiB of 100.0 GiB) in 35s, read: 273.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:46 INFO: 10% (10.3 GiB of 100.0 GiB) in 39s, read: 271.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:49 INFO: 11% (11.1 GiB of 100.0 GiB) in 42s, read: 273.3 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:53 INFO: 12% (12.1 GiB of 100.0 GiB) in 46s, read: 271.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:00:57 INFO: 13% (13.2 GiB of 100.0 GiB) in 50s, read: 267.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:01 INFO: 15% (15.1 GiB of 100.0 GiB) in 54s, read: 509.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:04 INFO: 16% (16.2 GiB of 100.0 GiB) in 57s, read: 348.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:07 INFO: 19% (19.2 GiB of 100.0 GiB) in 1m, read: 1.0 GiB/s, write: 0 B/s
107: 2026-02-28 21:01:10 INFO: 20% (20.0 GiB of 100.0 GiB) in 1m 3s, read: 265.3 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:14 INFO: 21% (21.0 GiB of 100.0 GiB) in 1m 7s, read: 267.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:18 INFO: 22% (22.1 GiB of 100.0 GiB) in 1m 11s, read: 266.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:22 INFO: 23% (23.1 GiB of 100.0 GiB) in 1m 15s, read: 267.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:26 INFO: 24% (24.2 GiB of 100.0 GiB) in 1m 19s, read: 263.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:30 INFO: 25% (25.2 GiB of 100.0 GiB) in 1m 23s, read: 264.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:34 INFO: 26% (26.2 GiB of 100.0 GiB) in 1m 27s, read: 266.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:37 INFO: 27% (27.0 GiB of 100.0 GiB) in 1m 30s, read: 268.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:41 INFO: 28% (28.1 GiB of 100.0 GiB) in 1m 34s, read: 270.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:45 INFO: 29% (29.1 GiB of 100.0 GiB) in 1m 38s, read: 268.0 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:50 INFO: 32% (32.1 GiB of 100.0 GiB) in 1m 43s, read: 620.8 MiB/s, write: 0 B/s
107: 2026-02-28 21:01:53 INFO: 42% (42.0 GiB of 100.0 GiB) in 1m 46s, read: 3.3 GiB/s, write: 0 B/s
107: 2026-02-28 21:01:56 INFO: 53% (53.1 GiB of 100.0 GiB) in 1m 49s, read: 3.7 GiB/s, write: 0 B/s
107: 2026-02-28 21:01:59 INFO: 59% (59.5 GiB of 100.0 GiB) in 1m 52s, read: 2.2 GiB/s, write: 0 B/s
107: 2026-02-28 21:02:02 INFO: 66% (66.0 GiB of 100.0 GiB) in 1m 55s, read: 2.2 GiB/s, write: 0 B/s
107: 2026-02-28 21:02:05 INFO: 76% (76.8 GiB of 100.0 GiB) in 1m 58s, read: 3.6 GiB/s, write: 0 B/s
107: 2026-02-28 21:02:08 INFO: 83% (83.7 GiB of 100.0 GiB) in 2m 1s, read: 2.3 GiB/s, write: 0 B/s
107: 2026-02-28 21:02:11 INFO: 94% (94.6 GiB of 100.0 GiB) in 2m 4s, read: 3.6 GiB/s, write: 0 B/s
107: 2026-02-28 21:02:13 INFO: 100% (100.0 GiB of 100.0 GiB) in 2m 6s, read: 2.7 GiB/s, write: 0 B/s
107: 2026-02-28 21:02:13 INFO: backup is sparse: 73.19 GiB (73%) total zero data
107: 2026-02-28 21:02:13 INFO: backup was done incrementally, reused 100.00 GiB (100%)
107: 2026-02-28 21:02:13 INFO: transferred 100.00 GiB in 126 seconds (812.7 MiB/s)
107: 2026-02-28 21:02:13 INFO: stopping kvm after backup task
107: 2026-02-28 21:02:15 INFO: adding notes to backup
107: 2026-02-28 21:02:15 INFO: Finished Backup of VM 107 (00:02:10)
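For what it's worth, the log above already hints at what crosses the network: every progress line reports `write: 0 B/s`, and the summary says 100.00 GiB (100%) was reused, so the disk is read locally but deduplicated against existing chunks rather than transferred. A quick way to sanity-check any task log for this pattern (a sketch; `backup.log` is a hypothetical filename standing in for a saved copy of the log above):

```shell
# Hypothetical excerpt of the task log above, saved as backup.log
cat > backup.log <<'EOF'
INFO: scsi0: dirty-bitmap status: created new
INFO: 100% (100.0 GiB of 100.0 GiB) in 2m 6s, read: 2.7 GiB/s, write: 0 B/s
INFO: backup was done incrementally, reused 100.00 GiB (100%)
EOF

# Count progress lines where nothing was written out to PBS.
# If every progress line shows "write: 0 B/s", the data was only
# read locally and matched against already-stored chunks.
grep -c 'write: 0 B/s' backup.log
```

In the full log above, all the write figures are 0 B/s, which suggests the cost here is local read I/O and backup time, not network traffic.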