[SOLVED] Backup takes very long. Any way to optimize?

gno

My setup looks like this:

One WD Red Plus 4TB SATA drive running PVE
local-lvm holds 2 LXC containers:
a file server with about 400GB of data
Nextcloud with about 270GB of data

Two mirrored Samsung PM893 480GB SATA SSDs
These hold 16 'smaller' containers that do not generate large amounts of data themselves:
PBS, DHCP, DNS, proxy, mail, database, Keycloak…

Another WD Red Plus 4TB SATA drive serves as the backup disk. It is passed through to the PBS container via bind mount.


Backing up the file server's 400GB takes about 2.5 hours, the Nextcloud's 270GB about 2 hours, even though the data has hardly changed compared to the previous backup. That seems rather long to me.
I understand that in principle every backup is a full backup. But from the second backup onwards, no large amounts of data should need to be written, since the chunks are already present.

Here is the backup log:

Code:
vzdump 103 105 107 108 109 111 112 113 114 116 117 120 122 124 --mode snapshot --mailnotification always --mailto admin@xxxx.net --storage PBS --notes-template '{{guestname}}' --quiet 1 --node pve

103: 2023-10-25 02:00:18 INFO: Starting Backup of VM 103 (lxc)
103: 2023-10-25 02:00:18 INFO: status = running
103: 2023-10-25 02:00:19 INFO: CT Name: srv
103: 2023-10-25 02:00:19 INFO: including mount point rootfs ('/') in backup
103: 2023-10-25 02:00:20 INFO: backup mode: snapshot
103: 2023-10-25 02:00:20 INFO: ionice priority: 7
103: 2023-10-25 02:00:20 INFO: create storage snapshot 'vzdump'
103: 2023-10-25 02:00:24 INFO: creating Proxmox Backup Server archive 'ct/103/2023-10-25T00:00:17Z'
103: 2023-10-25 02:00:25 INFO: run: /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp30380_103/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 103 --backup-time 1698192017 --repository root@pam@pbs.xxxx.net:store1
103: 2023-10-25 02:00:25 INFO: Starting backup: ct/103/2023-10-25T00:00:17Z
103: 2023-10-25 02:00:25 INFO: Client name: pve
103: 2023-10-25 02:00:25 INFO: Starting backup protocol: Wed Oct 25 02:00:25 2023
103: 2023-10-25 02:00:25 INFO: Downloading previous manifest (Tue Oct 24 02:00:14 2023)
103: 2023-10-25 02:00:25 INFO: Upload config file '/var/tmp/vzdumptmp30380_103/etc/vzdump/pct.conf' to 'root@pam@pbs.xxxx.net:8007:store1' as pct.conf.blob
103: 2023-10-25 02:00:25 INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@pbs.xxxx.net:8007:store1' as root.pxar.didx
103: 2023-10-25 04:19:08 INFO: root.pxar: had to backup 276.134 MiB of 402.897 GiB (compressed 175.374 MiB) in 8323.18s
103: 2023-10-25 04:19:08 INFO: root.pxar: average backup speed: 33.973 KiB/s
103: 2023-10-25 04:19:08 INFO: root.pxar: backup was done incrementally, reused 402.627 GiB (99.9%)
103: 2023-10-25 04:19:08 INFO: Uploaded backup catalog (6.498 MiB)
103: 2023-10-25 04:19:09 INFO: Duration: 8324.02s
103: 2023-10-25 04:19:09 INFO: End Time: Wed Oct 25 04:19:09 2023
103: 2023-10-25 04:19:09 INFO: adding notes to backup
103: 2023-10-25 04:19:10 INFO: cleanup temporary 'vzdump' snapshot
103: 2023-10-25 04:19:13 INFO: Finished Backup of VM 103 (02:18:56)

105: 2023-10-25 04:19:13 INFO: Starting Backup of VM 105 (lxc)
105: 2023-10-25 04:19:13 INFO: status = running
105: 2023-10-25 04:19:13 INFO: CT Name: web01
105: 2023-10-25 04:19:13 INFO: including mount point rootfs ('/') in backup
105: 2023-10-25 04:19:13 INFO: backup mode: snapshot
105: 2023-10-25 04:19:13 INFO: ionice priority: 7
105: 2023-10-25 04:19:13 INFO: create storage snapshot 'vzdump'
105: 2023-10-25 04:19:15 INFO: creating Proxmox Backup Server archive 'ct/105/2023-10-25T02:19:13Z'
105: 2023-10-25 04:19:15 INFO: run: /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp30380_105/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 105 --backup-time 1698200353 --repository root@pam@pbs.xxxx.net:store1
105: 2023-10-25 04:19:15 INFO: Starting backup: ct/105/2023-10-25T02:19:13Z
105: 2023-10-25 04:19:15 INFO: Client name: pve
105: 2023-10-25 04:19:15 INFO: Starting backup protocol: Wed Oct 25 04:19:15 2023
105: 2023-10-25 04:19:15 INFO: Downloading previous manifest (Tue Oct 24 04:26:08 2023)
105: 2023-10-25 04:19:15 INFO: Upload config file '/var/tmp/vzdumptmp30380_105/etc/vzdump/pct.conf' to 'root@pam@pbs.xxxx.net:8007:store1' as pct.conf.blob
105: 2023-10-25 04:19:15 INFO: Upload directory '/mnt/vzsnap0' to 'root@pam@pbs.xxxx.net:8007:store1' as root.pxar.didx
105: 2023-10-25 06:17:58 INFO: root.pxar: had to backup 407.509 MiB of 268.999 GiB (compressed 121.187 MiB) in 7122.64s
105: 2023-10-25 06:17:58 INFO: root.pxar: average backup speed: 58.586 KiB/s
105: 2023-10-25 06:17:58 INFO: root.pxar: backup was done incrementally, reused 268.601 GiB (99.9%)
105: 2023-10-25 06:17:58 INFO: Uploaded backup catalog (7.657 MiB)
105: 2023-10-25 06:17:58 INFO: Duration: 7123.08s
105: 2023-10-25 06:17:58 INFO: End Time: Wed Oct 25 06:17:58 2023
105: 2023-10-25 06:17:58 INFO: adding notes to backup
105: 2023-10-25 06:17:58 INFO: cleanup temporary 'vzdump' snapshot
105: 2023-10-25 06:18:03 INFO: Finished Backup of VM 105 (01:58:50)

CPU usage and IO delay are very high during the backup:

[Screenshot: pve_server_auslastung_bei_backup.png]

On the PBS container it looks like this:

[Screenshot: pbs_server_auslastung_bei_backup_1.png]

Is this due to the rather weak CPU (Core i3-4130T), or is the hard disk the bottleneck?

How could I reduce the backup times?
 
Hello,
your bottleneck is most likely the read IO. It is true that when little data has changed, it does not have to be sent to the PBS and written there, but the entire content still has to be read and chunked. There are efforts to optimize this in the future, see https://lists.proxmox.com/pipermail/pbs-devel/2023-October/006800.html

Until that lands, a setup with VMs that do dirty-bitmap tracking is at an advantage when it comes to backup performance against PBS targets.
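As a rough sanity check of that theory, one could measure how fast the source HDD can actually be read sequentially, for example with fio (a non-destructive sketch; the device name is a placeholder and must be adjusted):

Code:
# (fio is available via: apt install fio)
# Read-only sequential read test against the disk that holds the container data.
# Replace /dev/sdX with the actual device; --readonly ensures nothing is written.
fio --name=seqread --filename=/dev/sdX --readonly --direct=1 \
    --rw=read --bs=4M --ioengine=libaio --iodepth=8 --runtime=60

For comparison, the log above works out to roughly 50 MiB/s effective read rate (about 403 GiB in ~2.3 hours), which would point more towards a seek-bound read of many small files than towards a CPU limit.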
 
I wanted to move the data out of the two containers (file server and Nextcloud) onto SSDs anyway, to separate containers from data.
Would that significantly reduce the backup duration?
 
Moving the LXC mount point that holds the data to fast storage would probably speed up the backup, since decent NVMe SSDs offer better IOPS and data rates.
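Purely as an illustration of what that could look like (storage name, size and path are placeholders, VMID 105 is taken from the log above): a dedicated data mount point on an SSD-backed storage can be added to a container, e.g.

Code:
# Hypothetical example: add a 300G data mount point on an SSD-backed storage
# to container 105 and keep it included in backups (backup=1).
pct set 105 -mp0 ssd-pool:300,mp=/srv/data,backup=1

On recent PVE versions an existing container volume can also be moved to another storage afterwards (pct move-volume), so the data does not have to be copied by hand.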
 
@Chris
I would generally welcome any speed-up of this process as well. Since I keep a lot of data on "spinning rust" (simply because the €/TB is far cheaper than NAND), I have meanwhile switched from SATA to SAS disks and also added NVMe SSDs as a "special device" to the pool.

Unfortunately, I notice none of these improvements. An initial pass over data that has already been backed up (~6 TB) still takes several hours and has an extremely negative impact on the server's overall IO (IO delay between 60 and 70%).
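For what it's worth, one simple way to see whether it is really the HDDs that are saturated during such a run is iostat from the sysstat package (just an example invocation; watch the utilization and wait-time columns of the HDDs versus the special-device SSDs):

Code:
# Print extended per-device statistics every 5 seconds while the backup runs.
apt install sysstat
iostat -x 5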
 
@Chris Is there a way to use this patch, e.g. in a PBS test repository?
Today I also checked how long such a job takes for me: 6.25 h at an average IO delay of 70%:

Code:
2023-11-04 14:55:50 INFO: Starting Backup of VM 109 (qemu)
2023-11-04 14:55:50 INFO: status = running
2023-11-04 14:55:50 INFO: VM Name: WindowsServer
2023-11-04 14:55:50 INFO: include disk 'scsi0' 'Micron_7400_21463341AA0D-64k:vm-109-disk-0' 200G
2023-11-04 14:55:50 INFO: include disk 'scsi1' 'Data:vm-109-disk-0' 2T
2023-11-04 14:55:50 INFO: exclude disk 'scsi2' 'Media:vm-109-disk-0' (backup=no)
2023-11-04 14:55:50 INFO: exclude disk 'scsi3' 'Data:vm-109-disk-3' (backup=no)
2023-11-04 14:55:50 INFO: include disk 'scsi4' 'Data:vm-109-disk-1' 4T
2023-11-04 14:55:50 INFO: exclude disk 'scsi5' 'Media:vm-109-disk-1' (backup=no)
2023-11-04 14:55:50 INFO: include disk 'scsi6' 'Data:vm-109-disk-4' 500G
2023-11-04 14:55:50 INFO: include disk 'efidisk0' 'Micron_7400_21463341AA0D-64k:vm-109-disk-1' 1M
2023-11-04 14:55:50 INFO: backup mode: snapshot
2023-11-04 14:55:50 INFO: ionice priority: 7
2023-11-04 14:55:50 INFO: snapshots found (not included into backup)
2023-11-04 14:55:50 INFO: creating Proxmox Backup Server archive 'vm/109/2023-11-04T13:55:50Z'
2023-11-04 14:55:50 INFO: issuing guest-agent 'fs-freeze' command
2023-11-04 14:55:59 INFO: issuing guest-agent 'fs-thaw' command
2023-11-04 14:56:03 INFO: started backup task '53c7234f-c1be-4edf-8c5c-2a9dd6fceff0'
2023-11-04 14:56:03 INFO: resuming VM again
2023-11-04 14:56:03 INFO: efidisk0: dirty-bitmap status: created new
2023-11-04 14:56:03 INFO: scsi0: dirty-bitmap status: created new
2023-11-04 14:56:03 INFO: scsi1: dirty-bitmap status: created new
2023-11-04 14:56:03 INFO: scsi4: dirty-bitmap status: created new
2023-11-04 14:56:03 INFO: scsi6: dirty-bitmap status: created new
2023-11-04 14:56:06 INFO:   0% (1.5 GiB of 6.7 TiB) in 3s, read: 496.0 MiB/s, write: 21.3 MiB/s
2023-11-04 14:59:55 INFO:   1% (68.5 GiB of 6.7 TiB) in 3m 52s, read: 299.7 MiB/s, write: 7.9 MiB/s
2023-11-04 15:04:01 INFO:   2% (137.0 GiB of 6.7 TiB) in 7m 58s, read: 285.3 MiB/s, write: 4.0 MiB/s
2023-11-04 15:05:32 INFO:   3% (206.3 GiB of 6.7 TiB) in 9m 29s, read: 779.3 MiB/s, write: 2.7 MiB/s
2023-11-04 15:06:36 INFO:   4% (274.3 GiB of 6.7 TiB) in 10m 33s, read: 1.1 GiB/s, write: 384.0 KiB/s
2023-11-04 15:08:25 INFO:   5% (343.5 GiB of 6.7 TiB) in 12m 22s, read: 650.3 MiB/s, write: 3.0 MiB/s
2023-11-04 15:09:52 INFO:   6% (410.7 GiB of 6.7 TiB) in 13m 49s, read: 790.8 MiB/s, write: 4.4 MiB/s
2023-11-04 15:11:35 INFO:   7% (481.0 GiB of 6.7 TiB) in 15m 32s, read: 699.1 MiB/s, write: 1.9 MiB/s
2023-11-04 15:14:47 INFO:   8% (547.8 GiB of 6.7 TiB) in 18m 44s, read: 355.9 MiB/s, write: 1.7 MiB/s
2023-11-04 15:19:22 INFO:   9% (616.1 GiB of 6.7 TiB) in 23m 19s, read: 254.4 MiB/s, write: 1.2 MiB/s
2023-11-04 15:23:51 INFO:  10% (684.5 GiB of 6.7 TiB) in 27m 48s, read: 260.6 MiB/s, write: 426.3 KiB/s
2023-11-04 15:28:22 INFO:  11% (752.9 GiB of 6.7 TiB) in 32m 19s, read: 258.2 MiB/s, write: 256.9 KiB/s
2023-11-04 15:32:50 INFO:  12% (821.4 GiB of 6.7 TiB) in 36m 47s, read: 261.7 MiB/s, write: 259.8 KiB/s
2023-11-04 15:37:24 INFO:  13% (889.9 GiB of 6.7 TiB) in 41m 21s, read: 256.0 MiB/s, write: 239.2 KiB/s
2023-11-04 15:41:59 INFO:  14% (958.4 GiB of 6.7 TiB) in 45m 56s, read: 255.3 MiB/s, write: 5.5 MiB/s
2023-11-04 15:46:29 INFO:  15% (1.0 TiB of 6.7 TiB) in 50m 26s, read: 259.1 MiB/s, write: 652.3 KiB/s
2023-11-04 15:51:25 INFO:  16% (1.1 TiB of 6.7 TiB) in 55m 22s, read: 237.1 MiB/s, write: 442.8 KiB/s
2023-11-04 15:56:06 INFO:  17% (1.1 TiB of 6.7 TiB) in 1h 3s, read: 249.3 MiB/s, write: 408.1 KiB/s
2023-11-04 16:00:45 INFO:  18% (1.2 TiB of 6.7 TiB) in 1h 4m 42s, read: 250.8 MiB/s, write: 1.1 MiB/s
2023-11-04 16:05:04 INFO:  19% (1.3 TiB of 6.7 TiB) in 1h 9m 1s, read: 270.7 MiB/s, write: 2.6 MiB/s
2023-11-04 16:09:17 INFO:  20% (1.3 TiB of 6.7 TiB) in 1h 13m 14s, read: 277.4 MiB/s, write: 728.5 KiB/s
2023-11-04 16:13:41 INFO:  21% (1.4 TiB of 6.7 TiB) in 1h 17m 38s, read: 264.8 MiB/s, write: 1.5 MiB/s
2023-11-04 16:18:01 INFO:  22% (1.5 TiB of 6.7 TiB) in 1h 21m 58s, read: 270.3 MiB/s, write: 456.9 KiB/s
2023-11-04 16:22:28 INFO:  23% (1.5 TiB of 6.7 TiB) in 1h 26m 25s, read: 262.1 MiB/s, write: 276.1 KiB/s
2023-11-04 16:27:05 INFO:  24% (1.6 TiB of 6.7 TiB) in 1h 31m 2s, read: 253.3 MiB/s, write: 354.9 KiB/s
2023-11-04 16:31:41 INFO:  25% (1.7 TiB of 6.7 TiB) in 1h 35m 38s, read: 253.9 MiB/s, write: 311.7 KiB/s
2023-11-04 16:36:19 INFO:  26% (1.7 TiB of 6.7 TiB) in 1h 40m 16s, read: 252.3 MiB/s, write: 795.6 KiB/s
2023-11-04 16:41:06 INFO:  27% (1.8 TiB of 6.7 TiB) in 1h 45m 3s, read: 244.0 MiB/s, write: 4.4 MiB/s
2023-11-04 16:45:50 INFO:  28% (1.9 TiB of 6.7 TiB) in 1h 49m 47s, read: 246.3 MiB/s, write: 605.7 KiB/s
2023-11-04 16:50:38 INFO:  29% (1.9 TiB of 6.7 TiB) in 1h 54m 35s, read: 243.3 MiB/s, write: 184.9 KiB/s
2023-11-04 16:55:54 INFO:  30% (2.0 TiB of 6.7 TiB) in 1h 59m 51s, read: 222.3 MiB/s, write: 233.3 KiB/s
2023-11-04 17:01:13 INFO:  31% (2.1 TiB of 6.7 TiB) in 2h 5m 10s, read: 219.2 MiB/s, write: 231.1 KiB/s
2023-11-04 17:06:35 INFO:  32% (2.1 TiB of 6.7 TiB) in 2h 10m 32s, read: 217.9 MiB/s, write: 63.6 KiB/s
2023-11-04 17:11:59 INFO:  33% (2.2 TiB of 6.7 TiB) in 2h 15m 56s, read: 215.9 MiB/s, write: 177.0 KiB/s
2023-11-04 17:17:38 INFO:  34% (2.3 TiB of 6.7 TiB) in 2h 21m 35s, read: 207.0 MiB/s, write: 229.6 KiB/s
2023-11-04 17:23:18 INFO:  35% (2.3 TiB of 6.7 TiB) in 2h 27m 15s, read: 205.8 MiB/s, write: 144.6 KiB/s
2023-11-04 17:29:07 INFO:  36% (2.4 TiB of 6.7 TiB) in 2h 33m 4s, read: 201.3 MiB/s, write: 152.6 KiB/s
2023-11-04 17:35:01 INFO:  37% (2.5 TiB of 6.7 TiB) in 2h 38m 58s, read: 197.8 MiB/s, write: 150.4 KiB/s
2023-11-04 17:40:58 INFO:  38% (2.5 TiB of 6.7 TiB) in 2h 44m 55s, read: 196.4 MiB/s, write: 195.0 KiB/s
2023-11-04 17:47:03 INFO:  39% (2.6 TiB of 6.7 TiB) in 2h 51m, read: 192.2 MiB/s, write: 325.4 KiB/s
2023-11-04 17:53:12 INFO:  40% (2.7 TiB of 6.7 TiB) in 2h 57m 9s, read: 189.8 MiB/s, write: 144.3 KiB/s
2023-11-04 17:57:50 INFO:  41% (2.7 TiB of 6.7 TiB) in 3h 1m 47s, read: 251.7 MiB/s, write: 117.9 KiB/s
2023-11-04 18:02:30 INFO:  42% (2.8 TiB of 6.7 TiB) in 3h 6m 27s, read: 250.9 MiB/s, write: 160.9 KiB/s
2023-11-04 18:06:54 INFO:  43% (2.9 TiB of 6.7 TiB) in 3h 10m 51s, read: 265.0 MiB/s, write: 589.6 KiB/s
2023-11-04 18:11:25 INFO:  44% (2.9 TiB of 6.7 TiB) in 3h 15m 22s, read: 259.0 MiB/s, write: 60.5 KiB/s
2023-11-04 18:16:21 INFO:  45% (3.0 TiB of 6.7 TiB) in 3h 20m 18s, read: 236.6 MiB/s, write: 27.7 KiB/s
2023-11-04 18:20:58 INFO:  46% (3.1 TiB of 6.7 TiB) in 3h 24m 55s, read: 253.4 MiB/s, write: 532.3 KiB/s
2023-11-04 18:25:17 INFO:  47% (3.1 TiB of 6.7 TiB) in 3h 29m 14s, read: 270.2 MiB/s, write: 15.8 KiB/s
2023-11-04 18:29:36 INFO:  48% (3.2 TiB of 6.7 TiB) in 3h 33m 33s, read: 271.1 MiB/s, write: 126.5 KiB/s
2023-11-04 18:33:57 INFO:  49% (3.3 TiB of 6.7 TiB) in 3h 37m 54s, read: 268.2 MiB/s, write: 31.4 KiB/s
2023-11-04 18:38:14 INFO:  50% (3.3 TiB of 6.7 TiB) in 3h 42m 11s, read: 272.0 MiB/s, write: 0 B/s
2023-11-04 18:42:28 INFO:  51% (3.4 TiB of 6.7 TiB) in 3h 46m 25s, read: 276.1 MiB/s, write: 322.5 KiB/s
2023-11-04 18:46:43 INFO:  52% (3.5 TiB of 6.7 TiB) in 3h 50m 40s, read: 275.4 MiB/s, write: 48.2 KiB/s
2023-11-04 18:50:56 INFO:  53% (3.5 TiB of 6.7 TiB) in 3h 54m 53s, read: 276.8 MiB/s, write: 16.2 KiB/s
2023-11-04 18:55:17 INFO:  54% (3.6 TiB of 6.7 TiB) in 3h 59m 14s, read: 268.5 MiB/s, write: 94.2 KiB/s
2023-11-04 18:59:42 INFO:  55% (3.7 TiB of 6.7 TiB) in 4h 3m 39s, read: 264.7 MiB/s, write: 15.5 KiB/s
2023-11-04 19:04:10 INFO:  56% (3.7 TiB of 6.7 TiB) in 4h 8m 7s, read: 261.6 MiB/s, write: 30.6 KiB/s
2023-11-04 19:08:38 INFO:  57% (3.8 TiB of 6.7 TiB) in 4h 12m 35s, read: 261.2 MiB/s, write: 45.9 KiB/s
2023-11-04 19:13:08 INFO:  58% (3.9 TiB of 6.7 TiB) in 4h 17m 5s, read: 259.1 MiB/s, write: 121.4 KiB/s
2023-11-04 19:17:28 INFO:  59% (3.9 TiB of 6.7 TiB) in 4h 21m 25s, read: 270.4 MiB/s, write: 47.3 KiB/s
2023-11-04 19:21:47 INFO:  60% (4.0 TiB of 6.7 TiB) in 4h 25m 44s, read: 270.3 MiB/s, write: 63.3 KiB/s
2023-11-04 19:26:02 INFO:  61% (4.1 TiB of 6.7 TiB) in 4h 29m 59s, read: 275.1 MiB/s, write: 16.1 KiB/s
2023-11-04 19:30:14 INFO:  62% (4.1 TiB of 6.7 TiB) in 4h 34m 11s, read: 277.8 MiB/s, write: 0 B/s
2023-11-04 19:34:23 INFO:  63% (4.2 TiB of 6.7 TiB) in 4h 38m 20s, read: 281.5 MiB/s, write: 49.3 KiB/s
2023-11-04 19:38:33 INFO:  64% (4.3 TiB of 6.7 TiB) in 4h 42m 30s, read: 279.9 MiB/s, write: 131.1 KiB/s
2023-11-04 19:42:44 INFO:  65% (4.3 TiB of 6.7 TiB) in 4h 46m 41s, read: 279.3 MiB/s, write: 49.0 KiB/s
2023-11-04 19:46:59 INFO:  66% (4.4 TiB of 6.7 TiB) in 4h 50m 56s, read: 275.3 MiB/s, write: 64.3 KiB/s
2023-11-04 19:51:12 INFO:  67% (4.5 TiB of 6.7 TiB) in 4h 55m 9s, read: 276.2 MiB/s, write: 793.3 KiB/s
2023-11-04 19:55:29 INFO:  68% (4.5 TiB of 6.7 TiB) in 4h 59m 26s, read: 273.5 MiB/s, write: 270.9 KiB/s
2023-11-04 19:59:46 INFO:  69% (4.6 TiB of 6.7 TiB) in 5h 3m 43s, read: 272.6 MiB/s, write: 223.1 KiB/s
2023-11-04 20:04:04 INFO:  70% (4.7 TiB of 6.7 TiB) in 5h 8m 1s, read: 272.0 MiB/s, write: 142.9 KiB/s
2023-11-04 20:08:19 INFO:  71% (4.7 TiB of 6.7 TiB) in 5h 12m 16s, read: 274.7 MiB/s, write: 128.5 KiB/s
2023-11-04 20:12:32 INFO:  72% (4.8 TiB of 6.7 TiB) in 5h 16m 29s, read: 276.9 MiB/s, write: 760.9 KiB/s
2023-11-04 20:16:52 INFO:  73% (4.9 TiB of 6.7 TiB) in 5h 20m 49s, read: 269.6 MiB/s, write: 961.0 KiB/s
2023-11-04 20:21:09 INFO:  74% (4.9 TiB of 6.7 TiB) in 5h 25m 6s, read: 272.1 MiB/s, write: 3.5 MiB/s
2023-11-04 20:25:29 INFO:  75% (5.0 TiB of 6.7 TiB) in 5h 29m 26s, read: 269.8 MiB/s, write: 2.0 MiB/s
2023-11-04 20:29:47 INFO:  76% (5.1 TiB of 6.7 TiB) in 5h 33m 44s, read: 271.5 MiB/s, write: 1.8 MiB/s
2023-11-04 20:33:58 INFO:  77% (5.1 TiB of 6.7 TiB) in 5h 37m 55s, read: 279.0 MiB/s, write: 277.4 KiB/s
2023-11-04 20:38:04 INFO:  78% (5.2 TiB of 6.7 TiB) in 5h 42m 1s, read: 285.7 MiB/s, write: 33.3 KiB/s
2023-11-04 20:42:10 INFO:  79% (5.3 TiB of 6.7 TiB) in 5h 46m 7s, read: 284.8 MiB/s, write: 899.1 KiB/s
2023-11-04 20:46:17 INFO:  80% (5.3 TiB of 6.7 TiB) in 5h 50m 14s, read: 283.0 MiB/s, write: 1.4 MiB/s
2023-11-04 20:50:26 INFO:  81% (5.4 TiB of 6.7 TiB) in 5h 54m 23s, read: 282.3 MiB/s, write: 312.5 KiB/s
2023-11-04 20:54:25 INFO:  82% (5.5 TiB of 6.7 TiB) in 5h 58m 22s, read: 293.2 MiB/s, write: 68.6 KiB/s
2023-11-04 20:58:19 INFO:  83% (5.5 TiB of 6.7 TiB) in 6h 2m 16s, read: 299.1 MiB/s, write: 87.5 KiB/s
2023-11-04 20:59:56 INFO:  84% (5.6 TiB of 6.7 TiB) in 6h 3m 53s, read: 744.7 MiB/s, write: 84.5 KiB/s
2023-11-04 21:00:17 INFO:  85% (5.7 TiB of 6.7 TiB) in 6h 4m 14s, read: 3.2 GiB/s, write: 1.9 MiB/s
2023-11-04 21:00:37 INFO:  86% (5.7 TiB of 6.7 TiB) in 6h 4m 34s, read: 3.4 GiB/s, write: 204.8 KiB/s
2023-11-04 21:00:56 INFO:  87% (5.8 TiB of 6.7 TiB) in 6h 4m 53s, read: 3.5 GiB/s, write: 0 B/s
2023-11-04 21:01:14 INFO:  88% (5.9 TiB of 6.7 TiB) in 6h 5m 11s, read: 4.0 GiB/s, write: 0 B/s
2023-11-04 21:01:31 INFO:  89% (6.0 TiB of 6.7 TiB) in 6h 5m 28s, read: 3.9 GiB/s, write: 0 B/s
2023-11-04 21:01:48 INFO:  90% (6.0 TiB of 6.7 TiB) in 6h 5m 45s, read: 4.0 GiB/s, write: 0 B/s
2023-11-04 21:02:08 INFO:  91% (6.1 TiB of 6.7 TiB) in 6h 6m 5s, read: 3.4 GiB/s, write: 0 B/s
2023-11-04 21:02:27 INFO:  92% (6.2 TiB of 6.7 TiB) in 6h 6m 24s, read: 3.7 GiB/s, write: 215.6 KiB/s
2023-11-04 21:02:44 INFO:  93% (6.2 TiB of 6.7 TiB) in 6h 6m 41s, read: 4.0 GiB/s, write: 0 B/s
2023-11-04 21:03:01 INFO:  94% (6.3 TiB of 6.7 TiB) in 6h 6m 58s, read: 4.0 GiB/s, write: 240.9 KiB/s
2023-11-04 21:03:18 INFO:  95% (6.4 TiB of 6.7 TiB) in 6h 7m 15s, read: 4.2 GiB/s, write: 0 B/s
2023-11-04 21:03:34 INFO:  96% (6.4 TiB of 6.7 TiB) in 6h 7m 31s, read: 4.3 GiB/s, write: 0 B/s
2023-11-04 21:03:49 INFO:  97% (6.5 TiB of 6.7 TiB) in 6h 7m 46s, read: 4.4 GiB/s, write: 0 B/s
2023-11-04 21:07:02 INFO:  98% (6.6 TiB of 6.7 TiB) in 6h 10m 59s, read: 357.5 MiB/s, write: 64.9 MiB/s
2023-11-04 21:09:52 INFO:  99% (6.6 TiB of 6.7 TiB) in 6h 13m 49s, read: 410.9 MiB/s, write: 36.7 MiB/s
2023-11-04 21:11:24 INFO: 100% (6.7 TiB of 6.7 TiB) in 6h 15m 21s, read: 761.7 MiB/s, write: 8.0 MiB/s
2023-11-04 21:11:25 INFO: backup is sparse: 3.27 TiB (48%) total zero data
2023-11-04 21:11:25 INFO: backup was done incrementally, reused 6.65 TiB (99%)
2023-11-04 21:11:25 INFO: transferred 6.68 TiB in 22522 seconds (311.2 MiB/s)
2023-11-04 21:11:25 INFO: adding notes to backup
2023-11-04 21:11:25 INFO: Finished Backup of VM 109 (06:15:35)
 
Are we talking about VM or LXC backups here? For LXC backups the following applies: if the data still lives on the HDDs, the special device provides hardly any benefit either, since the PBS client still has to read all of the data.
 
This is a VM backup, so the behavior I described above does not apply here in the same way. What is noticeable, however, is that the dirty bitmap was newly created for all virtual disks, so all blocks have to be read and backed up again. Was the VM stopped between the backup runs? Is the QEMU guest agent installed in the VM?

The patches I mentioned above only affect LXC backups and are not packaged yet, so a bit of patience is still required on that front.
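To rule out the guest-agent side, a quick check could look like this (using VMID 109 from the log above; purely illustrative):

Code:
# Does the guest agent inside the VM respond?
qm agent 109 ping
# Is the agent option enabled in the VM configuration?
qm config 109 | grep agent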
 
Yes, in my case it is only about VMs.
I generally run into this problem whenever I reboot the host for a new kernel, or when I occasionally change something in the VM config (e.g. more RAM or more CPU cores).
That forces me to restart the VM, and it "forgets" the dirty bitmap state.

At the next backup I then face exactly the dilemma described: it takes a long time and causes a very high IO load from all the reading.

And yes, the QEMU guest agent is running in its latest version inside the VM.
 
When a VM is stopped, the dirty bitmap has to be regenerated. To avoid stopping the VM, it should be moved to another host via live migration.
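Illustratively (the target node name is a placeholder, and a cluster is required), such a live migration could be triggered with:

Code:
# Live-migrate VM 109 to another cluster node while it keeps running,
# so the in-memory dirty bitmaps are preserved.
# With disks on node-local storage, --with-local-disks is additionally required.
qm migrate 109 <target-node> --online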
 
Couldn't this dirty bitmap be written out to a file so that it only needs to be read back in?
If I had 7 TB to copy back and forth, I would also end up with a correspondingly high IO load on the server.
 
That would of course be a very elegant solution.
Is it already in a dev pipeline somewhere, or is it currently still "only" an idea?
 
QEMU 8.2 would in principle support this:

Bitmap Persistence


As outlined in Supported Image Formats, QEMU can persist bitmaps to qcow2 files. Demonstrated in Creation: block-dirty-bitmap-add, passing persistent: true to block-dirty-bitmap-add will persist that bitmap to disk.


Persistent bitmaps will be automatically loaded into memory upon load, and will be written back to disk upon close. Their usage should be mostly transparent.


However, if QEMU does not get a chance to close the file cleanly, the bitmap will be marked as +inconsistent at next load and considered unsafe to use for any operation. At this point, the only valid operation on such bitmaps is block-dirty-bitmap-remove.


Losing a bitmap in this way does not invalidate any existing backups that have been made from this bitmap, but no further backups will be able to be issued for this chain.

https://www.qemu.org/docs/master/interop/bitmaps.html#bitmap-persistence
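For reference, the QMP call from the linked documentation that creates such a persistent bitmap looks roughly like this (node and bitmap names are placeholders; per the docs this only works with qcow2-backed drives and is not something PVE currently wires up):

Code:
{ "execute": "block-dirty-bitmap-add",
  "arguments": {
    "node": "drive0",
    "name": "bitmap0",
    "persistent": true
  }
}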
 
As far as I know, nobody is actively working out a solution for this at the moment, so no promises regarding a roadmap for this feature. Please file any suggestions in Bugzilla; the forum is only of limited use for that.
 
That would probably require quite a bit of knowledge about QEMU and PVE internals (or the willingness to acquire it ;)). But joining the discussion on whether planned interfaces/limitations/... make sense from a user's point of view, and testing patches once they are written, is also worth a lot!
 
Is there any way to support you on this, e.g. by digging into the code yourself?
Code contributions are always welcome, but implementation details should probably be discussed beforehand to save everyone involved time. But I see the issue is already being actively discussed.
 