Proxmox VM backup taking forever

jslanier

Well-Known Member
Hey folks,
My disk speeds look fine, but the backup process is reading through all of the empty space on one of the VM's two provisioned disks. The VM has a 5 TB thin-provisioned disk with roughly 700 GB in use, and the backup reads the entire 5 TB, so it takes about 8 hours instead of under 3. Is there an option I can change so the backup skips this unused space?

Here is what has happened so far in this ongoing backup (local-zfs is an SSD mirror; spinners is a striped mirror of four 4 TB spinning disks):

INFO: starting new backup job: vzdump 104 --mailto [redacted] --mode snapshot --compress lzo --remove 0 --storage backup --node pve-defender1
INFO: Starting Backup of VM 104 (qemu)
INFO: Backup started at 2019-11-16 12:39:29
INFO: status = running
INFO: update VM 104: -lock backup
INFO: VM Name: 2019-defender1
INFO: include disk 'scsi0' 'local-zfs:vm-104-disk-0' 80G
INFO: include disk 'scsi2' 'spinners:vm-104-disk-1' 5T
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/backup/dump/vzdump-qemu-104-2019_11_16-12_39_29.vma.lzo'
INFO: started backup task 'd3a1a931-2ab9-4654-aac1-c90f1e7f3188'
INFO: status: 0% (547422208/5583457484800), sparse 0% (163282944), duration 3, read/write 182/128 MB/s
INFO: status: 1% (55883268096/5583457484800), sparse 0% (33349271552), duration 412, read/write 135/54 MB/s
INFO: status: 2% (111740649472/5583457484800), sparse 1% (63748554752), duration 827, read/write 134/61 MB/s
INFO: status: 3% (167593246720/5583457484800), sparse 1% (64424816640), duration 1478, read/write 85/84 MB/s
INFO: status: 4% (223397609472/5583457484800), sparse 1% (64531513344), duration 2211, read/write 76/75 MB/s
INFO: status: 5% (279219077120/5583457484800), sparse 1% (64701673472), duration 2870, read/write 84/84 MB/s
INFO: status: 6% (335086813184/5583457484800), sparse 1% (64720166912), duration 3494, read/write 89/89 MB/s
INFO: status: 7% (390894190592/5583457484800), sparse 1% (64774176768), duration 4118, read/write 89/89 MB/s
INFO: status: 8% (446685184000/5583457484800), sparse 1% (65772040192), duration 4760, read/write 86/85 MB/s
INFO: status: 9% (502522904576/5583457484800), sparse 1% (65774034944), duration 5388, read/write 88/88 MB/s
INFO: status: 10% (558374649856/5583457484800), sparse 1% (65776046080), duration 6016, read/write 88/88 MB/s
INFO: status: 11% (614221676544/5583457484800), sparse 1% (66545344512), duration 6640, read/write 89/88 MB/s
INFO: status: 12% (670050615296/5583457484800), sparse 1% (66551488512), duration 7279, read/write 87/87 MB/s
INFO: status: 13% (725878636544/5583457484800), sparse 1% (66619899904), duration 7912, read/write 88/88 MB/s
INFO: status: 14% (781735624704/5583457484800), sparse 1% (66725986304), duration 8571, read/write 84/84 MB/s
INFO: status: 15% (837524455424/5583457484800), sparse 1% (79338131456), duration 9201, read/write 88/68 MB/s
INFO: status: 16% (893525884928/5583457484800), sparse 2% (135339560960), duration 9477, read/write 202/0 MB/s
INFO: status: 17% (949309669376/5583457484800), sparse 3% (191123345408), duration 9744, read/write 208/0 MB/s
INFO: status: 18% (1005149749248/5583457484800), sparse 4% (246963425280), duration 10015, read/write 206/0 MB/s
INFO: status: 19% (1061026725888/5583457484800), sparse 5% (302840401920), duration 10291, read/write 202/0 MB/s
INFO: status: 20% (1116851404800/5583457484800), sparse 6% (358665080832), duration 10562, read/write 205/0 MB/s
INFO: status: 21% (1172614348800/5583457484800), sparse 7% (414428024832), duration 10836, read/write 203/0 MB/s
INFO: status: 22% (1228368904192/5583457484800), sparse 8% (470182580224), duration 11101, read/write 210/0 MB/s
INFO: status: 23% (1284295229440/5583457484800), sparse 9% (526108905472), duration 11365, read/write 211/0 MB/s
INFO: status: 24% (1340210216960/5583457484800), sparse 10% (582023892992), duration 11618, read/write 221/0 MB/s
INFO: status: 25% (1395936460800/5583457484800), sparse 11% (637750136832), duration 11879, read/write 213/0 MB/s
INFO: status: 26% (1451705434112/5583457484800), sparse 12% (693519110144), duration 12137, read/write 216/0 MB/s
INFO: status: 27% (1507682680832/5583457484800), sparse 13% (749496356864), duration 12401, read/write 212/0 MB/s
--I redacted this section to save characters--
INFO: status: 77% (4299387043840/5583457484800), sparse 63% (3541200719872), duration 25600, read/write 217/0 MB/s
INFO: status: 78% (4355189833728/5583457484800), sparse 64% (3597003509760), duration 25863, read/write 212/0 MB/s
INFO: status: 79% (4410971193344/5583457484800), sparse 65% (3652784869376), duration 26128, read/write 210/0 MB/s
INFO: status: 80% (4466867437568/5583457484800), sparse 66% (3708681113600), duration 26388, read/write 214/0 MB/s
INFO: status: 81% (4522643619840/5583457484800), sparse 67% (3764457295872), duration 26659, read/write 205/0 MB/s
INFO: status: 82% (4578602254336/5583457484800), sparse 68% (3820415930368), duration 26918, read/write 216/0 MB/s
INFO: status: 83% (4634400129024/5583457484800), sparse 69% (3876213805056), duration 27183, read/write 210/0 MB/s
INFO: status: 84% (4690288902144/5583457484800), sparse 70% (3932102578176), duration 27458, read/write 203/0 MB/s
INFO: status: 85% (4746025238528/5583457484800), sparse 71% (3987838914560), duration 27728, read/write 206/0 MB/s
INFO: status: 86% (4801867218944/5583457484800), sparse 72% (4043680894976), duration 27995, read/write 209/0 MB/s
INFO: status: 87% (4857647661056/5583457484800), sparse 73% (4099461337088), duration 28272, read/write 201/0 MB/s
INFO: status: 88% (4913448157184/5583457484800), sparse 74% (4155261833216), duration 28549, read/write 201/0 MB/s
INFO: status: 89% (4969282666496/5583457484800), sparse 75% (4211096342528), duration 28818, read/write 207/0 MB/s
INFO: status: 90% (5025177337856/5583457484800), sparse 76% (4266991013888), duration 29080, read/write 213/0 MB/s
INFO: status: 91% (5081107398656/5583457484800), sparse 77% (4322921074688), duration 29344, read/write 211/0 MB/s
INFO: status: 92% (5136851599360/5583457484800), sparse 78% (4378665275392), duration 29603, read/write 215/0 MB/s

As you can see, all of the used disk space had been read by the 9201-second mark (about 2.6 hours); the roughly six hours since then have been spent reading through empty space. Is there anything I can do to prevent this (other than creating a smaller disk and copying the data over)?
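Back-of-the-envelope, that six-hour figure checks out, assuming the ~210 MB/s sparse-read rate from the log holds for the rest of the disk:

# rough estimate only: 5 TB volsize minus ~700 GiB used, scanned at ~210 MiB/s, in hours
echo $(( (5583457484800 - 700*1024*1024*1024) / (210*1024*1024) / 3600 ))
# prints 6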

Thanks,
Stan
 
Proxmox VM config for the disk:
scsi2: spinners:vm-104-disk-1,discard=on,size=5T
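The disk already has discard=on; whether TRIM actually reaches ZFS also depends on the SCSI controller type, which can be checked on the node with something like:

qm config 104 | grep -E 'scsihw|scsi2'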

zfs properties of the disk:
root@pve-defender1:~# zfs get all spinners/vm-104-disk-1
NAME PROPERTY VALUE SOURCE
spinners/vm-104-disk-1 type volume -
spinners/vm-104-disk-1 creation Tue Oct 23 9:00 2018 -
spinners/vm-104-disk-1 used 706G -
spinners/vm-104-disk-1 available 5.52T -
spinners/vm-104-disk-1 referenced 697G -
spinners/vm-104-disk-1 compressratio 1.00x -
spinners/vm-104-disk-1 reservation none default
spinners/vm-104-disk-1 volsize 5T local
spinners/vm-104-disk-1 volblocksize 4K -
spinners/vm-104-disk-1 checksum on default
spinners/vm-104-disk-1 compression lz4 inherited from spinners
spinners/vm-104-disk-1 readonly off default
spinners/vm-104-disk-1 createtxg 29712 -
spinners/vm-104-disk-1 copies 1 default
spinners/vm-104-disk-1 refreservation none local
spinners/vm-104-disk-1 guid 10118160324989352140 -
spinners/vm-104-disk-1 primarycache all default
spinners/vm-104-disk-1 secondarycache all default
spinners/vm-104-disk-1 usedbysnapshots 9.27G -
spinners/vm-104-disk-1 usedbydataset 697G -
spinners/vm-104-disk-1 usedbychildren 0B -
spinners/vm-104-disk-1 usedbyrefreservation 0B -
spinners/vm-104-disk-1 logbias latency default
spinners/vm-104-disk-1 objsetid 1159 -
spinners/vm-104-disk-1 dedup off default
spinners/vm-104-disk-1 mlslabel none default
spinners/vm-104-disk-1 sync standard default
spinners/vm-104-disk-1 refcompressratio 1.00x -
spinners/vm-104-disk-1 written 12.5G -
spinners/vm-104-disk-1 logicalused 700G -
spinners/vm-104-disk-1 logicalreferenced 691G -
spinners/vm-104-disk-1 volmode default default
spinners/vm-104-disk-1 snapshot_limit none default
spinners/vm-104-disk-1 snapshot_count none default
spinners/vm-104-disk-1 snapdev hidden default
spinners/vm-104-disk-1 context none default
spinners/vm-104-disk-1 fscontext none default
spinners/vm-104-disk-1 defcontext none default
spinners/vm-104-disk-1 rootcontext none default
spinners/vm-104-disk-1 redundant_metadata all default
spinners/vm-104-disk-1 encryption off default
spinners/vm-104-disk-1 keylocation none default
spinners/vm-104-disk-1 keyformat none default
spinners/vm-104-disk-1 pbkdf2iters 0 default
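So the zvol itself is already thin at the ZFS level (volsize 5T, but only 706G used). For a quick look at just the numbers that matter here, something like this works:

zfs get -H -o property,value volsize,used,logicalused,volblocksize spinners/vm-104-disk-1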
 
You need to enable thin provisioning on the PVE ZFS storage configuration, and of course your VM needs to issue TRIM commands; this does not happen automatically. For example, run fstrim inside the VM.
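On PVE that is the sparse flag on the zfspool storage; roughly like this in /etc/pve/storage.cfg (storage name taken from this thread, adjust to your setup):

zfspool: spinners
        pool spinners
        content images
        sparse 1

And inside a Linux guest, for example:

fstrim -av    # trim all mounted filesystems that support discard

For a Windows guest the equivalent is Optimize-Volume -ReTrim in PowerShell.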
 
I already had thin provisioning enabled in my ZFS storage configuration. Inside the VM, I ran Optimize-Volume -ReTrim on both of my disks, but it did not seem to make a difference in the backup time. Any other suggestions?
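In case it is useful to anyone debugging the same thing, this is what I am checking on the host to see whether the trim actually frees blocks on the zvol:

zfs get used,referenced,logicalused spinners/vm-104-disk-1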
 
