Q: vzdump.conf - changes are ignored / how to make change effective?

fortechitsolutions

Hi, I wonder if anyone can comment / has seen this before? I'm trying to tune the config for a VM backup on a client Proxmox host (Proxmox 5.4.13), hosted on a classic OVH box: 2x2TB SW RAID, Xeon CPU with 8 vCPU threads, plenty of RAM, and an NFS "backup storage mount" service as a storage tank for VM backups.

For many months the base config of the platform has worked fine, but with one annoyance: the nightly backup job of a ~100GB KVM-based Linux VM takes upwards of 12 hours to complete (snapshot-mode backup of the VM). I was trying to improve the config to make things better:

(a) use local temp storage first for the backup, which is presumably faster than pushing over vanilla gig-ether to the NFS storage target
(b) install, enable, and configure 'pigz' instead of gzip-based backup compression.

The settings I adjusted in /etc/vzdump.conf are the two lines shown below, which are no longer commented out.

Code:
root@ns528691:/etc# cat vzdump.conf
# vzdump default settings
# TDC mar-2020 enable tmpdir and pigz see if this speeds up backups.

tmpdir: /var/lib/vz/dumptemp
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#size: MB
#stdexcludes: BOOLEAN
#mailto: ADDRESSLIST
#maxfiles: N
#script: FILENAME
#exclude-path: PATHLIST
pigz: 7


The backup job ran again last night as per the previously configured backup schedule, and according to the logs, from what I can tell, it simply ignored my two config adjustments in vzdump.conf.

I can't tell if I'm meant to restart some service, or do something else, to force the scheduled backup job to pick up the config adjustments.

Any pointers or hints are appreciated.

Thanks,

Tim


Output from VZDUMP job below for ref:

Code:
root@ns528691:/mnt/pve/nfs-backups/dump# cat vzdump-qemu-103-2020_03_06-18_15_02.log
2020-03-06 18:15:02 INFO: Starting Backup of VM 103 (qemu)
2020-03-06 18:15:02 INFO: status = running
2020-03-06 18:15:04 INFO: update VM 103: -lock backup
2020-03-06 18:15:04 INFO: VM Name: deb9-vmhost
2020-03-06 18:15:04 INFO: include disk 'virtio0' 'local:103/vm-103-disk-1.qcow2' 200G
2020-03-06 18:15:04 INFO: backup mode: snapshot
2020-03-06 18:15:04 INFO: ionice priority: 7
2020-03-06 18:15:04 INFO: creating archive '/mnt/pve/nfs-backups/dump/vzdump-qemu-103-2020_03_06-18_15_02.vma.gz'
2020-03-06 18:15:06 INFO: started backup task '90eb9bdf-45b0-4e9f-a076-c7696978dfb5'
2020-03-06 18:15:09 INFO: status: 0% (292683776/214748364800), sparse 0% (135176192), duration 3, read/write 97/52 MB/s
...
2020-03-07 00:14:04 INFO: status: 31% (66572255232/214748364800), sparse 1% (2216632320), duration 21538, read/write 2/2 MB/s
...
2020-03-07 06:09:55 INFO: status: 61% (132235591680/214748364800), sparse 3% (7793594368), duration 42889, read/write 13/2 MB/s
...
2020-03-07 06:57:45 INFO: status: 98% (211153977344/214748364800), sparse 37% (79585349632), duration 45759, read/write 3/2 MB/s
2020-03-07 06:57:46 INFO: status: 100% (214748364800/214748364800), sparse 38% (83179737088), duration 45760, read/write 3594/0 MB/s
2020-03-07 06:57:46 INFO: transferred 214748 MB in 45760 seconds (4 MB/s)
2020-03-07 06:58:08 INFO: archive file size: 106.63GB
2020-03-07 06:58:08 INFO: delete old backup '/mnt/pve/nfs-backups/dump/vzdump-qemu-103-2020_02_28-18_15_02.vma.lzo'
2020-03-07 06:58:57 INFO: Finished Backup of VM 103 (12:43:55)
root@ns528691:/mnt/pve/nfs-backups/dump#

Note: while the VM disk is a '200GB' disk, it has only ~100GB of content and is thin-provisioned, so it is not fully allocated (hence my describing it as a ~100GB disk in the problem description above).
 
(a) use local temp storage first for the backup, which is presumably faster than pushing over vanilla gig-ether to the NFS storage target

tmpdir is not used for the whole backup, only for some temporary meta files; this can be a bit misleading, I know.
One way to do this now would be to use a local directory storage as the target and move the backup over to the remote one with a backup hook script, roughly along the lines of the sketch below.
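
A minimal sketch of such a hook, not a drop-in: the phase names and the TARFILE/TARGET/LOGFILE environment variables should be checked against the shipped example script for your PVE version, and the /mnt/pve/nfs-backups/dump destination is simply the NFS path from this thread.

Code:
#!/bin/bash
# Hedged sketch: move a finished local dump over to the NFS mount.
# Check phase names and variables against
# /usr/share/doc/pve-manager/examples/vzdump-hook-script.pl on your host.

phase="$1"
dest="/mnt/pve/nfs-backups/dump"   # NFS destination from this thread; adjust as needed

case "$phase" in
    backup-end)
        # vzdump exports the finished archive path (TARFILE on older releases,
        # TARGET on newer ones)
        mv "${TARGET:-$TARFILE}" "$dest/"
        ;;
    log-end)
        # the per-VM log file is available once logging for this VM is done
        cp "$LOGFILE" "$dest/" 2>/dev/null || true
        ;;
esac

exit 0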

(b) install, enable, and configure 'pigz' instead of gzip-based backup compression.
/mnt/pve/nfs-backups/dump/vzdump-qemu-103-2020_02_28-18_15_02.vma.lzo

pigz is a gzip-based compressor, but you're using LZO as the compressor for this backup job. With LZO this setting cannot do anything.
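
To see the pigz setting do anything, the job itself has to compress with gzip. A minimal one-off test from the command line could look like this (the storage ID 'nfs-backups' is an assumption based on the mount path in this thread):

Code:
# with "pigz: 7" already set in /etc/vzdump.conf, force gzip compression for this run
vzdump 103 --compress gzip --mode snapshot --storage nfs-backups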
 
Thanks so much for the comments. Glad to better understand the way the tmpdir works, i.e. it is not going to help me much here unless I switch to dumping to a local path. Is there a sample in the docs somewhere hinting at how I can set up a post-backup 'hook' to move the backup output from (local) to (NFS)?

Otherwise, the gzip vs LZO thing is weird. I did change the compression in my 'backups' config via the GUI yesterday from LZO to GZIP, and saved it.

Double-checked it just now and it was still showing "GZIP" as the selected compression method. I flipped it to none, saved, then flipped it back to GZIP.

I guess I'll see tonight whether the change to GZIP sticks or whether it keeps ignoring me.

Bit of a puzzle.

Thanks for your help - it is greatly appreciated!


Tim
 
Is there a sample in the docs somewhere hinting at how I can set up a post-backup 'hook' to move the backup output from (local) to (NFS)?

A basic example hook script can be found on every Proxmox VE host, check out:
/usr/share/doc/pve-manager/examples/vzdump-hook-script.pl

You can configure this script to be called globally in /etc/vzdump.conf,
or per vzdump call with the '--script' switch.
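
For example (the /root/vzdump-hook.pl path below is just a placeholder for wherever you keep your copy of the script):

Code:
# globally, in /etc/vzdump.conf:
script: /root/vzdump-hook.pl

# or for a single run:
vzdump 103 --script /root/vzdump-hook.pl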
 
Small follow-up: this worked fine. I made a copy of the example hook script, made sure it was chmod 700, and then it was executed properly when the vzdump jobs called it. If the script was not executable the jobs failed silently, but a small command-line backup test helped debug that. The steps were roughly as sketched below.
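
Roughly the steps, for anyone landing here later (the /root/vzdump-hook.pl location is just an example):

Code:
cp /usr/share/doc/pve-manager/examples/vzdump-hook-script.pl /root/vzdump-hook.pl
chmod 700 /root/vzdump-hook.pl            # must be executable, or the job fails silently
vzdump 103 --script /root/vzdump-hook.pl  # small one-off test run before the scheduled job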

So this worked as desired. What I ended up figuring out at a high level is that, with a ~250Mbps rate-limited Proxmox NIC, doing backups to an NFS storage tank is simply not super fast; that is a bottleneck I cannot work around if I insist on pushing that much data.

So I tweaked my backup strategy: I don't do full VM dumps as often, and instead do an rsync file-based approach from inside the VM, onto a target endpoint that is effectively in the same NFS storage tank where I was dumping my VM dumps before. Unsurprisingly the backup jobs are way faster, since they are a few orders of magnitude smaller now. So it is all good.
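
The rsync job inside the VM is roughly of this shape (host, user, and paths are placeholders, not the actual setup):

Code:
# run from inside the guest; only changed files get pushed, so nightly runs stay small
rsync -a --delete /srv/data/ backupuser@backup-endpoint:/nfs-backups/deb9-vmhost/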

Anyhoo, just wanted to post a brief footnote to the thread, and also a thanks for all the help.

Tim
 
If the script was not executable the jobs failed silently

Seems like something we could improve with a nicer error message pointing to the actual culprit; I opened https://bugzilla.proxmox.com/show_bug.cgi?id=2634

So I tweaked my backup strategy: I don't do full VM dumps as often, and instead do an rsync file-based approach from inside the VM, onto a target endpoint that is effectively in the same NFS storage tank where I was dumping my VM dumps before. Unsurprisingly the backup jobs are way faster, since they are a few orders of magnitude smaller now. So it is all good.

Yeah, guest-internal backups can often be a lot more efficient, as there it can be decided what is relevant, and one is working at the filesystem level, not the block-device level.

Anyway, great that you resolved your issues and thanks for reporting back!
 
