vzdump speed is incorrect?

whinpo

Renowned Member
Jan 11, 2010
I've spent my afternoon trying to understand why my backups are so slow... 3 Mbit/s...

Here is what I've noticed:

When I back up a KVM machine located on an iSCSI LVM to iSCSI storage, I get:
Code:
neptune2:~#  vzdump --node 1 --snapshot --storage Proxmox-Backup --bwlimit 100000 --mailto xxx@wwwww.xom 101
INFO: starting new backup job: vzdump --node 1 --snapshot --storage Proxmox-Backup --bwlimit 100000 --mailto xxx@wwwww.xom 101
INFO: Starting Backup of VM 101 (qemu)
INFO: running
INFO: status = running
INFO: backup mode: snapshot
INFO: bandwidth limit: 100000 KB/s
INFO:   Logical volume "vzsnap-neptune2-0" created
INFO: creating archive '/var/lib/vz/Proxmox-Backup/vzdump-qemu-101-2010_03_23-16_42_48.tar'
INFO: adding '/var/lib/vz/Proxmox-Backup/vzdump-qemu-101-2010_03_23-16_42_48.tmp/qemu-server.conf' to archive ('qemu-server.conf')
INFO: adding '/dev/Proxmox-SR/vzsnap-neptune2-0' to archive ('vm-disk-virtio0.raw')
INFO: Total bytes written: 1647938560 (3.36 MiB/s)
INFO: archive file size: 1.53GB
INFO: delete old backup '/var/lib/vz/Proxmox-Backup/vzdump-qemu-101-2010_03_23-12_13_50.tgz'
INFO:   Logical volume "vzsnap-neptune2-0" successfully removed
got signal
INFO: Finished Backup of VM 101 (00:07:54)
INFO: Backup job finished successfuly

the VM has a 32GB disk.

Checking with iptraf (http://packages.debian.org/stable/net/iptraf) I can see:
1) the data volume sent is 39411 MB
2) the iSCSI NIC constantly runs at ~700,000 Kbit/s...


In the directory, I can see the tar file is 1.6 GB, so it was sent to the iSCSI backup storage in just a few seconds.

(39411 MB x 1024) x 8 = 322,854,912 Kbit
7:54 min -> 474 s
322,854,912 / 474 = 681,128 Kbit/s = 665 Mbit/s -> what iptraf tells me
665 Mbit/s = 83 MB/s

vzdump:
1,647,938,560 B / 1024 = 1,609,315 KB
7:54 min -> 474 s
1,609,315 / 474 = 3,395 KB/s = 3.3 MB/s
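For clarity, the two calculations can be written out in one place. This is a small sketch using only the numbers already quoted in this thread (the iptraf reading, the vzdump byte count, and the 00:07:54 runtime):

```python
# Reproducing the two throughput figures above; all inputs are the
# values reported in this thread.

elapsed_s = 7 * 60 + 54                    # 00:07:54 -> 474 s

# What iptraf saw crossing the iSCSI NIC
wire_mb = 39411
wire_kbit_per_s = wire_mb * 1024 * 8 / elapsed_s
print(f"wire:   {wire_kbit_per_s:,.0f} Kbit/s = {wire_kbit_per_s / 1024:.0f} Mbit/s")

# What vzdump reports: archive size divided by total runtime
tar_bytes = 1647938560
tar_mib_per_s = tar_bytes / (1024 * 1024) / elapsed_s
print(f"vzdump: {tar_mib_per_s:.2f} MiB/s")
```

The two numbers differ by a factor of ~200 because they measure different things: the wire figure counts every byte read over iSCSI (including the scan passes over the full 32 GB disk), while vzdump's figure is just the final tar size over the total elapsed time.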

shouldn't vzdump indicate the speed based on the volume of data actually transferred instead of the tar file size?
 
There are several read processes before data is written, so you cannot compare the 3.3 MiB/s against your network speed. For example, vzdump reads the whole disk to see where the data is before the real backup can start.

I agree that the number finally displayed in the log is not clear at first glance, but you have already worked out how it is calculated - what exactly do you suggest showing in the log?
 
I've added in bold the things I think could make it clearer (for example, to save someone from spending an afternoon on performance tests trying to understand why he only gets 3 Mbit/s... :))


Code:
neptune2:~#  vzdump --node 1 --snapshot --storage Proxmox-Backup --bwlimit 100000 --mailto xxx@wwwww.xom 101
INFO: starting new backup job: vzdump --node 1 --snapshot --storage Proxmox-Backup --bwlimit 100000 --mailto xxx@wwwww.xom 101
INFO: Starting Backup of VM 101 (qemu)
INFO: running
INFO: status = running
INFO: backup mode: snapshot
INFO: bandwidth limit: 100000 KB/s
[B]INFO: excluded path(s) = none[/B]
[B]INFO: total disk size to backup : XXGB[/B]
INFO:   Logical volume "vzsnap-neptune2-0" created
INFO: creating archive '/var/lib/vz/Proxmox-Backup/vzdump-qemu-101-2010_03_23-16_42_48.tar'
INFO: adding '/var/lib/vz/Proxmox-Backup/vzdump-qemu-101-2010_03_23-16_42_48.tmp/qemu-server.conf' to archive ('qemu-server.conf')
INFO: adding '/dev/Proxmox-SR/vzsnap-neptune2-0' to archive ('vm-disk-virtio0.raw')
[B]INFO: Total bytes received/read : XXGB in YY mins (XXMbits/s)[/B] (real read speed)
[B]INFO: Total bytes written: 1647938560 (XXMbits/s) [/B]   (real write speed)
INFO: archive file size: 1.53GB
INFO: delete old backup '/var/lib/vz/Proxmox-Backup/vzdump-qemu-101-2010_03_23-12_13_50.tgz'
INFO:   Logical volume "vzsnap-neptune2-0" successfully removed
got signal
INFO: Finished Backup of VM 101 (00:07:54)
INFO: Backup job finished successfuly
 
there are several read processes before data is written

Hi tom,

why are there multiple read processes, and is there a way to disable them? I have a daily backup that took 21:33 hours today for 467 GB (compressed data from local LVM to CIFS).

At the moment I am testing this with compression done by Btrfs on the backup server instead of compression by vzdump.

But I think doing a full-speed backup with only one read pass would be faster.

esco
 
why are there multiple read processes, and is there a way to disable them?

vzdump scans for sparse data. Unfortunately, sparse file information needs to be written at the start of a 'tar' file, so we need to read twice.

I have searched a long time for an archive format which can store sparse file information in a reasonable way - but it seems such a format does not exist (maybe we should just invent our own format for VM image data).
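A toy sketch of the scan (hypothetical code, not vzdump's implementation) makes the ordering problem concrete: the map of zero regions is only complete after the whole image has been read once, yet tar needs that map before the first data byte is archived.

```python
import io

# Illustration of the pre-scan problem: build a map of zero vs. data
# regions by reading the whole stream. Only AFTER this full pass can a
# tar-style sparse header be written, forcing a second read for the
# actual data.

CHUNK = 64 * 1024

def sparse_map(stream, chunk=CHUNK):
    """Scan a binary stream; return [(offset, length, is_zero), ...]."""
    regions = []
    zero_chunk = bytes(chunk)
    offset = 0
    while True:
        block = stream.read(chunk)
        if not block:
            break
        is_zero = block == zero_chunk[:len(block)]
        if regions and regions[-1][2] == is_zero:
            # Merge with the previous region of the same kind
            off, length, _ = regions[-1]
            regions[-1] = (off, length + len(block), is_zero)
        else:
            regions.append((offset, len(block), is_zero))
        offset += len(block)
    return regions

# Example image: 128 KiB hole, 64 KiB of data, 64 KiB hole
image = bytes(131072) + b"x" * 65536 + bytes(65536)
print(sparse_map(io.BytesIO(image)))
```

A real implementation on Linux could avoid reading the zero runs by asking the filesystem directly (`SEEK_DATA`/`SEEK_HOLE`), but the ordering constraint in the tar format remains the same.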
 
vzdump scans for sparse data. Unfortunately, sparse file information needs to be written at the start of a 'tar' file, so we need to read twice.

Hi dietmar,

thanks for the information. This explains a lot. But I think if the backup is compressed there is no need to handle sparse data - a long run of "0"s can be compressed very effectively ;)
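That part of the claim is easy to sanity-check with Python's gzip module (a quick sketch about zero runs in general, not a statement about vzdump's pipeline):

```python
import gzip

# A multi-MiB run of zero bytes collapses to a few KiB under gzip.

zeros = bytes(10 * 1024 * 1024)            # 10 MiB of 0x00
packed = gzip.compress(zeros)
print(f"{len(zeros)} bytes -> {len(packed)} bytes "
      f"({len(packed) / len(zeros):.4%})")
```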

So there should be an option to disable the sparse handling, or to do it automatically when the backup is compressed.

esco
 
thanks for the information. This explains a lot. But I think if the backup is compressed there is no need to handle sparse data - a long run of "0"s can be compressed very effectively ;)

yes

So there should be an option to disable the sparse handling, or to do it automatically when the backup is compressed.

OK, will consider adding that feature.
 
But I think if the backup is compressed there is no need to handle sparse data - a long run of "0"s can be compressed very effectively ;)

That assumption seems to be totally wrong, sorry:

* gzip needs much more CPU
* gzip --rsyncable (we want to use that) produces much larger files and needs even more CPU.
 
Here is some data. A vzdump backup of vm 1002 takes about 3 minutes.

# time gzip -c /var/lib/vz/images/1002/vm-1002-disk-1.raw >t1.gz

real 6m10.962s
user 5m32.406s
sys 0m25.645s

# ls -l t1.gz
-rw-r--r-- 1 root root 374909304 2011-04-19 07:58 t1.gz

So the file size is reasonable, but the gzip backup is much slower than a vzdump backup.

And it gets worse when we use --rsyncable:

# time gzip -c /var/lib/vz/images/1002/vm-1002-disk-1.raw --rsyncable >t2.gz

real 11m45.064s
user 10m56.430s
sys 0m28.192s

# ls -l t2.gz
-rw-r--r-- 1 root root 1370555301 2011-04-19 08:12 t2.gz
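A miniature version of this comparison (sizes shrunk to a few MiB so it runs in seconds; the absolute timings above came from multi-GiB images) shows the underlying effect: gzip spends CPU on every input byte regardless of content, and an incompressible input gains nothing for that cost.

```python
import gzip
import os
import time

# Compress a zero run and the same amount of incompressible random
# data, reporting size and wall-clock time for each.

size = 4 * 1024 * 1024
for name, data in (("zeros", bytes(size)), ("random", os.urandom(size))):
    t0 = time.perf_counter()
    packed = gzip.compress(data, compresslevel=6)
    dt = time.perf_counter() - t0
    print(f"{name:6s}: {size} -> {len(packed)} bytes in {dt:.3f} s")
```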
 
