Poor performance with qmrestore

kefear

New Member
Oct 8, 2010
Hi,
I'm experiencing poor performance when restoring a KVM virtual machine with qmrestore.
I have LVM volumes on top of a RAID1 array of two 15k SAS disks. To speed up the restore I experimented a little with the qmrestore script, passing different block sizes to dd.
I have the write cache enabled on my disk array and can provide any additional information.
Is there any other way to decrease the restore time? Maybe changing the PE size of the volume group?

I got the best results with bs=512k; restoring a 20 GB VM took ~10 minutes.
Code:
INFO: restore QemuServer backup 'vzdump-qemu-101-2010_10_01-15_03_38.tar' using ID 101
INFO: extracting 'qemu-server.conf' from archive
INFO: extracting 'vm-disk-virtio0.raw' from archive
INFO:   Rounding up size to full physical extent 20.01 GB
INFO:   Logical volume "vm-101-disk-1" created
INFO: new volume ID is 'vm1:vm-101-disk-1'
INFO: restore data to '/dev/vg1/vm-101-disk-1' (21479030784 bytes)
INFO: 327389+589 records in
INFO: 327389+589 records out
INFO: 21479030784 bytes (21 GB) copied, 618.504 s, 34.7 MB/s
INFO: restore QemuServer backup 'vzdump-qemu-101-2010_10_01-15_03_38.tar' successful
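For reference, the data path boils down to roughly this (a sketch only, assuming GNU tar; the script itself drives dd internally, and the names follow the log above):
Code:
# stream the raw image out of the archive and write it straight to the LV
tar -xOf vzdump-qemu-101-2010_10_01-15_03_38.tar vm-disk-virtio0.raw \
    | dd of=/dev/vg1/vm-101-disk-1 bs=512k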

pveperf
Code:
CPU BOGOMIPS:      40002.37
REGEX/SECOND:      682457
HD SIZE:           9.61 GB (/dev/mapper/222d7000155c26531-part1)
BUFFERED READS:    146.67 MB/sec
AVERAGE SEEK TIME: 3.90 ms
FSYNCS/SECOND:     544.62
DNS EXT:           48.95 ms

pveversion -v
Code:
pve-manager: 1.6-2 (pve-manager/1.6/5087)
running kernel: 2.6.32-2-pve
proxmox-ve-2.6.32: 1.6-13
pve-kernel-2.6.32-3-pve: 2.6.32-13
pve-kernel-2.6.32-2-pve: 2.6.32-8
qemu-server: 1.1-18
pve-firmware: 1.0-7
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-7
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-1
ksm-control-daemon: 1.0-4

The kernel version we use is:
2.6.32-2-pve
 
How much faster is that? Does it modify the resulting file size (when used to restore to a raw file)?

Up to 40%, depending on the snapshot size and the block size used for dd. As far as I could observe, the file size stays the same. A quick way to verify this is sketched below.
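(A sketch; the path assumes the default /var/lib/vz layout and the file name is illustrative.)
Code:
ls -l  /var/lib/vz/images/101/vm-101-disk-1.raw   # apparent file size
du -sh /var/lib/vz/images/101/vm-101-disk-1.raw   # blocks actually allocated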
 
I didn't know that qmrestore uses dd, but I totally agree with that approach. I always use dd with bs=1M for the best performance when copying drives or partitions.
 
I have managed to double the speed of qmrestore with 'oflag=direct', which makes dd write directly to disk instead of going through the page cache. Right now I am using this change in the qmrestore script:
exec 'dd', "of=/dev/null", "bs=64k", "oflag=direct";
and it gives me about 65 MB/s (writing to LVM), which is quite reasonable for a single 15k rpm SAS disk.
 
I doubt that this is the right patch (of=/dev/null is only used for the info command).

Oh, my bad! I meant:
if ($opts->{prealloc} || $format ne 'raw' || (-b $path)) {
    exec 'dd', "of=$path", "bs=64k", "oflag=direct";
    die "couldn't exec dd: $!\n";
}
 
You're right. But in my case, if I need to restore a virtual machine it means that something bad has happened, and I don't care about performance until the machine is back online.
Needless to say, one should have an emergency server where the VM can be restored as fast as possible and then migrated back to its proper place once it's safe and won't cause performance bottlenecks.
 
I have done some tests.

Setup:
- Production cluster (3 VMs running on this server, 5 on another)
- RAID 0 -> DRBD -> LVM
- KVM VM with one 16 GB drive

Here are the results:
default bs (512 bytes) -> 10.6 MB/s
bs=128K -> 19.6 MB/s
bs=256K -> 20.6 MB/s
bs=512K -> 20.2 MB/s
bs=1M -> 19.7 MB/s
bs=256K oflag=direct -> 15.3 MB/s
 
It would be interesting to see where you reach the maximum:
bs=8k
bs=16k
bs=32k
...
From what I have seen on different systems, with and without RAID, the best performance is always somewhere between 32K and 1M.
In my current configuration the maximum is at bs=256K; a sweep script is sketched below.
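Something like this makes the sweep easy to repeat (a sketch; TARGET must be a disposable file or LV, and each run writes 256 MB):
Code:
#!/bin/sh
TARGET=/tmp/ddtest.img        # hypothetical scratch target -- do not point at real data
TOTAL=$((256 * 1024 * 1024))  # bytes written per block size
for BS in 8192 16384 32768 65536 131072 262144 524288 1048576; do  # 8k .. 1M in bytes
    COUNT=$((TOTAL / BS))
    printf 'bs=%s -> ' "$BS"
    # conv=fsync flushes before dd reports its rate, so the figure is honest
    dd if=/dev/zero of="$TARGET" bs="$BS" count="$COUNT" conv=fsync 2>&1 | tail -n 1
done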
 
Just for fun :)
dd if=/dev/zero of=/dev/null on my netbook

default -> 215 MB/s
bs=2K -> 824 MB/s
bs=4K -> 2.0 GB/s
bs=8K -> 2.0 GB/s
bs=16K -> 2.5 GB/s
bs=32K -> 2.4 GB/s
bs=64K -> 2.6 GB/s
bs=128K -> 2.7 GB/s
bs=256K -> 2.7 GB/s
bs=512K -> 2.3 GB/s
bs=1M -> 1.4 GB/s
bs=4M -> 1.1 GB/s
bs=6M -> 1.1 GB/s
bs=8M -> 1.1 GB/s
bs=16M -> 1.1 GB/s
bs=32M -> 1.1 GB/s
bs=64M -> 1.1 GB/s
bs=128M -> 1.1 GB/s
bs=256M -> 1.1 GB/s
bs=512M -> 1.0 GB/s
bs=1024M -> 1.0 GB/s
bs=1600M -> 1.0 GB/s
 
I'll check it soon. I think it's better to use 'K' instead of 'k' to avoid any possible misunderstanding.
From man dd:
BLOCKS and BYTES may be followed by the following multiplicative suffixes: c =1, w =2, b =512, kB =1000, K =1024, MB =1000*1000, M =1024*1024, xM =M, GB =1000*1000*1000, G =1024*1024*1024, and so on for T, P, E, Z, Y.
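The kB vs K difference is easy to demonstrate (harmless, since it only moves a single block to /dev/null):
Code:
dd if=/dev/zero of=/dev/null bs=1K  count=1   # writes 1024 bytes
dd if=/dev/zero of=/dev/null bs=1kB count=1   # writes 1000 bytes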