Painfully slow KVM VM restore

IceCub

New Member
Dec 29, 2013
Hello everyone,

I am running Proxmox 3.1 on a dual-Xeon server with 16 HDDs arranged in 8 x RAID 1 arrays using mdadm.

A few days ago I backed up a KVM VM (2 x 100 GB disks, full backup using LZO) before making some changes to it. The VM and the backup are on separate RAID arrays. The backup went smoothly and took, as usual, about 30 minutes to complete. A few hours later I had to restore the VM and noticed it was painfully slow. I shut down the other VMs, rebooted the server, and started the restore again. It took 32 hours to complete!

restore vma archive: lzop -d -c /mnt/px1md8/dump/vzdump-qemu-203-2013_12_27-09_09_30.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp3929.fifo - /var/tmp/vzdumptmp3929
CFG: size: 398 name: qemu-server.conf
DEV: dev_id=1 size: 107374182400 devname: drive-ide0
DEV: dev_id=2 size: 107374182400 devname: drive-virtio0
CTIME: Fri Dec 27 09:09:31 2013
Formatting '/mnt/px1md2/images/203/vm-203-disk-1.vmdk', fmt=vmdk size=107374182400 compat6=off
new volume ID is 'px1md2:203/vm-203-disk-1.vmdk'
map 'drive-ide0' to '/mnt/px1md2/images/203/vm-203-disk-1.vmdk' (write zeros = 0)
Formatting '/mnt/px1md2/images/203/vm-203-disk-2.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
new volume ID is 'px1md2:203/vm-203-disk-2.qcow2'
map 'drive-virtio0' to '/mnt/px1md2/images/203/vm-203-disk-2.qcow2' (write zeros = 0)
progress 1% (read 2147483648 bytes, duration 2273 sec)
progress 2% (read 4294967296 bytes, duration 4723 sec)
progress 3% (read 6442450944 bytes, duration 7163 sec)
progress 4% (read 8589934592 bytes, duration 9502 sec)
progress 5% (read 10737418240 bytes, duration 11790 sec)
progress 6% (read 12884901888 bytes, duration 14077 sec)
progress 7% (read 15032385536 bytes, duration 16354 sec)
progress 8% (read 17179869184 bytes, duration 18632 sec)
progress 9% (read 19327352832 bytes, duration 20919 sec)
progress 10% (read 21474836480 bytes, duration 23205 sec)
progress 11% (read 23622320128 bytes, duration 25503 sec)
progress 12% (read 25769803776 bytes, duration 27819 sec)
progress 13% (read 27917287424 bytes, duration 30139 sec)
progress 14% (read 30064771072 bytes, duration 32482 sec)
progress 15% (read 32212254720 bytes, duration 34821 sec)
progress 16% (read 34359738368 bytes, duration 37162 sec)
progress 17% (read 36507222016 bytes, duration 39504 sec)
progress 18% (read 38654705664 bytes, duration 41430 sec)
progress 19% (read 40802189312 bytes, duration 42882 sec)
progress 20% (read 42949672960 bytes, duration 44144 sec)
progress 21% (read 45097156608 bytes, duration 46350 sec)
progress 22% (read 47244640256 bytes, duration 48686 sec)
progress 23% (read 49392123904 bytes, duration 51043 sec)
progress 24% (read 51539607552 bytes, duration 53407 sec)
progress 25% (read 53687091200 bytes, duration 55774 sec)
progress 26% (read 55834574848 bytes, duration 58145 sec)
progress 27% (read 57982058496 bytes, duration 60520 sec)
progress 28% (read 60129542144 bytes, duration 62918 sec)
progress 29% (read 62277025792 bytes, duration 65312 sec)
progress 30% (read 64424509440 bytes, duration 67709 sec)
progress 31% (read 66571993088 bytes, duration 70103 sec)
progress 32% (read 68719476736 bytes, duration 72496 sec)
progress 33% (read 70866960384 bytes, duration 74883 sec)
progress 34% (read 73014444032 bytes, duration 77284 sec)
progress 35% (read 75161927680 bytes, duration 79701 sec)
progress 36% (read 77309411328 bytes, duration 82104 sec)
progress 37% (read 79456894976 bytes, duration 84471 sec)
progress 38% (read 81604378624 bytes, duration 86843 sec)
progress 39% (read 83751862272 bytes, duration 89231 sec)
progress 40% (read 85899345920 bytes, duration 91586 sec)
progress 41% (read 88046829568 bytes, duration 93946 sec)
progress 42% (read 90194313216 bytes, duration 96299 sec)
progress 43% (read 92341796864 bytes, duration 98661 sec)
progress 44% (read 94489280512 bytes, duration 101011 sec)
progress 45% (read 96636764160 bytes, duration 103377 sec)
progress 46% (read 98784247808 bytes, duration 105744 sec)
progress 47% (read 100931731456 bytes, duration 108128 sec)
progress 48% (read 103079215104 bytes, duration 110504 sec)
progress 49% (read 105226698752 bytes, duration 112885 sec)
progress 50% (read 107374182400 bytes, duration 115267 sec)
progress 51% (read 109521666048 bytes, duration 115269 sec)
progress 52% (read 111669149696 bytes, duration 115269 sec)
progress 53% (read 113816633344 bytes, duration 115270 sec)
progress 54% (read 115964116992 bytes, duration 115270 sec)
progress 55% (read 118111600640 bytes, duration 115270 sec)
progress 56% (read 120259084288 bytes, duration 115270 sec)
progress 57% (read 122406567936 bytes, duration 115270 sec)
progress 58% (read 124554051584 bytes, duration 115270 sec)
progress 59% (read 126701535232 bytes, duration 115270 sec)
progress 60% (read 128849018880 bytes, duration 115270 sec)
progress 61% (read 130996502528 bytes, duration 115271 sec)
progress 62% (read 133143986176 bytes, duration 115271 sec)
progress 63% (read 135291469824 bytes, duration 115271 sec)
progress 64% (read 137438953472 bytes, duration 115271 sec)
progress 65% (read 139586437120 bytes, duration 115271 sec)
progress 66% (read 141733920768 bytes, duration 115272 sec)
progress 67% (read 143881404416 bytes, duration 115272 sec)
progress 68% (read 146028888064 bytes, duration 115272 sec)
progress 69% (read 148176371712 bytes, duration 115273 sec)
progress 70% (read 150323855360 bytes, duration 115278 sec)
progress 71% (read 152471339008 bytes, duration 115283 sec)
progress 72% (read 154618822656 bytes, duration 115296 sec)
progress 73% (read 156766306304 bytes, duration 115311 sec)
progress 74% (read 158913789952 bytes, duration 115326 sec)
progress 75% (read 161061273600 bytes, duration 115339 sec)
progress 76% (read 163208757248 bytes, duration 115360 sec)
progress 77% (read 165356240896 bytes, duration 115381 sec)
progress 78% (read 167503724544 bytes, duration 115395 sec)
progress 79% (read 169651208192 bytes, duration 115412 sec)
progress 80% (read 171798691840 bytes, duration 115423 sec)
progress 81% (read 173946175488 bytes, duration 115428 sec)
progress 82% (read 176093659136 bytes, duration 115430 sec)
progress 83% (read 178241142784 bytes, duration 115434 sec)
progress 84% (read 180388626432 bytes, duration 115453 sec)
progress 85% (read 182536110080 bytes, duration 115486 sec)
progress 86% (read 184683593728 bytes, duration 115514 sec)
progress 87% (read 186831077376 bytes, duration 115543 sec)
progress 88% (read 188978561024 bytes, duration 115579 sec)
progress 89% (read 191126044672 bytes, duration 115603 sec)
progress 90% (read 193273528320 bytes, duration 115628 sec)
progress 91% (read 195421011968 bytes, duration 115662 sec)
progress 92% (read 197568495616 bytes, duration 115689 sec)
progress 93% (read 199715979264 bytes, duration 115720 sec)
progress 94% (read 201863462912 bytes, duration 115723 sec)
progress 95% (read 204010946560 bytes, duration 115723 sec)
progress 96% (read 206158430208 bytes, duration 115724 sec)
progress 97% (read 208305913856 bytes, duration 115724 sec)
progress 98% (read 210453397504 bytes, duration 115724 sec)
progress 99% (read 212600881152 bytes, duration 115724 sec)
progress 100% (read 214748364800 bytes, duration 115724 sec)
total bytes read 214748364800, sparse bytes 64854679552 (30.2%)
space reduction due to 4K zero blocks 0.152%
TASK OK

File transfer between the same two RAID arrays goes smoothly at about 90 MB/s.

What am I doing wrong?

Thank you.
 

Try dd if=/dev/zero of=testfile bs=1M count=10000. Keep in mind that software raid is not supported.
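
Since a 1 MB buffered dd mostly shows sequential throughput, a small synchronous-write variant may be closer to what a restore actually does to the target array. A rough sketch only, to be run from a directory on the array the VM images are restored to:

dd if=/dev/zero of=testfile bs=64k count=20000 oflag=dsync   # ~1.3 GB of 64 KB synchronous writes; expect a much lower figure than the buffered test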
 
dd if=/dev/zero of=testfile bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 68.2237 s, 154 MB/s

dd if=/dev/zero of=testfile bs=1M count=10000 oflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 66.8867 s, 157 MB/s

hdparm -tT /dev/md1

/dev/md1
Timing cached reads: 16028 MB in 2.00 seconds = 8021.15 MB/sec
Timing buffered disk reads: 480 MB in 3.01 seconds = 159.42 MB/sec

hdparm -tT /dev/md8

/dev/md8
Timing cached reads: 16140 MB in 2.00 seconds = 8077.22 MB/sec
Timing buffered disk reads: 444 MB in 3.01 seconds = 147.56 MB/sec

The VM is stored on md1, the backups on md8.

giner said:
Keep in mind that software raid is not supported.
I know that, but I trust mdadm over any hardware controller.

If I extract the *.lzo file and then extract the *.vma file manually, the whole process takes only minutes, as you'd expect.
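
For reference, the manual route is roughly the following (the archive name is the one from the restore log above; the target directory is only an example):

lzop -d /mnt/px1md8/dump/vzdump-qemu-203-2013_12_27-09_09_30.vma.lzo   # writes the decompressed .vma next to the .lzo
vma extract -v /mnt/px1md8/dump/vzdump-qemu-203-2013_12_27-09_09_30.vma /mnt/px1md2/vma-test   # extracts the config and disk images into the target dir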
 

Software RAID is quite okay when it is used by a single kernel (single OS or OpenVZ), but it is not so good with full virtualization because of the lack of a write cache. Maybe some tuning of md can help, but hardware RAID is always faster and safer in the case of a power failure.
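
If you want to try tuning, a few md-level things worth checking first (a sketch, not a guaranteed fix; md1 is the array from this thread, sda is just a placeholder for one of its member disks):

cat /proc/mdstat                            # make sure no array is resyncing or degraded while restoring
mdadm --detail /dev/md1 | grep -i bitmap    # an internal write-intent bitmap costs extra writes on every write
cat /sys/block/sda/queue/scheduler          # I/O scheduler of a member disk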
 
> If I extract the *.lzo file and then extract the *.vma file manually, the whole process takes only minutes, as you'd expect.
Check "ps aux" while the restore is running and try the same command manually.
 
