[SOLVED] Restore VM (from qcow2 on PVE3 to upgraded server PVE4 with LVM Thin) very slow

fireon

Hello,

I'm working on an upgrade from PVE3 to PVE4. The upgrade itself is done, so now I'm re-importing the VMs. The source format is qcow2 with LZO compression, but the import is extremely slow (1.7 MB/s). The load is at 50, CPU at about 2%.
This is the first time we have migrated from PVE3 to 4 with LVM Thin. A normal copy runs at 50-70 MB/s. The machine is an HP ML350 G5 with 6x SATA in RAID 10.

Can I do something to speed this up?

Thanks!
 
Hard to say without detailed information about your disks and storage setup. Maybe something is wrong with the RAID?

What is the output of:

# pveperf

And how do you import the data exactly?
 
I import the data via the web interface to local LVM Thin. When I import the same vzdump to local (directory storage), it is fast: 115 MB/s.
When I back up the first imported VM from LVM Thin, it copies over NFS to the backup server at about 70 MB/s. But when I restore this backup back to local LVM Thin under another ID for testing, it is again really slow, about 1.5 MB/s with I/O wait at 60%. When I stop the import, the vma process keeps running and the only way to get rid of it is to reboot the server.
When I stop an import to local (directory storage), it stops normally.

Local and local LVM Thin are on the same RAID and in the same volume group. I created the LVM Thin pool with this command:
Code:
lvcreate -L 1200G --thinpool data pve
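For completeness: the thin pool also has to be referenced by a storage definition. On an installation like this, the matching entry in /etc/pve/storage.cfg typically looks roughly like the one below (the storage ID local-lvm is the installer default and an assumption here):
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images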
GRUB needs a special option for the old G5 server, because without it the machine cannot boot. That is also why we can't reinstall with 4.3: the installer can't find the RAID. I haven't tested this with 4.3, but it has been like that since 4.0, 4.1 and 4.2.
Code:
GRUB_CMDLINE_LINUX_DEFAULT="hpsa.hpsa_simple_mode=1 hpsa.hpsa_allow_any=1 quiet"
Maybe the new 4.3 works without these options, but I can't test it because the old iLO2 is unusable for editing files (no arrow keys, no special keys), and the server is 1.5 hours away from the office.
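For anyone hitting the same controller quirk: the usual way to apply such an option on Debian/PVE is to edit the GRUB defaults and regenerate the config (a sketch; run as root):
Code:
nano /etc/default/grub   # add the hpsa.* options to GRUB_CMDLINE_LINUX_DEFAULT
update-grub              # regenerates /boot/grub/grub.cfg with the new kernel command line

The current LVM layout on the server: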
Code:
# pvs
  PV                VG  Fmt  Attr PSize PFree
  /dev/cciss/c0d0p2 pve lvm2 a--  1.36t 92.54g

# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  pve   1   5   0 wz--n- 1.36t 92.54g

# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz--   1.17t             7.28   3.94
  root          pve -wi-ao----  96.00g
  swap          pve -wi-ao----   8.00g
  vm-104-disk-1 pve Vwi-a-tz--  90.00g data        89.81
  vm-400-disk-1 pve Vwi-aotz-- 200.00g data         0.86
pveperf:
Code:
CPU BOGOMIPS:  18668.36 
REGEX/SECOND:  930696 
HD SIZE:  94.37 GB (/dev/dm-0) 
BUFFERED READS:  168.93 MB/sec 
AVERAGE SEEK TIME: 8.34 ms 
FSYNCS/SECOND:  48.76 
DNS EXT:  49.47 ms
 
Tomorrow I will be on site. I'll try to install the server from a PVE 4.3 USB stick. Maybe that works. The server has been running since... I think PVE 1.x :)
 
I've reinstalled the server from scratch with PVE 4.3. The installation was successful :) But I have the same problem when restoring a VM ;( What else can I do? Is it possible that LVM Thin is not compatible with the RAID controller of the old HP Gen5 server?
 
Hello,

I'm working on an upgrade from PVE3 to PVE4. The upgrade itself is done, so now I'm re-importing the VMs. The source format is qcow2 with LZO compression, but the import is extremely slow (1.7 MB/s). The load is at 50, CPU at about 2%.
This is the first time we have migrated from PVE3 to 4 with LVM Thin. A normal copy runs at 50-70 MB/s. The machine is an HP ML350 G5 with 6x SATA in RAID 10.

Can I do something to speed this up?

Thanks!

I too faced a lot of issues which seemed to be due to LVM Thin:
1. Slow (really slow) restores
2. Starting the VM's console would time out (I/O wait was 8-12%, while usually it is under 1%)

I quickly gave up on this and will be deleting the LVM Thin pool and going back to "classic" LVM.
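For reference, a rough sketch of what dropping the thin pool and going back to classic LVM could look like (destructive, assumes no guest disks are left on the pool; the storage IDs local-lvm and vmstore are assumptions):
Code:
pvesm remove local-lvm                                # remove the LVM-thin storage definition
lvremove pve/data                                     # destroy the thin pool itself (all data on it is lost!)
pvesm add lvm vmstore --vgname pve --content images   # reuse the free space as classic LVM storage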

I know LVM Thin is the future (so to speak), but I think I'll revisit LVM Thin on Proxmox later! ;)

Cheers,
Shantanu
 
FSYNCS/SECOND: 48.76
HP ML350 G5 with 6xSATA in Raid10.

It looks like your RAID cache is not working correctly; with a working RAID cache you should see a few thousand fsyncs/second.

On a restore, LVM Thin can only reach about half of the native write speed of the RAID, because it has to allocate every block, write zeros to it, and then write the data that is not already zero.
It may sync after each zero block (4 MB), so more fsyncs/s should help a lot here.
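To cross-check the fsync/sync rate independently of pveperf, a quick test is to write with dd in synchronous mode (a sketch; the test file path is only an example):
Code:
# every 4 MB block is flushed to disk before the next one is written
dd if=/dev/zero of=/var/lib/vz/dsync-test bs=4M count=100 oflag=dsync
rm /var/lib/vz/dsync-test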
 
I changed the cache module from 64 MB to 128 MB, but the battery is dead. I rebuilt the RAID 10 and ran a test import. The import is much better now: 15 MB/s.
But the same thing happens when I stop the import process: vma hangs and I/O wait is high.
Code:
root@srv-virtu01:~# ps ax | grep vma 
1818 ?  D  0:20 [vma]
Code:
Tasks: 262 total,  2 running, 260 sleeping,  0 stopped,  0 zombie 
%Cpu(s):  0.3 us,  0.9 sy,  0.0 ni, 50.5 id, 47.7 wa,  0.0 hi,  0.5 si,  0.0 st 
KiB Mem:  25719804 total,  6450716 used, 19269088 free,  4851892 buffers 
KiB Swap:  4194300 total,  0 used,  4194300 free.  491412 cached Mem
The only way is to reboot the server.
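A process in state D (uninterruptible sleep, as shown by ps above) cannot be killed; to at least see where it is stuck, one can dump its kernel stack or have the kernel log all blocked tasks (as root; the PID is the vma process from above):
Code:
cat /proc/1818/stack           # kernel stack of the hanging vma process
echo w > /proc/sysrq-trigger   # ask the kernel to log all tasks in D state
dmesg | tail -50               # read the resulting traces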

So am I right in saying that LVM Thin needs a working cache to perform well? Our new servers have about 2 GB of cache; on those we imported VMs and it worked fine at 115 MB/s. New pveperf test:
Code:
CPU BOGOMIPS:  18668.00 
REGEX/SECOND:  911086 
HD SIZE:  49.09 GB (/dev/dm-0) 
BUFFERED READS:  185.96 MB/sec 
AVERAGE SEEK TIME: 7.57 ms 
FSYNCS/SECOND:  569.27 
DNS EXT:  45.56 ms
 
So, battery changed, read and write cache enabled, and pveperf now looks like this:
Code:
FSYNCS/SECOND:  1319.39
The import works better now: up to 9% it runs at 115 MB/s, after that between 15 and 30 MB/s, with I/O wait at about 50-70%.
I don't think it will get any better.
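If the HP hpacucli tool is installed (an assumption; it is the usual CLI for Smart Array controllers of this generation), the cache and battery state can be verified with something like:
Code:
hpacucli ctrl all show status          # controller, cache and battery status
hpacucli ctrl all show config detail   # per-array details including the cache ratio

The full restore log for reference: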

Code:
restore vma archive: lzop -d -c /mnt/pve/sicherung/dump/vzdump-qemu-106-2016_lampi_15-11_08_42.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp32638.fifo - /var/tmp/vzdumptmp32638
CFG: size: 511 name: qemu-server.conf
DEV: dev_id=1 size: 75161927680 devname: drive-virtio0
CTIME: Tue Nov 15 11:08:43 2016
Logical volume "vm-106-disk-1" created.
new volume ID is 'local-lvm:vm-106-disk-1'
map 'drive-virtio0' to '/dev/pve/vm-106-disk-1' (write zeros = 0)
progress 1% (read 751632384 bytes, duration 3 sec)
progress 2% (read 1503264768 bytes, duration 7 sec)
progress 3% (read 2254897152 bytes, duration 11 sec)
progress 4% (read 3006529536 bytes, duration 15 sec)
progress 5% (read 3758096384 bytes, duration 18 sec)
progress 6% (read 4509728768 bytes, duration 22 sec)
progress 7% (read 5261361152 bytes, duration 51 sec)
progress 8% (read 6012993536 bytes, duration 75 sec)
progress 9% (read 6764625920 bytes, duration 105 sec)
progress 10% (read 7516192768 bytes, duration 156 sec)
progress 11% (read 8267825152 bytes, duration 175 sec)
progress 12% (read 9019457536 bytes, duration 198 sec)
progress 13% (read 9771089920 bytes, duration 221 sec)
progress 14% (read 10522722304 bytes, duration 243 sec)
progress 15% (read 11274289152 bytes, duration 306 sec)
progress 16% (read 12025921536 bytes, duration 317 sec)
progress 17% (read 12777553920 bytes, duration 334 sec)
progress 18% (read 13529186304 bytes, duration 357 sec)
progress 19% (read 14280818688 bytes, duration 380 sec)
progress 20% (read 15032385536 bytes, duration 402 sec)
progress 21% (read 15784017920 bytes, duration 459 sec)
progress 22% (read 16535650304 bytes, duration 495 sec)
progress 23% (read 17287282688 bytes, duration 515 sec)
progress 24% (read 18038915072 bytes, duration 538 sec)
progress 25% (read 18790481920 bytes, duration 564 sec)
progress 26% (read 19542114304 bytes, duration 613 sec)
progress 27% (read 20293746688 bytes, duration 640 sec)
progress 28% (read 21045379072 bytes, duration 662 sec)
progress 29% (read 21797011456 bytes, duration 684 sec)
progress 30% (read 22548578304 bytes, duration 707 sec)
progress 31% (read 23300210688 bytes, duration 784 sec)
progress 32% (read 24051843072 bytes, duration 806 sec)
progress 33% (read 24803475456 bytes, duration 827 sec)
progress 34% (read 25555107840 bytes, duration 850 sec)
progress 35% (read 26306674688 bytes, duration 873 sec)
progress 36% (read 27058307072 bytes, duration 945 sec)
progress 37% (read 27809939456 bytes, duration 974 sec)
progress 38% (read 28561571840 bytes, duration 994 sec)
progress 39% (read 29313204224 bytes, duration 1017 sec)
progress 40% (read 30064771072 bytes, duration 1040 sec)
progress 41% (read 30816403456 bytes, duration 1107 sec)
progress 42% (read 31568035840 bytes, duration 1132 sec)
progress 43% (read 32319668224 bytes, duration 1154 sec)
progress 44% (read 33071300608 bytes, duration 1177 sec)
progress 45% (read 33822867456 bytes, duration 1199 sec)
progress 46% (read 34574499840 bytes, duration 1222 sec)
progress 47% (read 35326132224 bytes, duration 1301 sec)
progress 48% (read 36077764608 bytes, duration 1320 sec)
progress 49% (read 36829396992 bytes, duration 1343 sec)
progress 50% (read 37580963840 bytes, duration 1365 sec)
progress 51% (read 38332596224 bytes, duration 1427 sec)
progress 52% (read 39084228608 bytes, duration 1448 sec)
progress 53% (read 39835860992 bytes, duration 1484 sec)
progress 54% (read 40587493376 bytes, duration 1504 sec)
progress 55% (read 41339060224 bytes, duration 1527 sec)
progress 56% (read 42090692608 bytes, duration 1595 sec)
progress 57% (read 42842324992 bytes, duration 1615 sec)
progress 58% (read 43593957376 bytes, duration 1651 sec)
progress 59% (read 44345589760 bytes, duration 1671 sec)
progress 60% (read 45097156608 bytes, duration 1693 sec)
progress 61% (read 45848788992 bytes, duration 1716 sec)
progress 62% (read 46600421376 bytes, duration 1783 sec)
progress 63% (read 47352053760 bytes, duration 1816 sec)
progress 64% (read 48103686144 bytes, duration 1836 sec)
progress 65% (read 48855252992 bytes, duration 1858 sec)
progress 66% (read 49606885376 bytes, duration 1880 sec)
progress 67% (read 50358517760 bytes, duration 1940 sec)
progress 68% (read 51110150144 bytes, duration 1966 sec)
progress 69% (read 51861782528 bytes, duration 1986 sec)
progress 70% (read 52613349376 bytes, duration 2006 sec)
progress 71% (read 53364981760 bytes, duration 2028 sec)
progress 72% (read 54116614144 bytes, duration 2051 sec)
progress 73% (read 54868246528 bytes, duration 2129 sec)
progress 74% (read 55619878912 bytes, duration 2151 sec)
progress 75% (read 56371445760 bytes, duration 2175 sec)
progress 76% (read 57123078144 bytes, duration 2198 sec)
progress 77% (read 57874710528 bytes, duration 2257 sec)
progress 78% (read 58626342912 bytes, duration 2280 sec)
progress 79% (read 59377975296 bytes, duration 2317 sec)
progress 80% (read 60129542144 bytes, duration 2337 sec)
progress 81% (read 60881174528 bytes, duration 2360 sec)
progress 82% (read 61632806912 bytes, duration 2382 sec)
progress 83% (read 62384439296 bytes, duration 2446 sec)
progress 84% (read 63136071680 bytes, duration 2482 sec)
progress 85% (read 63887638528 bytes, duration 2502 sec)
progress 86% (read 64639270912 bytes, duration 2525 sec)
progress 87% (read 65390903296 bytes, duration 2548 sec)
progress 88% (read 66142535680 bytes, duration 2611 sec)
progress 89% (read 66894168064 bytes, duration 2632 sec)
progress 90% (read 67645734912 bytes, duration 2666 sec)
progress 91% (read 68397367296 bytes, duration 2687 sec)
progress 92% (read 69148999680 bytes, duration 2710 sec)
progress 93% (read 69900632064 bytes, duration 2732 sec)
progress 94% (read 70652264448 bytes, duration 2801 sec)
progress 95% (read 71403831296 bytes, duration 2834 sec)
progress 96% (read 72155463680 bytes, duration 2855 sec)
progress 97% (read 72907096064 bytes, duration 2877 sec)
progress 98% (read 73658728448 bytes, duration 2899 sec)
progress 99% (read 74410360832 bytes, duration 2970 sec)
progress 100% (read 75161927680 bytes, duration 3001 sec)
total bytes read 75161927680, sparse bytes 1263796224 (1.68%)
space reduction due to 4K zero blocks 0.38%
TASK OK
So when I copy a big file into this VM over NFS, I only get about 35-40 MB/s, but on the local PVE root I get 115 MB/s, and I think it would be the same with qcow2, but I haven't tested that.
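To compare the raw sequential write speed on the host against the guest on the thin pool, a simple test (paths are just examples; conv=fdatasync includes the final flush in the timing) would be:
Code:
# on the PVE host (ext4 on the LVM root volume):
dd if=/dev/zero of=/var/lib/vz/ddtest bs=1M count=2048 conv=fdatasync
# inside the VM (virtual disk on the thin pool):
dd if=/dev/zero of=/root/ddtest bs=1M count=2048 conv=fdatasync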
 

Attachments

  • Screenshot_20161118_223840.png
    Screenshot_20161118_223840.png
    110.9 KB · Views: 6
Last edited:
And now the final tests. I tested the same vzdump with three different storage types on the same hardware under the same conditions.
Code:
LVM restore:      12 minutes
LVM backup:       25 minutes

qcow2 restore:    12 minutes
qcow2 backup:     25 minutes

LVM Thin restore: 52 minutes
LVM Thin backup:  25 minutes
LVM Thin restore is about 4.3x slower than the others.
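Such a comparison can also be reproduced from the shell by restoring the same archive to different target storages and timing it (a sketch; the VMIDs 200/201 and the storage IDs are assumptions):
Code:
time qmrestore /mnt/pve/sicherung/dump/vzdump-qemu-106-2016_lampi_15-11_08_42.vma.lzo 200 --storage local-lvm
time qmrestore /mnt/pve/sicherung/dump/vzdump-qemu-106-2016_lampi_15-11_08_42.vma.lzo 201 --storage local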
 
So, battery changed, read and write cache enabled, and pveperf now looks like this:
Code:
FSYNCS/SECOND: 1319.39
The import works better now: up to 9% it runs at 115 MB/s, after that between 15 and 30 MB/s, with I/O wait at about 50-70%.
I don't think it will get any better.

This looks a lot better now!

LVM Thin restore is about 4.3x slower than the others.

It has to be noted that this depends on the capabilities of the storage. As said, when writing out a whole backup LVM Thin has to continuously allocate blocks, zero them and sync them, so more fsyncs/s help a lot here.
Once the storage is fully allocated, this drawback is almost gone.

In your situation plain LVM is better suited, as long as you do not use snapshots heavily or need over-provisioning.
qcow2 is also always an option.
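If existing guests should be moved off the thin pool afterwards, the disk can be migrated with qm move_disk in PVE 4.x (a sketch; target storage, format and the --delete flag are choices/assumptions, shown for the VM from the log above):
Code:
qm move_disk 106 virtio0 local --format qcow2 --delete   # move virtio0 to the directory storage as qcow2 and drop the source volume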
 
Yes, thank you, it's not that easy ;) I have set up this one server with normal LVM as storage, because it is part of a cluster with LVM Thin; this way migration between the hosts works better.

Conclusion: checking old servers with pveperf beforehand is recommended.
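pveperf can also be pointed at a specific path so it tests the storage that will actually hold the VM data, e.g.:
Code:
pveperf /var/lib/vz   # test the mount point backing the "local" storage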
 
