VM restore slows down the host

pblou

Well-Known Member
Jan 9, 2017
Hi there,

I'm using Proxmox 5.1 with 1 TB of LVM-thin storage for VMs.

I see that the host and the active VMs slow down when I restore a VM from an external NFS server.

Is there any way to limit the restore I/O, or is there a bug or configuration error in my installation?

Thanks for the help.

PLou
 
If you have an old server like an HP Gen5, the only way is to change the storage to a directory. So delete LVM-thin, add a normal partition or LVM, mount the LVM via fstab, and add a directory storage for your VMs with QCOW2.
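A minimal sketch of those steps, assuming the default VG name pve, a new LV called vmdata, and a mount point of /mnt/vmdata (all three names are assumptions, adjust them to your setup):
Code:
# remove the thin pool and create a plain logical volume in its place
lvremove pve/data
lvcreate -n vmdata -l 100%FREE pve
mkfs.ext4 /dev/pve/vmdata
# mount it permanently and register it as a directory storage for QCOW2 images
echo '/dev/pve/vmdata /mnt/vmdata ext4 defaults 0 2' >> /etc/fstab
mkdir -p /mnt/vmdata && mount /mnt/vmdata
pvesm add dir vmdata --path /mnt/vmdata --content images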

What does "pveperf /path/to/your/storage" say?
 
Hello fireon,

thks for your reply.

The server is not old ;)
It's a Dell with 32 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (2 sockets), 128 GB of memory, and 1 TB of SSD on a PERC RAID 5.

pveperf says:
CPU BOGOMIPS: 134244.80
REGEX/SECOND: 2782770
HD SIZE: 93.99 GB (/dev/mapper/pve-root)
BUFFERED READS: 562.88 MB/sec
AVERAGE SEEK TIME: 0.24 ms
FSYNCS/SECOND: 8258.64
DNS EXT: 21.66 ms
DNS INT: 6.57 ms

When I restore a VM from NFS storage, the host is slow (the running VMs, and the same if I connect to it over SSH). It's not blocked, but it slows down.

Is the restore I/O load too high?

Plou
 
Your server stats look perfect :) very good fsync. Do you have some logs for the bad backup time?
How many SSDs? Are these original enterprise SSDs from Dell? Is the firmware up to date?
 
Hi,

It's not a backup or restore problem, fireon ;)
The problem is that when I restore a VM, the active VMs slow down.

3 x 500 GB SSDs in PERC RAID 5 (original SSDs from Dell)

Plou
 
It's not a backup or restore problem, fireon ;)
The problem is that when I restore a VM, the active VMs slow down.
Yes, I meant the bad restore time... please post the whole syslog for this process, and in another window only the errors.

Syslog:
Code:
journalctl -f

Syslog only important errors:
Code:
journalctl  -p3 -f

You can also do a reverse search:
Code:
journalctl  -p3 -r

And check the wa (I/O wait) value in the output of the command "top".
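If it helps, a rough way to watch that in more detail while a restore runs (iostat comes from the sysstat package, which may not be installed by default, that part is my assumption):
Code:
apt install sysstat
# print extended device stats every second; high await / %util on the SSD device
# while the restore runs means the storage is saturated
iostat -x 1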
 
Hello,

problem solved,
If I limit the I/O by using cstream -t 25000000 in the qmrestore command line, all active VMs work normally and not slowly!

Thanks for the help, fireon ;)

Pl
 
I have the same problem with restores. It only started since we moved from 3.4 to 4.x.

Now I did notice there is a difference with LVM: LVM-thin is used. Is there maybe something there causing it?
 
I ran the following:

cstream -t 25000000 | qmrestore /mnt/pve/nfsvzjhb4/dump/vzdump-qemu-102-2017_12_06-19_54_59.vma.lzo 105

Is that 25 MB/s?

I'm running this, but VMs still go slow for me; I notice vma extract and lzop running and it reads at 300+ MB/s.

Am I doing something wrong?
 
you are probably looking for "cstream -t 25000000 -i /mnt/pve/nfsvzjhb4/dump/vzdump-qemu-102-2017_12_06-19_54_59.vma.lzo | lzop -cd | qmrestore - 105"
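To unpack that (this is just my reading of the pipeline; the 25000000 is the bytes-per-second figure used earlier in the thread, i.e. roughly 25 MB/s):
Code:
# cstream -t 25000000 -i <file>  reads the .vma.lzo backup from the NFS mount, throttled to ~25 MB/s
# lzop -cd                       decompresses the stream to stdout
# qmrestore - 105                restores the VMA stream from stdin as VM ID 105
cstream -t 25000000 -i /mnt/pve/nfsvzjhb4/dump/vzdump-qemu-102-2017_12_06-19_54_59.vma.lzo | lzop -cd | qmrestore - 105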
 
I have the same problem with restores. It only started since we moved from 3.4 to 4.x.

Now I did notice there is a difference with LVM: LVM-thin is used. Is there maybe something there causing it?

Hello Fabian,

The problem for me is on a server with SSD disks (RAID 5); I don't have this problem with SATA or SAS disks.

I'm using Proxmox 5.1

Pl
 
Is everyone having this issue using LVM? Is anyone with ZFS having restore issues?

I'm starting to think that is the issue. I set up a ZFS server and can't replicate it there. Very weird.
 
Is everyone having this issue using LVM? Is anyone with ZFS having restore issues?

I'm starting to think that is the issue. I set up a ZFS server and can't replicate it there. Very weird.

Hello,

For me, the original problem is with ZFS, not with LVM.

Pl
 
Now I'm lost. I never had this on Proxmox 3.4. Even using cstream, it is still happening.

So the only similarity is the SSDs being used, then. Strange.
 
I was thinking it would be the SSDs or NFS. But I reinstalled the same server with Proxmox 3.4, and restoring is perfect, with no load issues.

I didn't want to stay on 3.4, but I guess I have to because of this issue.

I feel like I'm missing something but cannot see what it is.
 
I couldn't deal with being on an older version of Proxmox, so I set up a new server again but added a secondary processor.

I think the problem is solved.

24 x Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz (2 Sockets)

I was restoring small VMs, though, and everything looks better. I'm going to try larger ones, about 800 GB or so, and see if the same thing occurs this evening.
 
Nope, larger ones are still a problem. The load goes sky high. Anyway, for now, while I do more testing, I'm going to stick to LVM on Proxmox 3.4. For me it happens only on the Proxmox versions that use LVM-thin as the default. I don't use ZFS, only HW RAID controllers on our servers, and every server has the same issue except the 3.4 ones. The next thing I am going to try is not using LVM-thin on the newer Proxmox versions and seeing if that works.
 
I got the same problem with Proxmox 5.1 on an HP DL360p G8 (2x 10-core / 192 GB / HW RAID 10 of 8x 960 GB SSDs) with LVM-thin.

I did 2 restores at a time from an external NFS storage to the local SSD RAID. All other VMs slowed down, webservers were not reachable, and the GUI was also not responding.

After a while, and after killing 2 KVM VMs which were struggling at 100% CPU, I lost both of them: the logical volumes had kind of been overwritten; instead of the boot sector at the beginning of the LV there was user data like JavaScript code...

IMHO LVM-thin is not stable enough for production use as long as it is possible to kill foreign filesystems during a restore from the GUI.

Has anyone got LVM-thin up and running stably on SSD storage?
Anyone on a SAS HDD RAID?
 
