PVE 2-Node-Cluster: Restore extremely slow. Why?

Hi.
(Maybe it's better to start a new thread for this problem.)
I just created a cluster of two nodes ( https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster )
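For reference, that wiki page boils down to two commands; the cluster name and IP below are placeholders, not my actual values:
Code:
# on the first node: create the cluster
pvecm create mycluster
# on the second node: join it via the first node's IP
pvecm add 192.168.1.10
# verify on either node
pvecm status
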
Afterwards I wanted to restore a VM on one of the nodes. It took hours ...

Here is the log-file:
Code:
restore vma archive: lzop -d -c /var/lib/vz/dump/vzdump-qemu-108-2016_04_18-11_51_47.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp10785.fifo - /var/tmp/vzdumptmp10785
CFG: size: 519 name: qemu-server.conf
DEV: dev_id=1 size: 160055754752 devname: drive-ide0
CTIME: Mon Apr 18 11:51:50 2016
Formatting '/var/lib/vz/images/100/vm-100-disk-1.vmdk', fmt=vmdk size=160055754752 compat6=off
libust[10790/10790]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:375)
new volume ID is 'local:100/vm-100-disk-1.vmdk'
map 'drive-ide0' to '/var/lib/vz/images/100/vm-100-disk-1.vmdk' (write zeros = 0)
libust[10788/10788]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:375)

After that it gets VERY slow: roughly 4000 seconds for only 2%. That's not normal. What can I do, or how can I make it faster? I finally terminated it:
Code:

progress 1% (read 1600585728 bytes, duration 1817 sec)
progress 2% (read 3201171456 bytes, duration 3880 sec)
progress 3% (read 4801691648 bytes, duration 6001 sec)
progress 4% (read 6402277376 bytes, duration 8011 sec)
progress 5% (read 8002797568 bytes, duration 9972 sec)
progress 6% (read 9603383296 bytes, duration 10287 sec)
progress 7% (read 11203903488 bytes, duration 12327 sec)

temporary volume 'local:100/vm-100-disk-1.vmdk' sucessfuly removed
TASK ERROR: command 'lzop -d -c /var/lib/vz/dump/vzdump-qemu-108-2016_04_18-11_51_47.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp4925.fifo - /var/tmp/vzdumptmp4925' failed: interrupted by signal
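
Some quick math on the last progress line: ~11.2 GB read in 12327 seconds is well under 1 MB/s, which supports my feeling that something is wrong (a local restore should be far faster than that):
Code:
# bytes read / seconds elapsed, taken from the log above
echo $(( 11203903488 / 12327 ))   # 908891 bytes/s, i.e. ~0.9 MB/s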

Further information:
Code:
pveversion -v
proxmox-ve: 4.1-45 (running kernel: 4.4.6-1-pve)
pve-manager: 4.1-30 (running version: 4.1-30/9e199213)
pve-kernel-4.4.6-1-pve: 4.4.6-45
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-41
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-69
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-13
pve-container: 1.0-59
pve-firewall: 2.0-24
pve-ha-manager: 1.0-27
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
fence-agents-pve: 4.0.20-1
One difference: "fence-agents-pve: 4.0.20-1" shows up only on the second node, but not on the first node where I created the cluster. Is that correct?

Status is OK!
Code:
Quorum information
------------------
Date:  Mon Apr 18 15:57:05 2016
Quorum provider:  corosync_votequorum
Nodes:  2
Node ID:  0x00000002
Ring ID:  52
Quorate:  Yes

Votequorum information
----------------------
Expected votes:  2
Highest expected: 2
Total votes:  2
Quorum:  2
Flags:  Quorate

Any hints? Thanks.
 
Hi,

I think your problem comes from your format.
vmdk is only available for compatibility reasons and is not meant for productive use.
You can try to extract your backup file directly with vma on the console.
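For example, something like this, as a minimal sketch (it reuses the archive path from your log; the target directory is just an example):
Code:
# decompress the backup and let vma write the disk images to a directory
lzop -d -c /var/lib/vz/dump/vzdump-qemu-108-2016_04_18-11_51_47.vma.lzo | vma extract -v - /var/tmp/extract-test

vma writes the disks as raw images there, so this also shows whether decompression/extraction itself is slow, independent of the vmdk target format.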
 
