Error with qmrestore in a script launched by crontab

bibax

Hi,

I get an error when I try to restore a VM with a script launched by crontab.
The script works fine when I start it from the command line, but not from crontab.
When I start it manually (# /bin/tcsh /ga/scripts/restore.tcsh -vms 101 -vmd 201), I don't get any error and everything is fine!
When I set a crontab entry like this: "40 11 * * 5 /bin/tcsh /ga/scripts/restore.tcsh -vms 101 -vmd 201", it doesn't work.
I don't know why zcat fails when it is launched from crontab.
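
Since cron runs jobs with a minimal environment (different PATH, no tty), one way to see what is going on is to redirect the script's output to a log file. A minimal sketch of such a debugging crontab entry (the log path /tmp/restore.log is just an example; cron runs the line with /bin/sh, so sh redirection syntax applies):

40 11 * * 5 /bin/tcsh /ga/scripts/restore.tcsh -vms 101 -vmd 201 >> /tmp/restore.log 2>&1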

The script starts fine but stops with this error:
restore vma archive: zcat /mnt/pve/backup/dump/vzdump-qemu-101-2013_09_04-00_01_02.vma.gz|vma extract -v -r /var/tmp/vzdumptmp381449.fifo - /var/tmp/vzdumptmp381449
CFG: size: 379 name: qemu-server.conf
DEV: dev_id=1 size: 53687091200 devname: drive-virtio0
CTIME: Wed Sep 4 00:01:04 2013
TASK ERROR: command 'zcat /mnt/pve/backup/dump/vzdump-qemu-101-2013_09_04-00_01_02.vma.gz|vma extract -v -r /var/tmp/vzdumptmp381449.fifo - /var/tmp/vzdumptmp381449' failed: got timeout

This is my script:
restorevm.tcsh
#!/bin/tcsh

# Check the argument count
if ($#argv < 1) then
    /bin/echo "ERROR: WRONG NUMBER OF ARGUMENTS"
    /bin/echo "usage: $0 -vms <original vmid> -vmd <destination id>"
    exit 1
endif

# Parse arguments
@ i = 1
while ($i <= $#argv)
    switch ($argv[$i])
        case "-vms":
            @ i++
            set VMIDS = "$argv[$i]"    # source VM ID
            breaksw
        case "-vmd":
            @ i++
            set VMIDD = "$argv[$i]"    # destination VM ID
            breaksw
        default:
            breaksw
    endsw
    @ i++
end
echo $VMIDS
echo $VMIDD

# Example of the final command: qmrestore -force 1 /mnt/pve/backup/dump/vzdump-qemu-101-2013_06_25-14_49_53.vma.lzo 201
# Pick the most recent backup whose name matches the source VM ID
set BACKUP = `ls -t /mnt/pve/backup/dump/vzdump*.vma.gz | grep $VMIDS | head -1`
echo $BACKUP
/usr/sbin/qmrestore -force 1 --storage local $BACKUP $VMIDD
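
(As an aside, grep $VMIDS could also match other VM IDs that contain the same digits, e.g. -vms 10 would match 101 and 102. Assuming the standard vzdump naming scheme vzdump-qemu-<vmid>-<timestamp>.vma.gz, a tighter selection would be something like:

set BACKUP = `ls -t /mnt/pve/backup/dump/vzdump-qemu-${VMIDS}-*.vma.gz | head -1`

That is not the cause here though, since the log above shows the correct archive is picked.)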



Do you have any idea?

Thank you in advance
 
Hi,

This is the output from:
# pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1

Thanks
 
New element:

From the command line, if I run /bin/tcsh -x /ga/scripts/restore.tcsh -vms 101 -vmd 201, I have the problem:
/bin/tcsh -x /ga/scripts/restore.tcsh -vms 101 -vmd 201
if ( 4 < 1 ) then
@ i = 1
while ( 1 < = 4 )
switch ( -vms )
@ i++
set VMIDS = 101
breaksw
@ i++
end
while ( 3 < = 4 )
switch ( -vmd )
@ i++
set VMIDD = 201
breaksw
@ i++
end
while ( 5 < = 4 )
echo 101
101
echo 201
201
set BACKUP = `ls -t /mnt/pve/backup/dump/vzdump*.vma.gz |grep $VMIDS |head -1`
head -1
grep 101
ls -t /mnt/pve/backup/dump/vzdump-qemu-101-2013_08_31-15_00_02.vma.gz /mnt/pve/backup/dump/vzdump-qemu-101-2013_09_04-00_01_02.vma.gz /mnt/pve/backup/dump/vzdump-qemu-102-2013_08_31-15_35_14.vma.gz /mnt/pve/backup/dump/vzdump-qemu-102-2013_09_04-00_37_19.vma.gz /mnt/pve/backup/dump/vzdump-qemu-103-2013_08_31-16_17_10.vma.gz /mnt/pve/backup/dump/vzdump-qemu-103-2013_09_04-01_16_52.vma.gz /mnt/pve/backup/dump/vzdump-qemu-104-2013_08_31-16_34_25.vma.gz /mnt/pve/backup/dump/vzdump-qemu-104-2013_09_04-01_33_45.vma.gz
echo /mnt/pve/backup/dump/vzdump-qemu-101-2013_09_04-00_01_02.vma.gz
/mnt/pve/backup/dump/vzdump-qemu-101-2013_09_04-00_01_02.vma.gz
/usr/sbin/qmrestore -force 1 --storage local /mnt/pve/backup/dump/vzdump-qemu-101-2013_09_04-00_01_02.vma.gz 201
restore vma archive: zcat /mnt/pve/backup/dump/vzdump-qemu-101-2013_09_04-00_01_02.vma.gz|vma extract -v -r /var/tmp/vzdumptmp385334.fifo - /var/tmp/vzdumptmp385334
CFG: size: 379 name: qemu-server.conf
DEV: dev_id=1 size: 53687091200 devname: drive-virtio0
CTIME: Wed Sep 4 00:01:04 2013
command 'zcat /mnt/pve/backup/dump/vzdump-qemu-101-2013_09_04-00_01_02.vma.gz|vma extract -v -r /var/tmp/vzdumptmp385334.fifo - /var/tmp/vzdumptmp385334' failed: got timeout


but if I run /bin/tcsh /ga/scripts/restore.tcsh -vms 101 -vmd 201 (without -x), it works fine:
# /bin/tcsh /ga/scripts/restore.tcsh -vms 101 -vmd 201
101
201
/mnt/pve/backup/dump/vzdump-qemu-101-2013_09_04-00_01_02.vma.gz
restore vma archive: zcat /mnt/pve/backup/dump/vzdump-qemu-101-2013_09_04-00_01_02.vma.gz|vma extract -v -r /var/tmp/vzdumptmp383295.fifo - /var/tmp/vzdumptmp383295
CFG: size: 379 name: qemu-server.conf
DEV: dev_id=1 size: 53687091200 devname: drive-virtio0
CTIME: Wed Sep 4 00:01:04 2013
Formatting '/var/lib/vz/images/201/vm-201-disk-1.qcow2', fmt=qcow2 size=53687091200 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
new volume ID is 'local:201/vm-201-disk-1.qcow2'
map 'drive-virtio0' to '/var/lib/vz/images/201/vm-201-disk-1.qcow2' (write zeros = 0)
progress 1% (read 536870912 bytes, duration 1 sec)
progress 2% (read 1073741824 bytes, duration 1 sec).........


Why?
 
Hi,

This is the new pveversion output:
# pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.0-20
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1

I tried running the script again, but the failures seem random: the same command line over SSH fails with the error one time and runs fine the next. I don't know what could cause this issue.
Nevertheless, from crontab it doesn't work...

Do I have to test something before restoring the VM?
Do you have any idea how to solve this issue?
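
In the meantime, one thing I could test before the restore is whether the backup file is actually there and readable (in case the storage mount is not ready). A sketch in tcsh, reusing the $BACKUP variable from the script:

if (! -r "$BACKUP") then
    /bin/echo "ERROR: backup file not readable: $BACKUP"
    exit 1
endif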
 
OK, it's working now.
I set the permissions on the script to 777 and it works.
They were 755 before.
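
For the record, the change was just:

chmod 777 /ga/scripts/restore.tcsh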

Thank you!