[solved] openvz livemigration fails on pve2.2 (second local storage)

udo

Hi,
just updated a 3-node cluster and now I can't live-migrate a VZ container back.

Error message:
Code:
Dec 28 16:23:43 ERROR: migration aborted (duration 00:00:01): can't determine assigned storage
TASK ERROR: migration aborted
Version:
Code:
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-71
pve-firmware: 1.0-21
libpve-common-perl: 1.0-40
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1
The storage on which the VZ resides:
Code:
dir: local_pve
        path /mnt/local_pve
        content images,vztmpl,rootdir
        maxfiles 1
        nodes proxmox3,proxmox1,proxmox2
VZ-Config:
Code:
ONBOOT="no"

PHYSPAGES="0:768M"
SWAPPAGES="0:512M"
KMEMSIZE="349M:384M"
DCACHESIZE="174M:192M"
LOCKEDPAGES="384M"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
OOMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"

# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="6G:6920601"
DISKINODES="1200000:1320000"
QUOTATIME="0"
QUOTAUGIDLIMIT="0"

# CPU fair scheduler parameter
CPUUNITS="1000"
CPUS="1"
HOSTNAME="ftp-proxy.domain.com"
SEARCHDOMAIN="domain.com"
NAMESERVER="10.10.3.30"
NETIF="ifname=eth0,mac=DA:E2:E0:42:8A:C1,host_ifname=veth202.0,host_mac=96:2C:7F:3B:F9:FC,bridge=vmbr23"
VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/mnt/local_pve/private/202"
OSTEMPLATE="debian-6.0-standard_6.0-4_i386.tar.gz"
Any hints?

Udo
Re: openvz livemigration fails on pve2.2 (second local storage)

You are the second one reporting this behavior - could you please file a bug at bugzilla.proxmox.com?
Re: openvz livemigration fails on pve2.2 (second local storage)

Hi,
I will do that. BTW, the same happens if I try an offline migration.

Udo
Hi,
I cancelled the bug report in the bugtracker because I found the issue (self-made, of course).
I have a 3-node cluster where node1 and node2 have an extra filesystem, local_pve. On node3 this filesystem doesn't exist, so I created a softlink pointing to the directory /var/lib/vz/local_pve and defined the storage for node3 as well. After that I updated node3 to pve2.2.
I could successfully migrate a CT to node3. Then I updated node1+2 and wanted to migrate the CTs back, which doesn't work (the issue described above).
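
For clarity, node3 roughly looked like this (a sketch from memory; the paths follow from the storage config above). My guess is that the migration code resolves the container's VE_PRIVATE path and compares it against the configured storage paths; the softlink resolves to a path that matches no defined storage, hence "can't determine assigned storage".
Code:
# on node3 (sketch, paths assumed): the extra filesystem did not exist,
# so the storage path was faked with a softlink into the local storage
mkdir -p /var/lib/vz/local_pve
ln -s /var/lib/vz/local_pve /mnt/local_pve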

I had some free space on the RAID set, so I created a new volume on node3, transferred the data below the softlink to this volume, and mounted it at the right position.
After that, migrating back works without problems!
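
Roughly the steps, sketched with an LVM volume (volume group name, size and filesystem type here are just examples, not my exact values):
Code:
# on node3: create a real volume for the storage
lvcreate -L 50G -n local_pve pve
mkfs.ext3 /dev/pve/local_pve

# copy the data that lived behind the softlink onto the new volume
mkdir /mnt/local_pve.new
mount /dev/pve/local_pve /mnt/local_pve.new
rsync -a /var/lib/vz/local_pve/ /mnt/local_pve.new/
umount /mnt/local_pve.new

# replace the softlink with a real mount at the storage path
rm /mnt/local_pve            # removes only the link, not the data
mkdir /mnt/local_pve
mount /dev/pve/local_pve /mnt/local_pve
# plus an /etc/fstab entry so the mount survives reboots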

But if I can migrate to a node, I should also be able to migrate back, or not?

Udo
Re: openvz livemigration fails on pve2.2 (second local storage)

Sorry, I don't really understand the issue?
Hi Dietmar,
I mean: with a local storage that is a softlink to a directory on another local storage on node 3, I'm able to migrate a CT from node 2 to node 3, but if I try to migrate back, it's not possible.

Of course, if I use a real filesystem of its own, everything works fine.
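
A quick way to see how the storage path is backed on a node (standard tools, nothing Proxmox-specific):
Code:
ls -ld /mnt/local_pve        # prints "-> target" if the path is a softlink
mountpoint /mnt/local_pve    # reports "is a mountpoint" only for a real mount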

Udo