can't determine assigned storage

bhavicp

New Member
Nov 27, 2008
Hi,

I'm trying to migrate a VM from one node to another (clustered) and I get the following:

Code:
root@vm3:~# pvectl migrate 103 vm1 --online
Feb 25 16:25:53 ERROR: migration aborted (duration 00:00:00): can't determine assigned storage
migration aborted

The VPS is on local storage (/disk2/).
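For reference, the error suggests the container's private area could not be mapped back to any storage entry in /etc/pve/storage.cfg. A rough sketch of that lookup follows; this is not the actual pvectl code, and the config, CTID path, and `private/<CTID>` layout below are illustrative assumptions:

```shell
#!/bin/sh
# Illustrative storage.cfg with a dir storage covering /disk2/ (assumed content):
cat > /tmp/storage.cfg.sketch <<'EOF'
dir: local
	path /var/lib/vz
dir: sdb
	path /disk2/
EOF

# Assumed layout: the CT's private area lives at <storage path>/private/<CTID>
VE_PRIVATE=/disk2/private/103

# Find the storage whose path is a prefix of the private area
result=$(awk -v p="$VE_PRIVATE" '
    /^dir:/ { id = $2 }
    /^[ \t]*path/ {
        path = $2; sub(/\/$/, "", path)   # normalise trailing slash
        if (index(p, path "/") == 1) print id
    }' /tmp/storage.cfg.sketch)

echo "${result:-no storage matches $VE_PRIVATE -> migration aborts}"
```

If no `dir:` entry's path prefixes the private area, the lookup comes back empty, which would produce exactly this kind of abort.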

I also tried vzmigrate, but I get an error saying the .conf file already exists on the destination node, even though it doesn't.
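A quick way to double-check would be to look in both places a leftover container config could hide on the destination. The sketch below wraps the check in a function and demos it against a throwaway directory tree; the two locations are the standard Proxmox 3.x / OpenVZ paths, and the node name vm1 and CTID 103 are taken from the thread (use root=/ on the real destination node):

```shell
#!/bin/sh
# check_stale <root> <ctid>: report leftover container configs under <root>.
check_stale() {
    root=$1; ctid=$2
    for f in "$root/etc/pve/nodes/vm1/openvz/$ctid.conf" \
             "$root/etc/vz/conf/$ctid.conf"; do
        [ -e "$f" ] && echo "stale config: $f"
    done
    return 0
}

# Demo against a temporary tree instead of the live filesystem:
demo=$(mktemp -d)
mkdir -p "$demo/etc/vz/conf"
touch "$demo/etc/vz/conf/103.conf"
check_stale "$demo" 103
```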

Code:
proxmox-ve-2.6.32: 3.1-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.1-43 (running version: 3.1-43/1d4b0dfb)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-13
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1


Second node (the one I'm migrating to):
Code:
proxmox-ve-2.6.32: 3.1-121 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-43 (running version: 3.1-43/1d4b0dfb)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-13
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
 
Also, I cannot stop and start these VMs via the GUI. They are somehow picking up the wrong storage (using the entries belonging to the other node). I've verified the storage is set up properly and only allocated to the host it is on, so I'm not sure how it got mixed up. Is it possible to rewrite/reconfigure the storage a VM is using? Most other VMs on the node are fine; it is just these 3 that have this issue.
 
You need to define a storage in /etc/pve/storage.cfg which points to that directory (/disk2/...)

That storage already exists on the node (I created it using the web UI). Do you know why it is picking up the wrong disk? If I start the container manually with vzctl start, it starts fine, but via the web UI and API it complains about sdc1, which is on the other node. This VPS is on sdc.

Code:
root@vm3:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,rootdir
        maxfiles 0

dir: sdb
        path /disk2/
        content rootdir
        maxfiles 0
        nodes vm3

dir: sdc
        path /disk3/
        content rootdir
        maxfiles 0
        nodes vm3

dir: sdd
        path /disk4/
        content rootdir
        maxfiles 0
        nodes vm3

dir: xtemplates
        path /var/lib/vz/xtemplates
        shared
        content iso,vztmpl
        maxfiles 0

dir: sdb1
        path /disk2/
        content rootdir
        maxfiles 1
        nodes vm1

dir: sdc1
        path /disk3
        content rootdir
        maxfiles 1
        nodes vm1
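One thing that stands out in the config above: two storage IDs resolve to the same directory (sdb and sdb1 both point at /disk2/, sdc and sdc1 at /disk3), differing only in their `nodes` restriction. If anything resolves a container by path rather than by storage ID, it can land on either entry, which may be why the GUI complains about sdc1. A small sketch that lists such collisions (the config fragment below is a reduced copy of the posted one, written to a temp file for illustration):

```shell
#!/bin/sh
# Reduced copy of the posted storage.cfg, just IDs and paths:
cat > /tmp/storage.cfg.demo <<'EOF'
dir: sdb
        path /disk2/
dir: sdc
        path /disk3/
dir: sdb1
        path /disk2/
dir: sdc1
        path /disk3
EOF

# Group storage IDs by normalised path and print any path claimed twice
collisions=$(awk '
    /^dir:/ { id = $2 }
    /^[ \t]*path/ {
        p = $2; sub(/\/$/, "", p)        # normalise trailing slash
        if (p in seen) seen[p] = seen[p] ", " id
        else           seen[p] = id
    }
    END { for (p in seen) if (seen[p] ~ /,/) print p ": " seen[p] }
' /tmp/storage.cfg.demo | sort)

echo "$collisions"
```

If the duplicate IDs are unintentional, consolidating each directory under a single storage entry (with the appropriate `nodes` line) would remove the ambiguity.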