Migration of VM between nodes failed - could not activate storage 'local-zfs', zfs error: cannot import 'rpool'

ioanv
Hi

I have a cluster with 2 nodes running version 5.1-35.
I wanted to reboot both nodes, so first I migrated all VMs from the first node to the second one, and everything went fine. Then I wanted to move all VMs back to the first node, and here the problems started:

2018-01-19 11:00:13 starting migration of VM 105 to node 'prox3' (192.168.10.203)
2018-01-19 11:00:13 ERROR: Failed to sync data - could not activate storage 'local-zfs', zfs error: cannot import 'rpool': no such pool available
2018-01-19 11:00:13 aborting phase 1 - cleanup resources
2018-01-19 11:00:13 ERROR: migration aborted (duration 00:00:00): Failed to sync data - could not activate storage 'local-zfs', zfs error: cannot import 'rpool': no such pool available
TASK ERROR: migration aborted
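(For reference, the CLI equivalent of such a migration would be something like 'qm migrate 105 prox3 --online', with the VM ID and target node taken from the log above.)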

On the command line on the second node (called prox5):

root@prox5:~# zpool list
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.

root@prox5:~# /sbin/modprobe zfs

root@prox5:~# zpool list
no pools available

Any ideas?
 
Please post the VM configs and the storage config (/etc/pve/storage.cfg).
 
root@prox5:~# more /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,backup,vztmpl

zfspool: local-zfs
    pool rpool/data
    content rootdir,images
    sparse 1

nfs: ISO
    export /volume1/ISOuri
    path /mnt/pve/ISO
    server 192.168.10.29
    content iso
    maxfiles 1
    options vers=3

nfs: ragnar_nfs
    export /volume36/ragnar
    path /mnt/pve/ragnar_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: prox5bkp_freenas2
    export /mnt/firstvol/prox3bkp
    path /mnt/pve/prox5bkp_freenas2
    server 192.168.10.22
    content backup
    maxfiles 1
    options vers=3

nfs: asgard_nfs
    export /volume28/asgard
    path /mnt/pve/asgard_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: prox5bkp_freenas1
    export /mnt/firstvol/prox3bkp_freenas
    path /mnt/pve/prox5bkp_freenas1
    server 192.168.10.21
    content backup
    maxfiles 1
    options vers=3

nfs: freya_nfs
    export /volume6/freya
    path /mnt/pve/freya_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: njord_nfs
    export /volume14/njord
    path /mnt/pve/njord_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: berserker_nfs
    export /volume23/berserker
    path /mnt/pve/berserker_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: aun_nfs
    export /volume40/aun
    path /mnt/pve/aun_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: sif_nfs
    export /volume41/sif5
    path /mnt/pve/sif_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: trondheim_nfs
    export /volume22/trondheim
    path /mnt/pve/trondheim_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: munin_nfs
    export /volume25/munin
    path /mnt/pve/munin_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: mjollnir_nfs
    export /volume26/mjollnir
    path /mnt/pve/mjollnir_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: bifrost_nfs
    export /volume29/bifrost
    path /mnt/pve/bifrost_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: jotunheimr
    export /volume30/jotunheimr
    path /mnt/pve/jotunheimr
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: jotnar_nfs
    export /volume37/jotnar
    path /mnt/pve/jotnar_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: njorun_nfs
    export /volume2/njorun
    path /mnt/pve/njorun_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

nfs: ran_nfs
    export /volume5/ran
    path /mnt/pve/ran_nfs
    server 192.168.20.28
    content images
    maxfiles 1
    options vers=3

 
Also the cluster status:

root@prox5:~# pvecm status
Quorum information
------------------
Date:             Mon Jan 22 10:45:14 2018
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000002
Ring ID:          1/64
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.10.203
0x00000002          1 192.168.10.205 (local)
 
Do both nodes have a zpool 'rpool'? If not, please restrict local-zfs to the node where it applies; then this should work.
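For example, on the CLI (a sketch using the storage and node names from this thread, assuming the pool only exists on prox3):

root@prox3:~# pvesm set local-zfs --nodes prox3

Alternatively, add a 'nodes prox3' line to the zfspool section of /etc/pve/storage.cfg, or set the "Nodes" field for that storage under Datacenter -> Storage in the GUI.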
 
Do both nodes have a zpool 'rpool'? If not, please restrict local-zfs to the node where it applies; then this should work.

Hi
Thanks for your reply. How do I do that?

Here is the output on the first node:

root@prox3:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   928G  2.21G   926G         -     8%     0%  1.00x  ONLINE  -

root@prox3:~# more /etc/pve/storage.cfg
(output identical to the storage.cfg posted above for prox5)
 
Do both nodes have a zpool 'rpool'? If not, please restrict local-zfs to the node where it applies; then this should work.

Got it. I've restricted the storage to the node that needs it, and now the migration is working. I just can't figure out how this happened, because I did not enable it on purpose for both nodes.
 
Got it. I've restricted the storage to the node that needs it, and now the migration is working. I just can't figure out how this happened, because I did not enable it on purpose for both nodes.

The default is "no node restriction".
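With the restriction in place, the zfspool section of the config posted above would look roughly like this (note the added 'nodes' line):

zfspool: local-zfs
    pool rpool/data
    content rootdir,images
    sparse 1
    nodes prox3

Since /etc/pve/storage.cfg lives on the shared pmxcfs filesystem, every storage definition applies to all cluster nodes unless such a 'nodes' list restricts it.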