could not activate storage 'pool-2tb', zfs error: cannot import 'pool-2tb': one or more devices is currently unavailable (500)

dobromin

Member
Nov 27, 2017
Help
I created the cluster and hooked up a ZFS disk on node 2. When viewing the disk on node 1, I get this error:

could not activate storage 'pool-2tb', zfs error: cannot import 'pool-2tb': one or more devices is currently unavailable (500)



nod1 - pve2 - 192.168.168.240
nod2 - pve1 - 192.168.168.250
# create the cluster on nod1
pvecm create myclaster
# join the cluster from nod2
pvecm add nod1
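For reference, pvecm add normally takes the IP address or resolvable hostname of a node that is already in the cluster, so "pvecm add nod1" only works if nod1 resolves on nod2. A sketch of the same join using the IP directly (assuming the addresses above):
Code:
# run on nod2, pointing at nod1's address
pvecm add 192.168.168.240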
root@pve2:/# pvecm status
Quorum information
------------------
Date: Mon Nov 27 14:09:07 2017
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000002
Ring ID: 2/556
Quorate: Yes

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000002 1 192.168.168.240 (local)
0x00000001 1 192.168.168.250

root@pve1:/# pvecm status
Quorum information
------------------
Date: Mon Nov 27 14:47:40 2017
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 2/556
Quorate: Yes

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000002 1 192.168.168.240
0x00000001 1 192.168.168.250 (local)

help
 
Is the pool pool-2tb on node pve1 or node pve2? Post the output of
Code:
zpool status
from both nodes.
 

It's on pve1:

root@pve1:/# zpool list
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool-2tb  1.81T   379G  1.44T         -     6%    20%  1.00x  ONLINE  -
rpool      928G   724G   204G         -    54%    77%  1.00x  ONLINE  -

root@pve2:/# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   444G  1.11G   443G         -     0%     0%  1.00x  ONLINE  -
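If you want to double-check from pve2, you can ask ZFS whether the pool is visible there at all (a quick sanity check; both are standard ZFS commands):
Code:
# lists pools that could be imported on this node; pool-2tb should not appear on pve2
zpool import
# on pve1, show the health of the pool itself
zpool status pool-2tb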

 
According to your zpool output, pool-2tb was created on pve1, so there is no pool-2tb on pve2.
In the Proxmox web interface, go to Datacenter - Storage, edit pool-2tb, and in the Nodes dropdown choose pve1, so the storage is only activated on that node.
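The same restriction can also be set from the shell. A minimal sketch, assuming the storage ID is pool-2tb as in your screenshot (pvesm is the Proxmox storage manager CLI):
Code:
# limit the pool-2tb storage definition to node pve1
pvesm set pool-2tb --nodes pve1
This just adds a "nodes pve1" line to the pool-2tb entry in /etc/pve/storage.cfg, so the other node stops trying to import the pool.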
 
Thank you very much! ;)