Zpool woes

seithan

New Member
May 17, 2021
Hello, I'm new to Proxmox and trying to set up a 3-node HA cluster (on VMware running on Win10) as part of my educational practice.

The 3 nodes have joined the same cluster. I used a secondary 5 GB virtual drive on each node, with the same pool name, to create what I thought was a shared zpool. I created a VM on the 1st node and chose to install it on the ZFS storage (not sure if that was a good idea, but I thought it might make migration easier when HA takes over). Then I shut down Node1, where the VM was running, to test HA. It didn't actually migrate, and when Node1 came back online I was shown:

[FAILED] Failed to start Import ZFS pool pool
See 'systemctl status zfs-import@pool.service' for details.
[FAILED] Failed to start Import ZFS pool zpool.
See 'systemctl status zfs-import@zpool.service' for details.

Code:
zfs-import@zpool.service - Import ZFS pool zpool
   Loaded: loaded (/lib/systemd/system/zfs-import@.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sat 2021-05-22 19:31:46 EEST; 9min ago
     Docs: man:zpool(8)
  Process: 694 ExecStart=/sbin/zpool import -N -d /dev/disk/by-id -o cachefile=none zpool (code=exited, status=1/FAILURE)
 Main PID: 694 (code=exited, status=1/FAILURE)

May 22 19:31:46 virtual1 systemd[1]: Starting Import ZFS pool zpool...
May 22 19:31:46 virtual1 zpool[694]: cannot import 'zpool': no such pool available
May 22 19:31:46 virtual1 systemd[1]: zfs-import@zpool.service: Main process exited, code=exited, sta
May 22 19:31:46 virtual1 systemd[1]: Failed to start Import ZFS pool zpool.
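The "no such pool available" error means an import unit is still enabled for a pool this node can no longer see. A possible cleanup, assuming the unit names `pool` and `zpool` from the messages above, would be:

```shell
# List pools that are actually visible for import on this node
zpool import -d /dev/disk/by-id

# If neither 'pool' nor 'zpool' shows up, disable the stale import units
# so they stop failing at every boot
systemctl disable zfs-import@pool.service
systemctl disable zfs-import@zpool.service
```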

Output from df -h:

Filesystem            Size  Used Avail Use% Mounted on
udev                  1.9G     0  1.9G   0% /dev
tmpfs                 391M  5.8M  386M   2% /run
/dev/mapper/pve-root  4.7G  2.3G  2.2G  52% /
tmpfs                 2.0G  6.1M  2.0G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/fuse              30M   28K   30M   1% /etc/pve


Output from zpool status:

no pools available



Side note: to create the ZFS pool, I created a 5 GB disk on each of the 3 nodes, but in order to be able to "Initialize Disk with GPT" from the Proxmox web UI, I first had to run fdisk /dev/sdb and press "g"; only then would it let me do said procedure.
But on Node1 I ended up with /dev/sdb, /dev/sdb1 and /dev/sdb9 (not sure how).
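(Note: sdb1/sdb9 are expected. When `zpool create` is handed a whole disk, ZFS writes its own GPT with partition 1 holding the data and a small 8 MiB reserved partition 9. A sketch, with a hypothetical pool name and device:)

```shell
# ZFS labels the whole disk itself:
#   sdb1 = ZFS data, sdb9 = 8 MiB reserved partition
zpool create zpool /dev/sdb   # 'zpool' and /dev/sdb are placeholders

# Inspect the resulting layout
lsblk -o NAME,SIZE,FSTYPE /dev/sdb
```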

If it makes even the slightest sense, any help is welcome!
 
Hi,
HA needs shared storage; the only exception is if you have active ZFS replication between the nodes (in which case data written since the last replication can be lost if a node fails).
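For example, a per-guest replication job can be created with pvesr (the VM ID 100, target node name and schedule below are placeholders):

```shell
# Replicate guest 100 to node 'virtual2' every 15 minutes
# (requires a ZFS storage with the same name on both nodes)
pvesr create-local-job 100-0 virtual2 --schedule "*/15"

# Check the state of all replication jobs
pvesr status
```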

It doesn't seem like pools with those names are available for import. What's the output of:
Code:
lsblk -o path,fstype
zpool import -d /dev/disk/by-id
?
 
Resolved, thank you!