ZFS Cluster Sharing Issue

mark_ksu

New Member
Dec 27, 2020
Created a simple two node cluster (6.3.2) to test ZFS sharing and having problems. Below are the steps I took and the problem I'm seeing.

1. Created 2 node cluster (pve01 & pve02) - no errors. Did not create any shared storage at this point, just bare bones cluster.
2. Created zfs01 on pve01 without any issues.
3. At the datacenter level I created a new storage 'mainZFS' backed by zfs01 on pve01 and did not restrict it to any nodes.
4. Everything shows up fine on pve01. The problem I'm having is with pve02.
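For reference, the resulting cluster-wide storage definition in /etc/pve/storage.cfg looks roughly like this (a sketch from my setup; the content types and option order may differ on yours):

```
# /etc/pve/storage.cfg (sketch)
zfspool: mainZFS
        pool zfs01
        content images,rootdir
        # no "nodes" line, so every cluster member
        # tries to activate pool zfs01 locally
```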

I see mainZFS show up but it has a ? on the icon. When I click on the icon I get the error message:
"could not activate storage 'mainZFS', zfs error: cannot open 'zfs01': no such pool (500)."

I thought when using a cluster that a ZFS pool could be accessed by all nodes of the cluster as long as there were no restrictions. Any advice would be appreciated.

As info: I did the same test with a directory and basically the same error is occurring. I can see it on the other node but get an error:
"unable to activate storage 'dir' - directory is expected to be a mount point but is not mounted: '/mnt/pve/dir' (500)"

Your help would be appreciated. Thank you!

Troubleshoot data:

root@pve01:~# pvecm status
Cluster information
-------------------
Name: Signal
Config Version: 2
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Sat Dec 26 18:56:54 2020
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1.9
Quorate: Yes

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.0.0.11 (local)
0x00000002 1 10.0.0.12
 
If that is local storage, the other cluster members don't magically get access to it. It is the other way round: by defining that storage cluster-wide, every cluster member looks for that storage locally.
You can define a ZFS pool with the same name on every host; then the error should be gone. But for truly shared storage you have to use either a clustered file system (Ceph, for example) or a central storage server.
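A minimal sketch of that workaround, assuming pve02 has an unused disk at /dev/sdb (a placeholder device name; substitute a real spare disk on your node):

```shell
# On pve02: create a pool with the same name as on pve01.
# /dev/sdb is a placeholder -- use an actual unused disk.
zpool create zfs01 /dev/sdb

# Verify the pool now exists locally; the '?' on mainZFS
# should clear once the storage can be activated.
zpool list zfs01
```

Note that this gives each node its own independent pool of the same name, not shared data.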
 
That makes sense. I will change my test to use ceph and report back. Thank you for the quick response!!!
 
As already mentioned, ZFS is not a cluster file system. What you can do, if it works with your use case, is to create the same zpool on both nodes (don't enable the "add storage" checkbox when creating the pool on the second node).

Then use ZFS replication for the needed VMs to replicate the VMs' disks to the other node. You can enable HA for these VMs, but be aware that, depending on the replication interval, you will have some data loss if the node the VM is running on dies.
Therefore, if you can configure HA on the application level (databases, for example), you should do it there.
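Replication jobs can be set up in the GUI or with the pvesr tool; a sketch, assuming a VM with ID 100 (an example ID; use your own, and adjust the schedule):

```shell
# Replicate VM 100's disks to pve02 every 15 minutes
# (VM ID, target node, and schedule are examples).
pvesr create-local-job 100-0 pve02 --schedule "*/15"

# Check the state of all replication jobs
pvesr status
```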

Also be aware that on a 2-node cluster you will need a third vote for the cluster to still be quorate if one of the nodes dies (having the majority of votes). If you cannot add a third node, have a look at the QDevice mechanism, which needs a service installed on some machine outside the cluster to provide a third vote, and thus a majority if one node dies.
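The QDevice setup described in the admin guide boils down to roughly this (10.0.0.13 is an example address for the external machine; use your own):

```shell
# On the external machine that will provide the third vote:
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# Then, from one cluster node, point the cluster at the
# external machine (example IP -- replace with yours):
pvecm qdevice setup 10.0.0.13
```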

https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support
 
