ZFS Pool

frankz
Nov 16, 2020
Hi everyone, I have a cluster with 3 nodes, and on the node I just added I can't see its local ZFS storage.
I was told the problem is that the pool does not have the same name as the ZFS pool used by the rest of the cluster. I'm not very experienced with Proxmox clusters; is what I was told correct? How can I make this local ZFS pool available to the whole cluster without destroying it? Thank you all.
 
I have a cluster with 3 nodes, and on the node I just added I can't see its local ZFS storage.
The storage configuration for a ZFS storage is based on the dataset name (<pool>/<path>/<to>/<dataset>). If that is different on the new node, you have two options:
  • Add another storage with the new pool and limit it to that one node (see the example after this list).
    • ZFS replication won't work as it needs the same storage/pool name
  • Rename the pool so it matches with the other nodes
    • To do so, assuming it is not the rpool, export the pool and rename it during the import:
    • Code:
      zpool export <pool>
      zpool import <pool> <new pool name>
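For reference, the first option (a storage limited to a single node) would look roughly like this in /etc/pve/storage.cfg; the storage name, pool name, and node name below are only placeholders, not values from this thread:
Code:
zfspool: node3-zfs
        pool zfs-pool
        content images,rootdir
        nodes pve3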
 
Thanks. So if I export my current pool, labeled zfs-pool, and re-import it with the same label as the local-zfs pool on node 1 of the cluster, should I then be able to see the pool on all 3 nodes? Thanks
 
So if I export my current pool, labeled zfs-pool, and re-import it with the same label as the local-zfs pool on node 1 of the cluster, should I then be able to see the pool on all 3 nodes?
Do I understand you correctly that you mean the storage named `local-zfs`, which you would like to have on the 3rd node?

Were the first 2 nodes installed with ZFS as the root FS, while the 3rd was not, and now it only shows a `local-lvm` storage?

Can you show us the situation so we have the whole picture?
Please post the contents of /etc/pve/storage.cfg and the output of zfs list on one of the first 2 nodes and on the 3rd.
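For reference, that information can be gathered with something like this (storage.cfg is cluster-wide, so once is enough; run zfs list on each node):
Code:
cat /etc/pve/storage.cfg
zfs list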
 
So: the first node of the cluster has its local ZFS pool, which I can't make visible to the other 2 nodes, maybe because a local zpool cannot be shared across the cluster. On all 3 nodes I correctly see the 2 NFS shares on the two NAS. The problem with the 3rd node is that locally I do see the ZFS pool under Disks > ZFS, but that pool is missing from the node's storage list. I hope I haven't confused you; in any case, I am attaching the 3 files.
 

So: the first node of the cluster has its local ZFS pool, which I can't make visible to the other 2 nodes, maybe because a local zpool cannot be shared across the cluster.
Ah okay. There is the misunderstanding. ZFS is a local filesystem only.

If the other nodes have a pool available with the same name you can utilize replication of VM disks between the nodes.

If certain storages are only available on specific nodes, you can tell PVE by limiting the storage to those nodes. In the storage edit window it should be the drop-down in the top right.
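To sketch the replication mentioned above: once the pool and storage names match, a job for a hypothetical VM 100 replicating to a node called pve3 could be created from the CLI roughly like this (the VMID, node name, and schedule are placeholders; the Replication panel in the GUI does the same):
Code:
pvesr create-local-job 100-0 pve3 --schedule "*/15"
pvesr status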
 
OK aaron, but if, for example, I have a volume on node 2 that the cluster sees in the datastore list, and I try to make it available to all nodes by ticking the share checkbox on the volume, the following appears:
 

Attachments

  • Schermata 2020-11-17 alle 14.48.27.png
try to make it available to all nodes
What do you mean by that? By checking the "shared" checkbox?

This does not do any magic in the background but only tells PVE that this (directory) storage is mounted outside the PVE tooling but is still a shared storage accessible by all nodes.

If it does not exist on a node, you will see the question mark. From the storage.cfg that you shared earlier I can see that volume2 is a directory-based storage which has is_mountpoint 1 configured. That is good, because with that flag PVE knows that this is not meant to be a plain directory on the local disk but that it should expect something to be mounted at that path. As long as nothing is mounted there, it will not activate that storage.
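Purely for illustration (this is not the poster's actual configuration, just the general shape of such an entry), a directory storage with that flag could look like:
Code:
dir: volume2
        path /mnt/volume2
        content images,backup
        is_mountpoint 1
        shared 1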

You probably configured the node where it works, either via /etc/fstab or a systemd unit, to mount some share on that directory. You will also have to do that on the other node(s). If you want an NFS or CIFS share to show up on all nodes without much manual intervention, you will have to configure an NFS or CIFS storage via the PVE tooling, so that PVE is responsible for mounting it.
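A PVE-managed NFS storage entry in /etc/pve/storage.cfg looks roughly like this (the server address, export path, and storage name are made-up examples):
Code:
nfs: nas-share
        server 192.168.1.50
        export /export/vmdata
        path /mnt/pve/nas-share
        content images,backup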

I hope that helps to clear up some misunderstandings :)
 
Thanks for the reply; unfortunately the translation from English to Italian is not the best. Anyway, I'll try to understand better what you mean about the folder. The volume whose image I sent you is actually a directory created from the GUI by selecting an ext3 disk and making it available to the node where it resides. My question: I have another 1 TB disk installed on node 2, which still needs to be formatted. What do you advise me to do so that it can be shared with the cluster? Thanks anyway for your help.
 
Ah okay.
So volume2 is a local disk connected to one node.
This cannot be shared, because only the node to which it is connected can access it. Disable the "shared" checkbox.
To show the storage only on the node that has the disk connected, you can edit the storage configuration and, in the top right, select that node.
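The same restriction can also be set from the shell, assuming the storage is called volume2 and the node with the disk is pve2 (both names are only examples here):
Code:
pvesm set volume2 --nodes pve2 --shared 0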

If you want a shared storage between all nodes it needs to be either a shared network storage (NFS, CIFS/Samba/SMB) or something like Ceph or GlusterFS.

The last two options are a bit more complicated and you really should read into the topic before you try to set up anything.
 
Thanks, now it's clear. I was told that if I added a node with a local ZFS pool labeled the same as the cluster's ZFS pool, it would be visible to everyone. From what I had read on the internet, in order to share a local resource it is absolutely necessary to rely on Ceph, RBD, or Gluster. So, in conclusion, it is useless for me to keep trying to share local volumes, even though they are listed in the cluster, without the daemons listed above. I currently use NFS on a NAS and everything works. I thank you for the time you have dedicated to me and above all for the clarification.
 
With ZFS set up the same on each node you still cannot share it between nodes, but you can set up replication of VMs and containers between the nodes, which can be useful for faster migration or failover.

So, in conclusion, it is useless for me to keep trying to share local volumes, even though they are listed in the cluster, without the daemons listed above.
The shared option for a storage is there to tell PVE that this directory is shared; PVE itself will not share it. It is useful if you mount a shared storage or file system that is not supported out of the box.
 
