[SOLVED] local-zfs lost after joining cluster

Feb 27, 2020
Hi
I have an existing Proxmox 6.4 node with containers and VMs which I have converted to a cluster in order to add a new node. This original node uses local-lvm (thin) for the guests (so basically I have local and local-lvm storages). The new node has ZFS storage, a single raidz1, and in the web UI I see local and local-zfs storages.

If I keep both nodes independent I can create guests on the new node in local-zfs without any issue. If I add this new node to the cluster (after a full reinstall, to be completely clean and empty), the local-zfs storage is lost and I only have a "local" storage with all the space of the original local-zfs storage, and I cannot create guests on it.

Am I doing something wrong? Is there a way to have these two different types of storage on the same cluster?

Apologies if this is explained somewhere else; I have gone through the cluster requirements and the process for joining a cluster on 6.x, and there is no mention of such a requirement, so I assume there is something I am not doing.

I already have 2 clusters (production, 9 nodes, with subscription, and development, 6 nodes), but all of them use exactly the same local-lvm configuration, so this is the first time I have run into this when adding a new node to a cluster.
 
Hi,
no, this is expected. The storage configuration lives in /etc/pve/storage.cfg, and /etc/pve is where the cluster filesystem is mounted, which is shared across all nodes in the cluster. Any node joining will get the configuration from the cluster. The local /etc/pve is backed up (in database form) in /var/lib/pve-cluster/backup prior to joining.

You can simply re-add the local-zfs storage (Datacenter > Storage > Add > ZFS in the UI) and tell Proxmox VE that it's only available on a specific node by restricting the nodes for the storage. And you'll also want to restrict the nodes for local-lvm to not include the new node.
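For reference, the same change can also be made from the shell with pvesm; the storage IDs, pool name and node names below are just placeholders for a setup like yours:

Code:
# re-add the ZFS storage and restrict it to the new node (IDs/names are examples)
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --nodes newnode
# restrict the existing LVM-thin storage to the original node
pvesm set local-lvm --nodes oldnode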
 
Thanks Fiona, I have gone through it and it works perfectly. I was expecting this to happen automatically somehow, but it's quickly fixed. I'd suggest, though, adding a note or comment about this to the documentation on adding a node to a cluster.


My problem now, however, is different: it seems I cannot migrate containers from the old node, which uses local-lvm, to the new one, which uses local-zfs. I have managed to migrate VMs, but not LXC containers. Is there any way of doing this? Basically this prevents me from moving all the containers to the new node, shutting down the old one and reinstalling it with ZFS, which is my final goal.

Thanks
 
Thanks Fiona, I have gone through it and it works perfectly. I was expecting this to happen automatically somehow, but it's quickly fixed. I'd suggest, though, adding a note or comment about this to the documentation on adding a node to a cluster.
Yes, mentioning this in the documentation is a good idea.

My problem now, however, is different: it seems I cannot migrate containers from the old node, which uses local-lvm, to the new one, which uses local-zfs. I have managed to migrate VMs, but not LXC containers. Is there any way of doing this? Basically this prevents me from moving all the containers to the new node, shutting down the old one and reinstalling it with ZFS, which is my final goal.
IIRC, offline storage migration between ZFS and LVM-thin is not currently supported. As a workaround you could back up the containers on the one node and restore them on the other.
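A rough sketch of that workaround, assuming the backup archive is reachable from the target node (the CT ID, archive name, node name and storage IDs here are only examples):

Code:
# on the old node: create a backup of the container and copy it over
vzdump 101 --storage local --mode stop --compress zstd
scp /var/lib/vz/dump/vzdump-lxc-101-2020_02_27-10_00_00.tar.zst newnode:/var/lib/vz/dump/
# remove the original container so the ID is free again
pct destroy 101
# on the new node: restore the container onto the ZFS storage
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-2020_02_27-10_00_00.tar.zst --storage local-zfs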
 
I found this thread and answer helpful for clarifying why my local ZFS pool disappeared after joining a system into a cluster. The documentation says:

If the node’s storage layout differs, you will need to re-add the node’s storages, and adapt each storage’s node restriction to reflect on which nodes the storage is actually available.

But I feel like this could still be a lot clearer. It might also help to clarify that while you can have storage pools with the same name on different systems, they can't have the same label (ID) once joined to the cluster. For that reason it might be good to name storage pools uniquely (e.g. including the name of the local Proxmox server).
 
For that reason it might be good to name storage pools uniquely (e.g. including the name of the local Proxmox server).
The cluster concept itself, and the ability to migrate VMs and LXCs (also automatically via HA) from one node to another, is based on the availability of a storage with the same name on each node. (The same is true for the naming of networks: they need to have the same name on every node to be correctly recognized.)

So... no. Make sure you have storage with the same name on all machines in a cluster!

(Exceptions are okay. Some of my machines have an additional disk for special purposes, but then this is not a cluster resource.)

Best regards
 
Thanks for your comment. In my case we have local ZFS pools, and Proxmox won't let you have the same ID (what you see in the GUI) for multiple ZFS pools, although it will allow you to have the same underlying zpool name. My preference would be to have the same ID and the same zpool name for my multiple local ZFS pools, but this doesn't appear to be possible.
 
Hi,
Thanks for your comment. In my case we have local ZFS pools, and Proxmox won't let you have the same ID (what you see in the GUI) for multiple ZFS pools, although it will allow you to have the same underlying zpool name. My preference would be to have the same ID and the same zpool name for my multiple local ZFS pools, but this doesn't appear to be possible.
the storage configuration is cluster-wide. So if you define a local storage, it is expected to be present on each node with the same underlying configuration. You can restrict the nodes it applies to with the nodes option (e.g. when editing the storage in the UI). For example:
Code:
zfspool: present-on-each-node
    pool zfs
    content rootdir,images
    mountpoint /zfs

zfspool: present-on-two-nodes
    pool other
    content rootdir,images
    mountpoint /other
    nodes pve8a1,pve8a2

So you don't need to define the same storage separately for each node; defining it once is enough if the configuration matches.
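The node restriction can also be adjusted later on the command line, e.g. for the second storage from the example above:

Code:
pvesm set present-on-two-nodes --nodes pve8a1,pve8a2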
 
Thanks Fiona, that makes a lot of sense. I've reverted to having the same local storage pool configuration on each node. I think the confusion came when I built a second PVE server, created a local zpool, joined it to the cluster, and then had issues seeing that storage through the web UI. Also, there may have been some confusion about Datacenter -> Storage vs. Datacenter -> $node -> Disks -> ZFS.
 
