no access to zfs-pool of other node

Feb 19, 2019
Hi All,

I'm pretty new to Proxmox and very fascinated by this software.

I recently installed a cluster with two nodes, s12 and s13.

s12 has these ZFS pools:
rpool -> OS, 2× SSD
vmpool01 -> data, 4× NVMe

s13 has these ZFS pools:
rpool -> OS, 2× SSD
vmpool02 -> data, 4× NVMe

Now in the GUI I cannot see vmpool02; vmpool01 is present.
I cannot move a VM from one node to the other.

When I activate vmpool01 for both nodes, the following message appears:

could not activate storage 'vmpool01', zfs error: cannot import 'vmpool01': no such pool available (500)

Perhaps you can help me resolve the problem.
Perhaps it's a dumb beginner's misconfiguration...

Greetings from Munich

In your case, it makes more sense to use the same name (vmpoolX) on both nodes.
If the pool has the same name on both nodes you can migrate VMs and use replication.
If not, you have to use the command line:
 qm help migrate
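For reference, an offline migration from the CLI could look like the following sketch. The VM ID (100) is hypothetical, and the `--targetstorage` option may depend on your Proxmox VE version, so check `qm help migrate` on your own installation first:

```shell
# On the source node s12 (VM ID 100 is a made-up example):
# plain migration, target storage must have the same name on both nodes
qm migrate 100 s13

# if supported by your version, map the disks to a differently named
# storage on the target node
qm migrate 100 s13 --targetstorage vmpool02
```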

Thank you for your reply!

What would have been the best practice in my case?
We want to add more nodes to the cluster, so should I name the new pool vmpoolX?
Or should I add an additional machine for only storage?
Can I rename the zfs-pool of the running node?

Sorry for the perhaps dumb beginner's questions. I have been reading the Proxmox manuals for a few months now, but I am still a greenhorn with Proxmox.

Greetings from Munich
If you have a symmetric layout like yours it is best practice to use the same name for the pool.
This makes life much easier ;-)
Different names are only recommended if you have pools with different characteristics like speed, size, or exclusive use.
You can configure this under the advanced options.
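Regarding the question about renaming the pool on the running node: the usual ZFS approach is to export the pool and import it under a new name. This is a general ZFS technique, not something specific to Proxmox; all VMs/CTs on the pool must be stopped first, and the pool names below are just this thread's examples:

```shell
# Stop all VMs/CTs that use the pool, then on the node holding vmpool02:
zpool export vmpool02
zpool import vmpool02 vmpool01

# Afterwards update the storage entry (GUI or /etc/pve/storage.cfg)
# so it points at the new pool name.
```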
I have a similar problem and I am trying to have my 2 nodes with a symmetric layout (and I have a 3rd node I wish to use as archive backup).
On node 1 I have the system (6.2-15) with local and local-lvm, which were set up by default. I added a 1 TB SSD as a ZFS pool 'ssd-storage', which now holds some VMs/CTs.
I now added a 1 TB SSD to node 2 and tried (as root, through the UI) to create a ZFS pool on this node with the same name 'ssd-storage' on the new disk, in order to set up HA/replication between the two nodes, but I get "storage ID 'ssd-storage' already defined (500)".
The datacenter storage view sees all these ZFS names, but the pool does not show up locally on the node. I have also tried setting the 'ssd-storage' storage to Nodes: All (unrestricted), and to just the node it is on, but it still fails. When it is unrestricted the storage appears in the UI, but with a question mark as unavailable.
Any suggestions? Thanks!
I found it, I needed to uncheck 'Add Storage', as Aaron had noted in another thread ... the fine print :)

Why do you think this is so?

In fact, the storage ID/name needs to be the same across all nodes in the cluster for replication to work.
If you create a new ZFS pool on a node, take a look at the "Add Storage" checkbox right below the name field. If you have already created the storage entry on another node, make sure to disable it.
Once you have created the ZFS pool on all nodes, edit the storage properties at the datacenter level and select all nodes on which the pool exists.
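The resulting entry in /etc/pve/storage.cfg would then look roughly like this sketch (storage and node names taken from this thread; the content/sparse lines are typical defaults, not something confirmed here):

```
# /etc/pve/storage.cfg -- one entry shared by both nodes
zfspool: ssd-storage
        pool ssd-storage
        content images,rootdir
        nodes node1,node2
        sparse 1
```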

Best regards,

