pvesm no such pool (proxmox cluster)

kohl42

Member
Feb 4, 2017
Hi all,

I don't think it's a huge deal, but somehow I suspect that I may be missing something.

I initially configured a second Proxmox server with the same zfspool storage name (storage ID) as the first one, although it pointed at a different local ZFS pool.

Upon joining the cluster, the newly created storage definition was squashed, and only the storage entry from the first server remained. As mentioned, both nodes had a pool named 'rpool' with the same path 'rpool/data' under the zfspool storage 'data01'.

I did not like that the mapping showed only one zfspool when I actually had two different ones. Before going further, I:
- renamed the pool on the new server
- ran 'pvesm remove data01' (this removed the ZFS storage cluster-wide; I broke a sweat and re-created it from the first node, it's OK)
- created a new storage with a different name for the pool on the new node
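For reference, a minimal sketch of those steps on the CLI (pool and storage names taken from this thread; this assumes the pool being renamed is not the node's boot pool and that no guests are using it):

```shell
# On the new node: rename the local pool by exporting it and
# re-importing it under a new name (not possible for a boot pool)
zpool export rpool
zpool import rpool amethyst

# Remove the ambiguous storage definition cluster-wide
# (this only deletes the Proxmox storage entry, not the pool data)
pvesm remove data01

# Create a new storage entry pointing at the renamed pool
pvesm add zfspool zfs-amethyst --pool amethyst
```

After this, each node has a storage entry whose name matches its own local pool, which is the state described below.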

Now it looks alright, except that the 'pvesm status' command returns an error for the non-local ZFS pool on each node.

Typically:

Code:
# # From node 1
# pvesm status
zfs error: cannot open 'amethyst': no such pool
zfs error: cannot open 'amethyst': no such pool
could not activate storage 'zfs-amethyst', zfs error: cannot import 'amethyst': no such pool available

# # From node 2
# pvesm status
zfs error: cannot open 'rpool': no such pool
zfs error: cannot open 'rpool': no such pool
could not activate storage 'data01', zfs error: cannot import 'rpool': no such pool available

My question is: is that fully expected, or is there some flag / option I can set to tell pvesm on each node which zfspools are local and which are not, to avoid the error message? Maybe something to do with the 'target' option?
 
Hi,
My question is: is that fully expected, or is there some flag / option I can set to tell pvesm on each node which zfspools are local and which are not, to avoid the error message? Maybe something to do with the 'target' option?
yes, this is expected. There can only be a single configuration for a given storage ID. If the configurations do not match, you need to create two storages. Use the nodes property to restrict each storage to the node(s) where it is actually available (editing the storage in the GUI and selecting the node(s) is easiest).
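The CLI equivalent of setting the nodes property would look roughly like this (node names 'node1' and 'node2' are placeholders for the actual node names in the cluster):

```shell
# Restrict each storage to the node where its pool actually exists
pvesm set data01 --nodes node1
pvesm set zfs-amethyst --nodes node2
```

The corresponding entries in /etc/pve/storage.cfg then each carry a 'nodes' line, and pvesm status stops trying to activate the storage on nodes where the pool does not exist.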

I'm not sure which target option you are referring to (see also man pvesm if you are unsure about some options).

Also note that you won't be able to use PVE's guest replication if you don't have the same storage (configuration) on both nodes.
 
Use the nodes property to restrict each storage to the node(s) where it is actually available (editing the storage in the GUI and selecting the node(s) is easiest).
That's exactly what I was looking for. I should use the GUI more often I guess.

Thanks for the quick response!
 
