Hi all,
I don't think it's a huge deal, but somehow I suspect that I may be missing something.
I initially configured a second Proxmox server with the same zfspool storage name as the first one, and the same pool name at the system level, even though it was a completely separate local pool.
Upon joining the cluster, the newly created zfspool storage definition was squashed, and only the one from the first server remained visible. As mentioned, both nodes had the same pool name, with the same path 'rpool/data', under the zfspool storage name 'data01'.
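In other words, since /etc/pve/storage.cfg is shared cluster-wide, after the join it ended up with a single entry roughly along these lines (content types quoted from memory):
Code:
zfspool: data01
        pool rpool/data
        content images,rootdir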
I did not like that the mapping showed only one zfspool when I actually had two different ones, so before going further, I:
- renamed the pool on the new server
- ran 'pvesm remove data01' (this removed the ZFS storage cluster-wide; I broke a sweat, but re-created it from the first node, so it's OK)
- created a new ZFS pool with a different name on the new node (the steps are sketched just below)
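In case it helps, here is roughly what those steps looked like on the command line (reconstructed from memory, so treat the exact options as approximate; pool and storage names are the ones mentioned above):
Code:
# On the new node: rename the local pool by exporting and re-importing it
zpool export rpool
zpool import rpool amethyst
# From the first node: drop the cluster-wide storage entry, then re-create it
pvesm remove data01
pvesm add zfspool data01 --pool rpool/data --content images,rootdir
# Register the renamed pool as its own storage for the new node
pvesm add zfspool zfs-amethyst --pool amethyst --content images,rootdir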
Now it looks alright, except that the 'pvesm status' command returns an error for the non-local ZFS pool on each node.
Typically:
Code:
# # From node 1
# pvesm status
zfs error: cannot open 'amethyst': no such pool
zfs error: cannot open 'amethyst': no such pool
could not activate storage 'zfs-amethyst', zfs error: cannot import 'amethyst': no such pool available
# # From node 2
# pvesm status
zfs error: cannot open 'rpool': no such pool
zfs error: cannot open 'rpool': no such pool
could not activate storage 'data01', zfs error: cannot import 'rpool': no such pool available
My question is: is that fully expected, or is there some flag / option I can set to tell pvesm on each node which zfspools are local and which are not, so as to avoid the error messages? Maybe something to do with the 'target' option?
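For reference, I imagine the 'nodes' storage option might be the way to restrict each zfspool storage to the node where its pool actually exists, but I am not sure it is the intended mechanism. A rough sketch of what I have in mind (node names 'pve1' and 'pve2' are just placeholders for my actual node names):
Code:
# Tell the cluster that 'data01' only lives on the first node
pvesm set data01 --nodes pve1
# ...and that 'zfs-amethyst' only lives on the second node
pvesm set zfs-amethyst --nodes pve2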