[SOLVED] Sharing zpool between a 2 node installation?

nuvious

Nov 28, 2020
Hey,
I'm trying to share a zpool between two installations to play around with HA. I'm not new to Proxmox, but I am new to clustering and shared storage between nodes, so any help is appreciated, as are any comments on how I could do this more cleanly or smartly. The goal is to have the storage pools on my nodes (pve and pve-alt) shared with each other and to set certain VMs to HA between the two.
Right now, if I try to access storage-pool (which lives on pve) from pve-alt, I get the following error message:

could not activate storage 'storage-pool', zfs error: cannot open 'storage-pool': no such pool (500)


In the UI, I have each node's storage pool made available to the other (screenshot attached).
Below are the configurations for PVE and PVE-ALT:

pve /etc/pve/storage.cfg
dir: local
       path /var/lib/vz
       content backup,vztmpl,iso

lvmthin: local-lvm
       thinpool data
       vgname pve
       content rootdir,images

lvmthin: local-ssd
       thinpool local-ssd
       vgname local-ssd
       content images,rootdir

nfs: unraid-backup
       export /mnt/user/ProxmoxBackup
       path /mnt/pve/unraid-backup
       server 192.168.11.200
       content snippets,vztmpl,backup,rootdir,images,iso
       maxfiles 3

zfspool: storage-pool
       pool storage-pool
       content rootdir,images
       mountpoint /storage-pool
       nodes pve-alt,pve
       sparse 0

zfspool: storage-pool-alt
       pool storage-pool-alt
       content rootdir,images
       mountpoint /storage-pool-alt
       nodes pve,pve-alt
       sparse 0

pve-alt /etc/pve/storage.cfg
dir: local
       path /var/lib/vz
       content backup,vztmpl,iso

lvmthin: local-lvm
       thinpool data
       vgname pve
       content rootdir,images

lvmthin: local-ssd
       thinpool local-ssd
       vgname local-ssd
       content images,rootdir

nfs: unraid-backup
       export /mnt/user/ProxmoxBackup
       path /mnt/pve/unraid-backup
       server 192.168.11.200
       content snippets,vztmpl,backup,rootdir,images,iso
       maxfiles 3

zfspool: storage-pool
       pool storage-pool
       content rootdir,images
       mountpoint /storage-pool
       nodes pve-alt,pve
       sparse 0

zfspool: storage-pool-alt
       pool storage-pool-alt
       content rootdir,images
       mountpoint /storage-pool-alt
       nodes pve,pve-alt
       sparse 0

Thanks in advance for any advice!
 
Hey, I found the answer to my own problem by basically watching this YouTube video:

ProxMox High Availability Cluster! - YouTube

I highly recommend it as it's short, to the point, and shows the right way to start from the beginning. I still have some issues to resolve based on my initial misconfiguration, but otherwise I've got a path forward.
 
Hey, I was struggling with the same problem you had, but I have a lot of data on one of the drives, so I couldn't follow the approach you suggested. I solved it as follows:

On one of the cluster nodes that already has the ZFS pool imported, list the pools to get the exact pool name:
$ zpool list

On the node that does not have the pool yet, create a zpool with the same name:
$ zpool create rfci-cts /dev/nvme0n1
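
If you want to double-check before touching the config, you can confirm that the new pool is imported and see where it is mounted:
$ zpool list rfci-cts
$ zfs get mountpoint rfci-cts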

Then, on one of the cluster nodes, add the node you want to include to the storage definition by editing /etc/pve/storage.cfg:

Before:
Code:
zfspool: rfci-lxc-zfs
       pool rfci-cts
       content images,rootdir
       sparse
       nodes fpga

After:
Code:
zfspool: rfci-lxc-zfs
       pool rfci-cts
       content images,rootdir
       sparse
       nodes fpga,rfci-virt-master0

Caution: the node list is comma-separated, with no spaces!
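
Since /etc/pve is synced across the cluster by pmxcfs, the edited file shows up on every node automatically. Once the node is listed, the storage should become active there; a quick way to check is:
$ pvesm status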

Before editing, I would recommend making a backup of storage.cfg, just in case something gets messed up.
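For example (the destination is arbitrary; any path outside /etc/pve will do):
$ cp /etc/pve/storage.cfg /root/storage.cfg.bak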

Cheers!
 
