Shared storage ZFS - Cluster + Bay SAS HBA card

misterju

Hello
I am migrating a VMware cluster of 3 hosts to a Proxmox cluster, with a storage bay connected to the hosts through 2 SAS HBA cards per host.
I want shared storage so that I can migrate VMs.
Currently 2 hosts are joined to the Proxmox cluster and connected to the bay.
I've enabled multipath on both servers:
Bash:
apt install multipath-tools
modprobe dm_multipath
multipath -a 36000d31004000e000000000000000009
multipath -r
/etc/multipath.conf on both servers:
Bash:
defaults {
    polling_interval        2
    path_selector           "round-robin 0"
    path_grouping_policy    multibus
    uid_attribute           ID_SERIAL
    rr_min_io               100
    failback                immediate
    no_path_retry           queue
    user_friendly_names yes
}

blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "36000d31004000e000000000000000009"
}
and /etc/multipath/wwids on both servers:
Bash:
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/36000d31004000e000000000000000009/
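For completeness, the configuration can be reloaded on each host with something like the following (a minimal sketch, assuming the standard multipathd systemd unit on Proxmox/Debian):
Bash:
# re-read /etc/multipath.conf and rebuild the maps
systemctl reload multipathd
multipath -r
# confirm the multipath device and both paths are up
multipath -ll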
Result on both servers:
pve1
Bash:
root@pve1:~# multipath -ll
mpatha (36000d31004000e000000000000000009) dm-2 COMPELNT,Compellent Vol
size=5.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 14:0:1:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 14:0:0:1 sda 8:0  active ready running
pve3
Bash:
root@pve3:~# multipath -ll
mpatha (36000d31004000e000000000000000009) dm-5 COMPELNT,Compellent Vol
size=5.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 1:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 1:0:1:1 sdc 8:32 active ready running
I created a ZFS pool on the 2 hosts and mounted ZFS storage on the cluster.
Bash:
zpool create -f -o ashift=12 pool-zfs /dev/mapper/mpatha
zpool command results
pve1
Bash:
root@pve1:~# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-zfs  4.98T   536K  4.98T        -         -     0%     0%  1.00x    ONLINE  -
root@pve1:~# zpool status
  pool: pool-zfs
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        pool-zfs    ONLINE       0     0     0
          mpatha    ONLINE       0     0     0

errors: No known data errors
pve3
Bash:
root@pve3:~# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool-zfs  4.98T   504K  4.98T        -         -     0%     0%  1.00x    ONLINE  -
root@pve3:~# zpool status
  pool: pool-zfs
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        pool-zfs    ONLINE       0     0     0
          mpatha    ONLINE       0     0     0

errors: No known data errors

I created ZFS storage on the cluster:

[Screenshot: zfscluster.png]

The storage appears on both hosts, but only pve1 contains the VM disks.

[Screenshots: pve1.png, pve3.png]

Did I miss a step so that all the hosts could see the same storage?

I would be very grateful if you could provide me with your expertise or experience on this problem.

Have a nice day.
 
Also, when I restart the hosts, the ZFS pool disappears, so I have to force the import:

Bash:
root@pve1:~# pvesm status
zfs error: cannot open 'pool-zfs': no such pool

cannot import 'pool-zfs': pool was previously in use from another system.
Last accessed by pve3 (hostid=3f940953) at Fri Jan 17 11:26:26 2025
zfs error: cannot open 'pool-zfs': no such pool

could not activate storage 'VMs-BAIE', zfs error: The pool can be imported, use 'zpool import -f' to import the pool.

Name            Type     Status           Total            Used       Available        %
VMs-BAIE     zfspool   inactive               0               0               0    0.00%
local            dir     active        10218772         3338544         6339556   32.67%

How can I keep the ZFS pool imported permanently?
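For what it's worth, the usual way I found to keep an imported pool across reboots is the ZFS cache file plus the import service (a sketch only, assuming the standard zfs-import-cache unit shipped with ZFS on Proxmox/Debian):

Bash:
# sketch: record the pool in the cache file so the import service re-imports it at boot
zpool import -f pool-zfs
zpool set cachefile=/etc/zfs/zpool.cache pool-zfs
systemctl enable zfs-import-cache.service
# note: the "Last accessed by pve3 (hostid=...)" message above means the other node
# is importing the same pool, which is the real source of the problem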

Thank you very much.
 
Hi @misterju ,
ZFS is _not_ a cluster-suitable file system. You should not be trying to use it in the above application.

Please see the article here, which may help: https://forum.proxmox.com/threads/understanding-lvm-shared-storage-in-proxmox.160693
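For illustration, a shared (thick) LVM setup on top of the existing multipath device could look roughly like the sketch below; the VG name is made up, and it assumes the test ZFS pool and the old storage definition are removed first, since they sit on the same LUN:

Bash:
# sketch only -- run the destructive steps once, on one node (the LUN is shared)
zpool destroy pool-zfs                    # remove the test ZFS pool
pvesm remove VMs-BAIE                     # drop the old zfspool storage definition
pvcreate /dev/mapper/mpatha
vgcreate vg_shared /dev/mapper/mpatha     # hypothetical VG name
# register it cluster-wide as shared LVM storage
pvesm add lvm VMs-BAIE --vgname vg_shared --shared 1 --content images,rootdir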



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Thank you very much for your reply. It is therefore impossible to take snapshots on shared storage in a Proxmox cluster, as it only manages LVM for this. Not being able to use snapshots is a problem for us.
 
There are other "shared storage" solutions that support snapshots.
Search the forum/Google for OCFS2 implementation/configuration.
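As a rough outline only (package names, cluster.conf contents and mkfs options should be verified against those guides), an OCFS2 setup on the same multipath LUN would look something like this:
Bash:
# rough outline -- verify against the OCFS2 guides before using
apt install ocfs2-tools
# /etc/ocfs2/cluster.conf must list every node and be identical on all of them
systemctl enable --now o2cb ocfs2
mkfs.ocfs2 -L vmstore -N 4 /dev/mapper/mpatha   # run once; -N = number of node slots
# mount it on every node and add it to Proxmox as a "directory" storage marked shared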
That's disingenuous... no one supports that.

If we're talking about self-support, snapshot support is available via your Compellent interface, but understand that there is no tooling included to control guest quiescence or the reversion/remounting of those snapshots. You'll need to build and support either of these yourself.
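As an illustration of the quiescence part only, the QEMU guest agent can be driven by hand around an array-side snapshot (VMID 100 is just an example, and the agent must be installed and enabled in the guest):

Bash:
# freeze the guest's filesystems via the QEMU guest agent
qm guest cmd 100 fsfreeze-freeze
# ... trigger the Compellent snapshot from the array side here ...
qm guest cmd 100 fsfreeze-thaw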

--edit: I misunderstood "support" in your original context, and then proceeded to confuse "support" in the "does it work" sense with "support" in the "will someone commit to it actually working" sense too :D @bbgeek17 is right, but the point stands.
 
That's disingenuous... no one supports that.
I did not mean it that way. I was merely pointing the OP towards a solution that they can research and make their own choices about. This solution has its fans here on the forum and even some guides, which is more than can be said about writing your own storage support.

In fact, we just recently talked to a prospect that is actively using OCFS2 with commercial support from Oracle.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
