[SOLVED] Shared SAS storage for Proxmox cluster 2023

IRO - Mar 7, 2023
Hi all,

I'd like to build a 3-node cluster with shared SAS storage (a Dell MD3200 with two controllers).
Each server has two connections to the storage, one to each controller. Multipath is working, and all nodes see the storage as if it were a local disk.
I would like to store both virtual machine images and LXC containers on the storage.
I need the following Proxmox features: snapshots, cloning, migration (live for some VMs), migration with snapshots if possible, and HA for some VMs and LXCs.

After reading everything I could find and trying some scenarios, I'm still stuck.
What file system should I use on the SAS storage to support all these features?
 
I would like to store both virtual machine images and LXC containers on the storage.
You have two options:
1) LVM (thick) can be placed on top of shared (SAS, iSCSI, etc.) storage. Proxmox will automatically manage volume creation/assignment.
2) Install, configure, and manage one of the available cluster-aware file systems. Use the resulting space as "directory" type storage in Proxmox and use the QCOW2 file format.
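For illustration, the corresponding /etc/pve/storage.cfg entries might look roughly like this; storage IDs, the VG name and the mount path are placeholders, not something Proxmox creates for you:

Code:
# Option 1: thick LVM on the shared LUN (no snapshots)
lvm: san-lvm
    vgname vg_san
    shared 1
    content images,rootdir

# Option 2: cluster filesystem mounted on every node, used as "directory" storage
dir: san-dir
    path /mnt/san
    content images,rootdir
    shared 1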

I need the following Proxmox features: snapshots, cloning, migration (live for some VMs), migration with snapshots if possible, and HA for some VMs and LXCs.
Snapshots are not possible with LVM-thick (option 1). You will get thick cloning, live migration, and HA to the extent PVE supports them.
You will get snapshots with a cluster file system (option 2).
What file system should I use on the SAS storage to support all these features?
Proxmox doesn't include a supported cluster-aware file system for physically shared storage. You will need to find one that you are comfortable installing and supporting.
https://en.wikipedia.org/wiki/Clustered_file_system

P.S. The most common FS choice is OCFS2: https://manpages.ubuntu.com/manpages/bionic/man7/ocfs2.7.html
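A rough, unsupported sketch of what setting that up could look like; the device name, label and mount path are placeholders, and /etc/ocfs2/cluster.conf must describe all three nodes first:

Code:
apt install ocfs2-tools                            # on every node
# describe the three nodes in /etc/ocfs2/cluster.conf, then:
systemctl enable --now o2cb
mkfs.ocfs2 -L pve-shared -N 3 /dev/mapper/mpatha   # once, from one node
mkdir -p /mnt/san
mount -t ocfs2 /dev/mapper/mpatha /mnt/san         # on every node (or via fstab)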


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Snapshots are not possible with LVM-thick (option 1). You will get thick cloning, live migration, and HA to the extent PVE supports them.
You will get snapshots with a cluster file system (option 2).
Thank you bbgeek17.
Just to be sure, option 2 gives all the functions, right?
And for Proxmox it doesn't matter which FS I choose; it handles all of them as a directory.
 
Just to be sure, option 2 gives all the functions, right?
Yes, the filesystem/QCOW2 combination will give you access to as many options as QCOW2 supports within the Proxmox infrastructure. Everything you listed should be there.
And for Proxmox it doesn't matter which FS I choose; it handles all of them as a directory.
Correct, from the PVE perspective it's just a mountpoint/directory.
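For example, with a VM whose disks are QCOW2 files on that directory storage, the usual snapshot workflow applies (VMID 100 is a placeholder):

Code:
qm snapshot 100 before-upgrade       # create a snapshot
qm listsnapshot 100                  # list snapshots
qm rollback 100 before-upgrade       # roll back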


 
I chose OCFS2 before reading the last reply, but that doesn't really matter, because we don't have a subscription and are just testing Proxmox.
Anyway, it seemed the simplest to try.

So far it works, but I realized that I cannot create snapshots of LXC containers, only of VMs.
According to bbgeek17's explanation that can't be because of OCFS2, as Proxmox doesn't care about the underlying file system; it must be a limitation of the directory storage type. Is that right?
 
I don't normally use LXCs with storage other than Blockbridge, which supports snapshots for both VMs and LXCs. So I had to look this up for you:

https://pve.proxmox.com/wiki/Linux_Container#pct_container_storage

Any storage type supported by the Proxmox VE storage library can be used. This means that containers can be stored on local (for example lvm, zfs or directory), shared external (like iSCSI, NFS) or even distributed storage systems like Ceph. Advanced storage features like snapshots or clones can be used if the underlying storage supports them. The vzdump backup tool can use snapshots to provide consistent container backups.

So LXC relies on the backend storage for taking snapshots, if it is capable.
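Containers use the same CLI pattern as VMs; the commands below (VMID 101 is a placeholder) only succeed if the backing storage supports snapshots:

Code:
pct snapshot 101 before-upgrade      # create a snapshot
pct listsnapshot 101                 # list snapshots
pct rollback 101 before-upgrade      # roll back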


 
...shared external (like iSCSI, NFS)...
That's strange, because I've already tried iSCSI (from another storage box) and couldn't even create containers on it.
According to another page of the same wiki that's normal, as iSCSI can only store VM disks: https://pve.proxmox.com/wiki/Storage:_iSCSI


So LXC relies on the backend storage for taking snapshots, if it is capable.
Ok, to clarify:
Backend capable of taking snapshots means capable in Proxmox, right?
So let's suppose I have a Btrfs disk, not shared, just a local disk. If I added it as Btrfs storage, snapshots would work. But if I added it as a directory instead, no snapshots would be available. (Unfortunately I can't find Btrfs as a storage type in the wiki, so no confirmation from there.)
So no matter what I do with my shared-disk storage, snapshots for containers won't work, because the only way to add it is as a shared directory.
(Unless we count LVM, but that's even more limited in use.)
Is that right?
 
That's strange, because I've already tried iSCSI
I think they mean iSCSI with LVM-thick on top of it, via the native PVE storage plugin.

Backend capable of taking snapshots means capable in Proxmox, right?
Correct. Although you can take snapshots directly on unsupported storage; there is just no integration with PVE, obviously.
If you haven't seen it yet, here is the matrix of supported features for the built-in Proxmox storage drivers: https://pve.proxmox.com/wiki/Storage
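As an example of such a manual, non-integrated snapshot: qemu-img can snapshot a QCOW2 file directly while the VM is stopped (the path is a placeholder):

Code:
qemu-img snapshot -c manual-snap /mnt/san/images/100/vm-100-disk-0.qcow2   # create
qemu-img snapshot -l /mnt/san/images/100/vm-100-disk-0.qcow2               # list
qemu-img snapshot -a manual-snap /mnt/san/images/100/vm-100-disk-0.qcow2   # revert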
Is that right?
I think that's a good summary of what you have to work with.

 
You need to use classic LVM-thick (not thin) and check the "Shared" box in the storage options.

You need to create an LVM volume group first, under Host -> Disks, before adding the storage in Datacenter -> Storage.
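The CLI equivalent would be roughly as follows (device, VG and storage names are placeholders):

Code:
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha
pvesm add lvm san-lvm --vgname vg_san --shared 1 --content images,rootdir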
 
OP is searching for a solution that gives him shared + snapshots; LVM-thick only provides the first part.


There is no solution currently. (Personally I would avoid the OCFS2/GFS2 cluster filesystems; I tried them some years ago and only had locking problems.)

Another possibility, but overkill, is building Ceph on top of the SAN ^_^
 
There is no solution currently.
Well, yours truly begs to differ :-) we provide a shared storage solution with snapshots, clones, and soon more.

Within the constraints of @IRO's infrastructure, and assuming OCFS2 works perfectly, the other solution is not to run containers from PVE. Build a VM with sufficient resources and use Docker/Rancher/etc., which provide snapshot capability, if that's a requirement.


 
Well, yours truly begs to differ :-) we provide a shared storage solution with snapshots, clones, and soon more.

I mean, with a basic Dell MD3200 without any kind of API. ^_^


Another cheap solution could be two Synology units with synchronous replication and NFS 4.1 multipathing.
I have 3-4 customers using it; failover works without disruption.

Or a ZFS array.
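For the NFS 4.1 variant above, a hypothetical /etc/pve/storage.cfg entry could look like this (server address, export and storage ID are placeholders):

Code:
nfs: syno-nfs
    server 192.0.2.10
    export /volume1/pve
    path /mnt/pve/syno-nfs
    content images,rootdir
    options vers=4.1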


(Off-topic: do you already have a Blockbridge reseller in Europe? A customer asked me about it last month.)
 
Another cheap solution could be two Synology units with synchronous replication and NFS 4.1 multipathing.
I have 3-4 customers using it; failover works without disruption.
As far as I know there are no container snapshots on NFS either.
At least, that's what's in my notes from my tests.

Or a ZFS array.
ZFS replication seemed nice, but it can't migrate VMs with snapshots.

That's why I came here asking: none of the solutions I found gave all the features I asked for.

Ceph is something I'll definitely try, just for fun, but I can't use it on the current infrastructure.

This Blockbridge you are talking about, I bet, uses some custom filesystem plugin or something like that.
Otherwise it would face the same limitations as all the other storage types.
 
This Blockbridge you are talking about, I bet, uses some custom filesystem plugin or something like that.
Otherwise it would face the same limitations as all the other storage types.
Blockbridge is a Software Defined Storage product that provides datacenter-grade iSCSI and NVMe/TCP block storage (i.e., high-performance, high-availability, transparent upgrades, etc.). Our product is designed for complex automation, using an API-first approach (but of course, we have a nice GUI).
For Proxmox, we actively develop and test a storage plugin that natively integrates with the low-level Proxmox API. We studied the failure modes and scale issues and tailored our integration to address them the simplest way possible. We've backed this with continuous validation and fault testing to ensure every release works as expected.
Our plugin lets us leverage advanced features in our storage and automates complex tasks like multipathing. In Proxmox, you can reliably perform all operations (i.e., provisioning, attach, delete, HA, migration, snapshot, clone, rollback, etc.) using any of the Proxmox interfaces (API, CLI, GUI). On the storage side, the benefits are the features we implement (thin provisioning, thin snapshots, thin clones, secure multi-tenancy, QoS, data reduction, etc.). There are no kernel drivers or similar software.
So we're neither a cluster filesystem nor a custom filesystem plugin. We're a smart block storage system that uses the native capabilities of Proxmox.


 
What if you just drop the container requirement and go fully virtualized? Then OCFS2 with the QCOW2 format will be sufficient for everything. I don't see any point in running containers in an HA cluster: you cannot live-migrate containers, so this is a no-brainer for me. Without live migration, I can't have HA.
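With QCOW2 on shared directory storage, online migration of a VM is then a one-liner (VMID and target node name are placeholders):

Code:
qm migrate 100 pve-node2 --online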
 
What if you just drop the container requirement and go fully virtualized? Then OCFS2 with the QCOW2 format will be sufficient for everything. I don't see any point in running containers in an HA cluster: you cannot live-migrate containers, so this is a no-brainer for me. Without live migration, I can't have HA.
Correct.

We have been using Proxmox with HPE shared SAS storage for years, running a GFS2 file system with DLM.

But it's tricky, with a lot of learning curve; that is why we moved all new deployments to Ceph to ease maintenance.
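For reference, a rough sketch of the GFS2 format step, assuming corosync and DLM are already configured (cluster name, filesystem name, journal count and device are placeholders):

Code:
apt install gfs2-utils dlm-controld
mkfs.gfs2 -p lock_dlm -t pvecluster:san-gfs2 -j 3 /dev/mapper/mpatha   # once
mount -t gfs2 /dev/mapper/mpatha /mnt/san                              # on every node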
 
What if you just drop the container requirement and go fully virtualized? Then OCFS2 with the QCOW2 format will be sufficient for everything. I don't see any point in running containers in an HA cluster: you cannot live-migrate containers, so this is a no-brainer for me. Without live migration, I can't have HA.
It seems a logical approach now that I know there is no storage system fulfilling all the requirements. (I mean, except Blockbridge. :) )
Thanks everyone for the help.
 
It seems a logical approach now that I know there is no storage system fulfilling all the requirements. (I mean, except Blockbridge. :) )
Thanks everyone for the help.
Hi IRO, what are your conclusions after several months? Does this solution work reliably, and in a reasonably straightforward way (not a management pain)?
 
