Advice for a redundant SAN?

yena

What is the best Open Source solution to create a SAN for a 13-node Proxmox cluster?
I wouldn't want to use a product where, after a year, I find the license cost has doubled or the company has gone bankrupt ;-)

In the past I tried Ceph, but the snapshots were much, much slower than ZFS and over time the performance degraded a lot;
then, when it had to do its rebalancing operations, everything slowed down considerably.
I would be more inclined towards something based on ZFS or DRBD (LINSTOR).

Thanks to anyone willing to share their experience!
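For what it's worth, the DRBD route integrates with Proxmox VE through LINBIT's linstor-proxmox plugin. A minimal sketch of the /etc/pve/storage.cfg entry it uses, assuming the plugin is installed and a LINSTOR controller is reachable (controller address and resource group below are placeholders):

    drbd: linstor-storage
            content images, rootdir
            controller 192.0.2.5
            resourcegroup defaultpool

The plugin then creates one replicated DRBD resource per virtual disk through the LINSTOR controller.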
 
Ceph is definitely the way to go for your project. I can't make sense of the problems you describe; I have never experienced this with clusters of 700 OSDs or of 30 OSDs. Tell us more about your systems' hardware, network, etc.
 
What is the best Open Source solution to create a SAN for a 13-node Proxmox cluster?
The first thing to do is to define what exactly "SAN for Proxmox" means for you. Historically, a SAN is block-level storage made available over a fabric connection, i.e. Fibre Channel and today mostly Ethernet. There are several protocols that can be used; the most common today are iSCSI and NVMe/TCP.

That said, many people conflate NAS (CIFS, NFS) and SAN and use the terms interchangeably.

Next you need to decide whether you want to go the hyper-converged route or the isolated-storage approach. Your choices will become more obvious as you define the requirements.
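To make the block-level option concrete: a shared iSCSI LUN consumed as thick LVM ends up in /etc/pve/storage.cfg roughly like the sketch below (portal, target, volume group and LUN identifier are placeholders, not a recommendation for any particular array):

    iscsi: san-iscsi
            portal 192.0.2.10
            target iqn.2003-01.org.example:storage.lun1
            content none

    lvm: san-lvm
            vgname vg_san
            base san-iscsi:0.0.0.scsi-<lun-wwid>
            content images
            shared 1

Thick LVM on iSCSI is shared but has no snapshot support, which is exactly the trade-off that comes up later in this thread.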


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
We've been using Fujitsu Eternus SANs for many, many years without any problems. Those are cheap, dumb boxes providing block storage without any additional costs or bells and whistles. You get what you pay for, all redundant (two controllers, two PSUs, two connect modules) in a 2U unit with 24x 2.5'' drives.
 
@LnxBil But then you have a bit of vendor lock-in: the devices reach EOL at some point and there are no more updates and patches. If a component breaks, you have to buy special parts that may no longer be freely available. With EMC it's almost impossible to get to the OS if you lose your disks. Creating extensive metrics etc. is often not even possible with such devices. I like EMC too, but somehow I like my CEPH more because I can control everything.
 
But then you have a bit of vendor lock-in: the devices reach EOL at some point and there are no more updates and patches. If a component breaks, you have to buy special parts that may no longer be freely available.
The devices have an EOL of at least 7 years including updates. Most customers rotate their hardware every 4 to 5 years, so it's not a problem for them.
Even if you do run the hardware longer (we do), you can get used parts on eBay or use an after-market support vendor if you really need to. We have over 20 of them in production and they just work, mostly because we use them as just dumb storage. That is the only reason we buy them, and it serves us very well. Never had a problem with them except the GBICs.

Also, you can keep using the hardware if you take it apart. We use the drive enclosures to build our PBS system. They are dual-port SAS enclosures that can easily be connected to commodity enterprise hardware. You just need to reformat the disks from 520-byte to 512-byte sectors and you are good to go; that worked for every vendor I tried. We also built a ZFS SAN out of a very old, two-man-high stack of EVA6400 enclosures for fun.
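For the 520-to-512-byte reformat, a minimal sketch with sg3_utils (the device name is an example; the format wipes the disk and can take many hours per drive):

    # Debian/Proxmox: install the SCSI generic utilities
    apt install sg3-utils

    # check the current logical block size of the drive
    sg_readcap --long /dev/sdX

    # low-level format to 512-byte sectors (destroys all data on the drive)
    sg_format --format --size=512 /dev/sdX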

With EMC it's almost impossible to get to the OS if you lose your disks.
Are you talking about the OS that has to be installed on the first 3 or 4 disks of your array? If memory serves, this was true on the CLARiiON. I have to say, though, that we do not use EMC anymore.

Creating extensive metrics etc. is often not even possible with such devices.
That is true. Metrics are a bit limited.

I like EMC too, but somehow I like my CEPH more because I can control everything.
Yes, but that is distributed shared storage and not dedicated shared storage.
 
We've been using Fujitsu Eternus SANs for many, many years without any problems. Those are cheap, dumb boxes providing block storage without any additional costs or bells and whistles. You get what you pay for, all redundant (two controllers, two PSUs, two connect modules) in a 2U unit with 24x 2.5'' drives.
Hello, thanks for your ongoing contributions and valuable input. This comment stopped me: I looked at those units (DX60 S2) and I see they are available. What software do you use with them, or do you use the factory OS?
Do you need a license to use the factory OS?

Are you able to offer iSCSI / NFS from these as redundant storage?

Thanks in advance
 
We've been using Fujitsu Eternus SANs for many, many years without any problems. Those are cheap, dumb boxes providing block storage without any additional costs or bells and whistles. You get what you pay for, all redundant (two controllers, two PSUs, two connect modules) in a 2U unit with 24x 2.5'' drives.

What kind of filesystem do you use on it?
Only LVM, or also ZFS?

Thanks
 
We use DX100 and DX200 units, S3 through S5, with the default OS that comes with the SAN. It's a black box, and you can have FC, iSCSI and also file services via NFS.


We use it in a cluster, so just thick LVM. We run an HA VM on it that exports its virtual disks as ZFS-over-iSCSI to the same cluster.
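For anyone wanting to picture that last part: a ZFS-over-iSCSI entry in /etc/pve/storage.cfg looks roughly like the sketch below (portal, target, pool name and the LIO provider are placeholders for whatever the storage VM actually exposes):

    zfs: ha-zfs
            portal 192.0.2.20
            target iqn.2001-04.org.example:ha-zfs
            pool tank
            iscsiprovider LIO
            content images
            sparse 1

Unlike thick LVM, this storage type gives you snapshots and thin provisioning, because the volumes live as ZVOLs on the ZFS pool inside that VM.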


Thanks, interesting. 10G for iSCSI seems low to me; do you use multipath?
If I have to increase the number of disks, will the operation go smoothly or will I risk breaking everything? I'm used to ZFS, where I add groups of disks (vdevs) to a pool in a very simple and painless way.
Is snapshot speed comparable to ZFS, or slower?

Cheers!
 
Thanks, interesting. 10G for iSCSI seems low to me; do you use multipath?
Yes, and we use 32 Gb FC.

If I have to increase the number of disks, will the operation go smoothly or will I risk breaking everything?
No risk; you can add hardware (disks and shelves) dynamically.

I'm used to ZFS, where I add groups of disks (vdevs) to a pool in a very simple and painless way.
Is snapshot speed comparable to ZFS, or slower?
We do not use snapshots at the SAN level.
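Since multipath came up: on the iSCSI side, a minimal setup on each PVE node might look like the sketch below (the Debian package name is real; the configuration values are just common defaults, not tuned for any specific array):

    # install the multipath tooling
    apt install multipath-tools

    # /etc/multipath.conf -- minimal example
    defaults {
        user_friendly_names yes
        find_multipaths     yes
    }

    # apply and inspect the discovered paths
    systemctl restart multipathd
    multipath -ll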
 
Yes, and we use 32 Gb FC.

No risk; you can add hardware (disks and shelves) dynamically.

We do not use snapshots at the SAN level.

So you recommend using FC rather than iSCSI... is the lower latency noticeable, or is it for some other reason?
If you don't use snapshots, do you only work with backups?

If I wanted to build a test environment made up of
3 diskless Proxmox nodes and two redundant DX200 S3 SANs,
which FC switches would you recommend?

Thanks again
 
We run an HA VM on it that exports its virtual disks as ZFS-over-iSCSI to the same cluster.
But you run a classic SAN with zoning etc. for storage, don't you? The S5 has a maximum of 8 ports with 32 Gbit FC; if you want to connect it redundantly, you would have a maximum of 4 hosts per DAS.

Or are there, for example, 3 nodes connected directly to the controller via FC, with a VM running in HA mode on exactly those 3 nodes that owns the entire storage of the DAS and then re-exports it via NFS / iSCSI etc. to the remaining cluster nodes?

If it really is the latter option, then you have my full respect if you have such a chicken/egg setup running in production with critical applications.

Personally, investing in Ceph seems to me to be the better choice. To meet the "dedicated" requirement, I can also run only selected VMs in a pool, or I can easily set up a genuinely self-sufficient Ceph cluster in just 3 minutes using Croit.
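For comparison, the hyper-converged route with Proxmox's own tooling is only a handful of commands (the network and device names below are examples; Croit replaces these steps with its own installer):

    # on every node: install the Ceph packages
    pveceph install

    # once per cluster: initialize with a dedicated Ceph network
    pveceph init --network 10.10.10.0/24

    # on at least three nodes: monitors and managers
    pveceph mon create
    pveceph mgr create

    # on every node: turn empty disks into OSDs
    pveceph osd create /dev/nvme0n1

    # create a pool and register it as PVE storage
    pveceph pool create vmpool --add_storages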
 
But you run a classic SAN with zoning etc. for storage, don't you? The S5 has a maximum of 8 ports with 32 Gbit FC; if you want to connect it redundantly, you would have a maximum of 4 hosts per DAS.
That is correct. Depending on the setup, yes, but in larger systems with switches and zoning.


Personally, investing in Ceph seems to me to be the better choice. To meet the "dedicated" requirement, I can also run only selected VMs in a pool, or I can easily set up a genuinely self-sufficient Ceph cluster in just 3 minutes using Croit.
Yes, if you only need the storage for PVE, I'd also go with Ceph and at least 24 SFF bays per node for vertical scaling.


So you recommend using FC rather than iSCSI... is the lower latency noticeable, or is it for some other reason?
Dedicated network and easier setup.

If you don't use snapshots, do you only work with backups?
Yes and no. We also have a virtualized HA ZFS storage for ZFS-over-iSCSI on top of the LVM.
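In case that helps anyone picture the VM-based ZFS target: the general approach is to hand the VM a few LVM-backed virtual disks, build a pool inside it, and export that pool over iSCSI (disk and pool names below are examples, not necessarily how this particular setup is built):

    # inside the storage VM: create a pool from the attached virtual disks
    zpool create -o ashift=12 tank /dev/sdb /dev/sdc
    zfs set compression=lz4 tank

    # then export the pool with an iSCSI target (e.g. LIO via targetcli) and
    # point a zfs-over-iscsi entry in /etc/pve/storage.cfg at this VM, as
    # sketched earlier in the thread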
 
